We have gradient-free algorithms: Hebbian learning. Since 1949?
And there are good reasons why we use gradients today.
That's more of a theory/principle than an algorithm by itself.
It is an update rule:
W_ij ← f(W_ij, x_i, x_j)
The weight of the connection between nodes i and j is updated by a function of the current weight and the activations or inputs of nodes i and j.
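A minimal sketch of one Hebbian step in Python (the plain outer-product form and the learning rate are my assumptions; variants such as Oja's rule add a decay/normalization term):

    import numpy as np

    def hebbian_update(W, x, lr=0.01):
        # Classic "fire together, wire together": strengthen W_ij in
        # proportion to the co-activation x_i * x_j.
        return W + lr * np.outer(x, x)

    # Toy usage: 4 units driven by random activations
    rng = np.random.default_rng(0)
    W = np.zeros((4, 4))
    for _ in range(100):
        x = rng.standard_normal(4)
        W = hebbian_update(W, x)

Note that the update uses only locally available quantities (the current weight and the two activations), which is exactly why no gradient is needed.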
There are many variants of backpropagation too.
Regardless, yes, it would be used within a network model such as a Hopfield network.
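For concreteness, a hedged sketch of Hebbian storage in a Hopfield network (the function names and the synchronous update loop are my own choices; the classic formulation updates units asynchronously):

    import numpy as np

    def hopfield_train(patterns):
        # Hebbian storage of +/-1 patterns: sum of outer products,
        # with the diagonal zeroed so units have no self-connections.
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0.0)
        return W

    def hopfield_recall(W, x, steps=10):
        # Iterate the sign rule; stored patterns are fixed points.
        for _ in range(steps):
            x = np.sign(W @ x)
        return x

    # Store one pattern, then recall it from a corrupted copy
    p = np.array([[1, -1, 1, -1, 1, -1]], dtype=float)
    W = hopfield_train(p)
    noisy = p[0].copy()
    noisy[0] *= -1                     # flip one bit
    print(hopfield_recall(W, noisy))   # recovers p[0]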
See also https://en.m.wikipedia.org/w/index.php?title=Generalized_Heb...