Check out feedback alignment. Instead of propagating the output error backwards through the transposed forward weights (as backprop does), you send it to earlier layers through a fixed random matrix, and the forward weights eventually align with that feedback matrix well enough for learning to work.
It's certifiably insane that it works at all. And it's not even vaguely backprop, though if you really wanted to stretch the definition you could say the feedforward layers align to take advantage of a synthetic gradient in a way that approximates backprop.
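Roughly, it looks like this. A minimal NumPy sketch of the idea for a two-layer network; the toy task, layer sizes, and learning rate here are made up, and the only change from vanilla backprop is the line that uses the fixed random matrix `B` in place of `W2.T`:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer net: x -> h = tanh(W1 x) -> y = W2 h  (hypothetical toy sizes)
n_in, n_hid, n_out = 20, 64, 10
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))

# Fixed random feedback matrix; exact backprop would use W2.T here
B = rng.normal(0, 0.1, (n_hid, n_out))

lr = 0.01
for step in range(1000):
    # Arbitrary toy regression target, just to have a loss to reduce
    x = rng.normal(size=n_in)
    target = np.sin(x[:n_out])

    # Forward pass
    h = np.tanh(W1 @ x)
    y = W2 @ h

    # Output error (gradient of 0.5 * ||y - target||^2 w.r.t. y)
    e = y - target

    # Feedback alignment: send the error back through the fixed random B,
    # not through W2.T as backprop would
    delta_h = (B @ e) * (1 - h**2)  # tanh'(z) = 1 - tanh(z)^2

    # Same local update rule as backprop, just a different error signal
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
```

The weird part is that `B` never changes; it's the forward weights that drift into alignment with it, so the random feedback ends up pointing in a useful descent-ish direction.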