Each layer on its own is essentially a perceptron: an affine transformation of its inputs, not fundamentally different from a generalized linear model.

The ‘secret sauce’ in a deep network is the hidden layers with non-linear activation functions. Without the non-linearity, the composition of any number of layers collapses algebraically to a single linear model.
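A minimal NumPy sketch of that collapse (the weight shapes and values here are arbitrary, chosen purely for illustration): two stacked linear layers with no activation are exactly equivalent to one linear layer whose weight matrix is the product of the two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with no activation function (illustrative shapes: 3 -> 4 -> 2).
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

x = rng.normal(size=3)

# Passing x through both layers...
deep_output = W2 @ (W1 @ x)

# ...is identical to one linear layer with the combined weight matrix.
W_combined = W2 @ W1
single_output = W_combined @ x

assert np.allclose(deep_output, single_output)

# Insert a non-linearity (here ReLU) between the layers and the
# composition is no longer expressible as a single matrix multiply.
relu = lambda z: np.maximum(z, 0)
nonlinear_output = W2 @ relu(W1 @ x)
```

The same argument extends to biases (the composition of affine maps is affine) and to any depth, which is why at least one non-linear hidden layer is needed to gain expressive power over a linear model.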