I don't think either philosophical conceptions of consciousness or theories of computational complexity count as even "efforts to formalize intelligence". They are each focused on something significantly different.
The closest effort I know of, as far as characterizing intelligence as such, is Stephen Smale's 18th problem.
The Wikipedia article is pretty useless here.
The original paper is better, but still seems too vague to be useful. Where it isn't vague, it seems to point pretty strongly to computability/complexity theory.
Intelligence means many different things to different people. If we just gesture vaguely at it, we aren't going to get anywhere; everyone will just talk past each other.
Yeah, Smale is a very smart person, but his stuff indeed seems as much a vague gesture as the other efforts. I feel like neural networks have succeeded primarily because of the failure of theorists, developers, etc. to create any coherent theory of intelligence aside from formal logic (or, per Pearl, formal probability). Nothing captures the ability of thinking to use very rough approximations. Nothing explains or accounts for Moravec's Paradox, etc.