LLMs are generally still pretty bad at (deductive) reasoning. IME they follow surface cues like variable names and comments more than the actual program logic (it would be an interesting experiment to compare an LLM's understanding of three versions of the same program: one with normal identifiers, one with obfuscated identifiers, and one with deliberately misleading identifiers; a sketch of such a probe follows below). I also think this particular comparison comes down to typing, which helps keep an LLM's reasoning from going astray.
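A minimal sketch of what that experiment could look like, assuming some LLM client you'd plug in yourself (the `query_llm` call is a hypothetical placeholder, and the example function is mine, not from the original comment). All three variants compute the second-largest element; only the identifiers differ.

```python
# Probe: the same logic under three identifier schemes.
PROGRAMS = {
    "normal": """
def second_largest(numbers):
    top, runner_up = float('-inf'), float('-inf')
    for n in numbers:
        if n > top:
            top, runner_up = n, top
        elif n > runner_up:
            runner_up = n
    return runner_up
""",
    "obfuscated": """
def f(xs):
    a, b = float('-inf'), float('-inf')
    for x in xs:
        if x > a:
            a, b = x, a
        elif x > b:
            b = x
    return b
""",
    # Identifiers claim "minimum", but the logic still finds the second largest.
    "misleading": """
def smallest_value(values):
    minimum, second_minimum = float('-inf'), float('-inf')
    for v in values:
        if v > minimum:
            minimum, second_minimum = v, minimum
        elif v > second_minimum:
            second_minimum = v
    return second_minimum
""",
}

PROMPT = "What does this function return for the input [3, 1, 4, 1, 5]?\n\n{code}"

for label, code in PROGRAMS.items():
    prompt = PROMPT.format(code=code)
    # answer = query_llm(prompt)  # hypothetical LLM call; swap in your own client
    print(f"--- {label} ---\n{prompt}")
```

The correct answer is 4 in all three cases; if the model answers 1 for the misleading variant, it was reading the names rather than the logic.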
When we reason, we typically need to propagate constraints to arrive at a solution that satisfies them. I think the best language to reason in could be something like Lean, which allows both the constraints and the actual code to be expressed at the same time (see the small sketch below), though this might not hold for current LLMs, for the reasons above.
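To illustrate what "constraints and code in the same language" means, here's a tiny Lean 4 sketch (an example of mine, not from the original comment): an ordinary function plus a theorem constraining its output, checked by the same tool.

```lean
-- Code: an ordinary function.
def double (n : Nat) : Nat := n + n

-- Constraint: the result is always even, stated and proved in the same language.
theorem double_even (n : Nat) : ∃ k, double n = 2 * k :=
  ⟨n, by simp [double, Nat.two_mul]⟩
```

A reasoner working in such a language can't silently drop a constraint: the proof simply fails to check.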
wait till you look inside a neural network and realize it's incapable of deductive reasoning! amazing how many devs who talk about "AI" would have a hard time telling deductive and inductive reasoning apart.
That's actually untrue. Yes, training a neural network is mostly an inductive process. However, the ability of LLMs to reason deductively (via chain of thought, though that's probably not the only mechanism) is an emergent phenomenon, arising from training on data and problems that exhibit deductive reasoning.
But of course, because the deductive reasoning is taught inductively, the model may pick up shortcuts that compromise its soundness. Hence my claim: LLMs are not as good at deduction as dedicated algorithms (which are sound by construction, as in the toy example below), although they have many other strengths that make up for it.
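For contrast, here's a minimal forward-chaining sketch over Horn-style rules (the facts and rule names are illustrative): it derives exactly the conclusions its premises license, nothing more.

```python
# Forward chaining: repeatedly fire rules whose premises all hold.
# Unlike an LLM, this can never derive a conclusion its rules don't entail.
facts = {"socrates_is_a_man"}
rules = [
    # man(socrates) -> mortal(socrates)
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'socrates_is_a_man', 'socrates_is_mortal'}
```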
How so?