>but obviously, there seems to be more than that.
I don't see how that's obvious. I'm not trying to be argumentative here, but it seems like these arguments always come down to qualia, or the insistence that humans have some sort of 'spark' that machines don't have, therefore: AGI is not possible since machines don't have it.
I also don't understand the argument that "Your nuanced, subtly ironic and self referential way of formulating clearly suggests that you are not a purely algorithmic entity". How does that follow?
What scientific evidence is there that we are anything other than a biochemical machine? And if we are a biochemical machine, how is that inherently capable of more than a silicon based machine is capable of?
> I also don't understand the argument that "Your nuanced, subtly ironic and self referential way of formulating clearly suggests that you are not a purely algorithmic entity". How does that follow?
It doesn't follow.
Trivially demonstrated by the early LLM that got Blake Lemoine to break his NDA also emitting words which suggested to Lemoine that the LLM had an inner life.
Or, indeed, the output device y'all are using to read or listen to my words, which is also successfully emitting these words despite only following an algorithm that recreates what it was told to recreate. "Ceci n'est pas une pipe", etc. https://en.wikipedia.org/wiki/The_Treachery_of_Images
Consciousness is an issue. If you write a program to add 2+2, you probably do not believe some entity poofs into existence, perceives itself as independently adding 2+2, and then poofs out of existence. Yet somehow, the idea of emergent consciousness is that if you instead get it to do 100 basic operations, or perhaps 2^100, then suddenly this becomes true? The reason one might believe this is not because it's logical or reasonable - or even supported in any way - but because people assume their own conclusion. In particular, if one takes a physicalist view of the universe, then consciousness must be a physical process, and so it simply must emerge at some sufficient degree of complexity.
But if you don't simply assume physicalism, this logic falls flat. And the more we discover about the universe, the weirder things become. How insane would you have sounded, not that long ago, suggesting that time itself moves at different rates for different people at the same "time", just to maintain a perceived constancy of the speed of light? It sounds like nonsense, but it's real. So I'm quite reluctant to assume my own conclusion on anything regarding the nature of the universe. Even relatively 'simple' things like quantum entanglement already pose very difficult issues for a physicalist view of the universe.
My issue is that from a scientific point of view, physicalism is all we have. Everything else is belief, or some form of faith.
Your example about relativity is a good one. It might have sounded insane at some point, but it turns out to be physics, which fits neatly within physicalism.
If there is a falsifiable scientific theory that there is something other than a physical mechanism behind consciousness and intelligence, I haven't seen it.
Boltzmann brains and A. J. Ayer's "There is a thought now".
Ages ago, it occurred to me that the only thing that seemed to exist without needing a creator, was maths. That 2+2 was always 4, and it still would be even if there were not 4 things to count.
Basically, I independently arrived at a similar conclusion to Max Tegmark's, only simpler and without his level of rigour: https://benwheatley.github.io/blog/2018/08/26-08.28.24.html
(From the quotation's date stamp, 2007, I had only finished university 6 months earlier, so don't expect anything good).
But as you'll see from my final paragraph, I no longer take this idea seriously, because anything that leads to most minds being free to believe untruths is cognitively unstable, by the same argument that applies to Boltzmann brains.
MUH leads to an aleph-one infinity of brains*. I'd need a reason for the probability distribution over minds to be zero almost everywhere in order for it to avoid the cognitive instability argument.
* if there is a bigger infinity, then more; but I have only basic knowledge of transfinites and am unclear if the "bigger" ones I've heard about are considered "real" or more along the lines of "if there was an infinite sequence of infinities, then…"
Oh no, I am not at all trying to find an explanation of why this is (qualia etc.). There is simply no necessity for that. It is interesting, but not part of the scientific problem that I tried to find an answer to.
The proofs (all three of them) hold without any explanatory effort concerning causalities around human frame-jumping etc.
For this paper, it is absolutely sufficient to prove that a) this cannot be reached algorithmically, and that b) evidence clearly shows that humans can (somehow) do this, as they have already done so (quite often).
> this cannot be reached algorithmically
> humans can (somehow) do this
Is this not contradictory?
Alternatively, in order to not be contradictory doesn't it require the assumption that humans are not "algorithmic"? But does that not then presuppose (as the above commenter brought up) that we are not a biochemical machine? Is a machine not inherently algorithmic in nature?
Or at minimum presupposes that humans are more than just a biochemical machine. But then the question comes up again, where is the scientific evidence for this? In my view it's perfectly acceptable if the answer is something to the effect of "we don't currently have evidence for that, but this hints that we ought to look for it".
All that said, does "algorithmically" here perhaps exclude heuristics? Many times something can be shown to be unsolvable in the absolute sense yet readily solvable with extremely high success rate in practice using some heuristic.
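For a concrete sense of what I mean (my own sketch, nothing from the paper; the state encoding and `step` function are made up for illustration): for the restricted class of deterministic programs whose reachable states fit in memory, halting is decidable outright by recording visited states, even though the unrestricted problem is not.

    def halts(step, start):
        """Decide halting for a deterministic transition system over
        hashable states: step(s) returns the next state, or None to halt.
        Exact for finite state spaces; the general problem is undecidable."""
        seen = set()
        state = start
        while state is not None:
            if state in seen:
                return False   # a repeated state in a deterministic system means an infinite loop
            seen.add(state)
            state = step(state)
        return True            # reached a terminal state

    # Example: does repeatedly squaring mod 91, starting at 3, ever reach 1?
    print(halts(lambda s: None if s == 1 else (s * s) % 91, 3))  # False: it cycles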
> Alternatively, in order to not be contradictory doesn't it require the assumption that humans are not "algorithmic"? But does that not then presuppose (as the above commenter brought up) that we are not a biochemical machine? Is a machine not inherently algorithmic in nature?
No, computation is algorithmic; real machines are not necessarily. (Of course, AGI still can't be ruled out even if algorithmic intelligence is; the only thing ruled out would be an AGI that does not incorporate some component with noncomputable behavior.)
> computation is algorithmic, real machines are not necessarily
The author seems to assume the latter is definitive, i.e. that real machines are not algorithmic, and then derives extrapolations from that unproven assumption.
> No, computation is algorithmic, real machines are not necessarily
As the adjacent comment touches on, are the laws of physics (as understood to date) not possible to simulate? Can't all possible machines be simulated, at least in theory? I'm guessing my knowledge of the term "algorithmic" is lacking here.
Using computational/algorithmic methods we can simulate nonalgorithmic systems. So the world within a computer program can behave in a nonalgorithmic way.
Also, one might argue that universe/laws of physics are computational.
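A toy illustration of that first point (my own sketch): the program below is a pure, seeded algorithm, yet the miniature world it computes - atoms decaying at random - looks just like the apparently nondeterministic physical process it models. Whether that resemblance counts as "behaving nonalgorithmically" is exactly what's in dispute.

    import random

    def decay_step(atoms, p, rng):
        # Count the atoms that survive this tick (each decays with probability p).
        return sum(1 for _ in range(atoms) if rng.random() >= p)

    rng = random.Random(42)   # fully deterministic under the hood
    atoms = 10_000
    for tick in range(5):
        atoms = decay_step(atoms, 0.10, rng)
        print(tick, atoms)    # looks stochastic from inside the simulation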
OP seems to have a very confused idea of what an algorithmic process means... they think the process of humans determining what is truthful "cannot possibly be something algorithmic".
Which is certainly an opinion.
> whatever it is: it cannot possibly be something algorithmic
https://news.ycombinator.com/item?id=44349299
Maybe OP should have looked at a dictionary for what certain words actually mean before defining them to be something nonsensical.
> For this paper, it is absolutely sufficient to prove that a) this cannot be reached algorithmically, and that b) evidence clearly shows that humans can (somehow) do this, as they have already done so (quite often).
The problem with these kinds of arguments is always that they conflate two possibly related but non-equivalent kinds of computational problem solving.
In computability theory, an uncomputability result essentially only proves that it's impossible to have an algorithm that will in all cases produce the correct result for a given problem. Such an impossibility result is valuable as a purely mathematical result, but also because what computer science generally wants is a provably correct algorithm: one that will, when performed exactly, always produce the correct answer.
However, as with any mathematical claim, a single counter-example is enough to refute a claim of correctness. Showing that an algorithm fails in a single corner case makes the algorithm incorrect in the classical sense. Similarly, for a computational problem, showing that any purported algorithm will inevitably fail in at least one case is enough to prove the problem uncomputable -- again, in the classical computability theory sense.
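To make the single-case point concrete, here is the standard diagonal construction sketched in Python. `halts` is the hypothetical decider under refutation, not a real function - that's the whole point.

    # Suppose, for contradiction, someone hands us a total decider:
    # halts(f) returns True iff calling f() eventually terminates.
    def adversary():
        if halts(adversary):   # the hypothetical decider, assumed to exist
            while True:        # it said we halt, so loop forever
                pass
        return                 # it said we loop forever, so halt immediately

    # Whatever halts(adversary) answers, adversary() does the opposite.
    # One constructed input defeats every candidate algorithm, which is
    # exactly the single failing case the uncomputability proof needs.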
If you cannot have an exact algorithm, for either theoretical or practical reasons, and you still want a computational method for solving the problem in practice, you then turn to heuristics or something else that doesn't guarantee correctness but which might produce workable results often enough to be useful.
Even though something like the halting problem is uncomputable in the classical, always-inevitably-produces-the-correct-answer-in-finite-time sense, that does not necessarily stop it from being solved in a subset of cases, or from being solved often enough by some kind of heuristic or non-exact algorithm to be useful.
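A runnable example of that pattern (Collatz is my own choice of illustration, not anything from the paper): whether the iteration below halts for every n is a famous open problem, yet a simple step budget settles it for any particular n we try - definite answers on a subset of cases, "unknown" on the rest.

    def collatz_halts(n, budget=10_000):
        """Return True if the Collatz iteration from n reaches 1 within
        `budget` steps, or None (no verdict) if the budget runs out."""
        for _ in range(budget):
            if n == 1:
                return True
            n = 3 * n + 1 if n % 2 else n // 2
        return None   # might loop forever, might just be slow

    print(collatz_halts(27))             # True: reaches 1 after 111 steps
    print(collatz_halts(27, budget=5))   # None: budget too small to decide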
When you say that something cannot be reached algorithmically, you're saying it's impossible to have an algorithm that would inevitably, systematically, always reach that solution in finite time. And you would in many cases be correct. Symbolic AI research ran into this problem due to the uncomputability of reasoning in predicate logic. (Uncomputability was not the main problem that symbolic AI ran into, but it was one of the problems.)
The problem is that when you say that humans can somehow do this computationally impossible thing, you're not holding human cognition or problem solving to the same standard of computational correctness. We do find solutions to problems, answers to questions, and logical chains of reasoning, but we aren't guaranteed to.
You do seem to be aware of this, of course.
But you then run into the inevitable question of what you mean by AGI. If you hold AGI to the standard of classical computational correctness, to which you don't hold humans, you're correct that it's impossible. But you have also proven nothing new.
A more typical understanding of AGI would be something similar to human cognition -- not having formal guarantees, but working well enough for operating in, understanding, and producing useful results in the real world. (Human brains do that well in the real world -- thanks to having evolved in it!)
In the latter case, uncomputability results do not prove that kind of AGI to be impossible.
> What scientific evidence is there that we are anything other than a biochemical machine? And if we are a biochemical machine, how is that inherently capable of more than a silicon based machine is capable of
Iron and copper are both metals but only one can be hardened into steel
There is no reason why we should assume a silicon machine must have the same capabilities as a carbon machine
Unless you can show - even a single example would do - that we can compute a function that is outside the Turing computable set, there is a very strong reason to assume that a silicon machine has the same computational capabilities as a carbon machine.
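For what such an example would even look like: the canonical function outside the Turing computable set is the busy beaver function. A sketch, with the caveat that every helper name here is hypothetical - no implementation of the marked line can exist, which is the point:

    def busy_beaver(n):
        """BB(n): the most steps any halting n-state, 2-symbol Turing
        machine takes on a blank tape. Well-defined, yet uncomputable."""
        best = 0
        for m in enumerate_turing_machines(n):   # hypothetical: finite, enumerable
            if halts_on_blank_tape(m):           # hypothetical: an undecidable filter
                best = max(best, steps_until_halt(m))
        return best

Exhibiting a physical system that reliably evaluates BB(n) for arbitrary n would be exactly the single example asked for.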
Yeah, but bronze also makes great swords… what’s the point here?