I think humans have some kind of algorithm for deciding what's true and consolidating information. What that is I don't know.
This paper is about the limits of current systems.
AI currently has issues with seeing what's missing: the negative space.
When dealing with a complex codebase you are newly exposed to, you tackle an issue from multiple angles: you look at the data structures, the code execution paths, and so on. Humans clearly have some pressure to go, fuck, I think I lost the plot, and then approach it from another paradigm, or try to narrow scope, or, based on the increased information, isolate the core place edits need to be made to achieve something.
Basically the ability to say, "this has stopped making sense" and stop or change approach.
Also, we clearly do path exploration and semantic compression in our sleep.
We also have the ability to translate data between semantic and visual structures, time series, and light algorithms (but not exponential algorithms; we have a known blind spot there).
Humans are better at seeing what's missing, better at not reaching premature closure, and better at reducing scope using many different approaches. And because we operate in linear time and there are a lot of very different agents, we collectively nibble away at complex problems over time.
I mean, on a 1:1 telomere basis, due to structural differences, people can be as low as 93% genetically similar.
We also have different brain structures, and I assume they don't all function on a single algorithmic substrate: visual reasoning about words, semantic reasoning about colors, synesthesia, the weird handoff between hemispheres, parts of our brain that handle logic better, parts that handle illogic better. We can introspect on our own semantic saturation; we can introspect that we've lost the plot. We get weird feelings when something seems missing logically, and we can dive on that part and then zoom back out.
There's a whole bunch of shit the brain does because it has a plurality of structures to handle different types of data processing. Even then, the message type used seems flexible enough that you can shove word data into a visual processing part and see what falls out, and this happens without us thinking about it explicitly.
Yep definitely agree with this.
I guess so too... but whatever it is, it cannot possibly be something algorithmic. Therefore it doesn't matter in terms of demonstrating that AI has a boundary there that cannot be transcended by tech, compute, training, data, etc.
Explain what you mean by "algorithm" and "algorithmic". Be very precise. You are hinging your entire argument on this vague word, so it is necessary that you explain first what it means. From reading your replies here, it is clear you are laboring under a definition of "algorithm" quite different from the accepted one.
Why can't it be algorithmic? If the brain uses the same process on all information, then that is an algorithmic process. There is some evidence that it does use the same process for things like consolidating information, processing the "world model", and so on.
Some processes are undoubtedly learned from experience, but considering that people seem to think many of the same things and are similar in many ways, it remains to be seen whether the most important parts are learned rather than innate from birth.
Why can't it be algorithmic?
Why do you think it mustn't be algorithmic?
Why do you think humans are capable of doing anything that isn't algorithmic?
This statement, and the lack of any mention of the Church-Turing thesis in your papers, suggests you're using a non-standard definition of "algorithmic", and your argument rests on it. (Under the standard definition, an algorithm is an effective procedure, and the Church-Turing thesis holds that anything effectively computable is computable by a Turing machine.)