Honestly, based on where this is going, I suspect we might agree more than it appears in this short exchange.
As for the point about complexity, I will meditate on it.
My first cut is that you're using "complexity" in the technical sense of the word, in which case we're in the realm of so-called "wicked problems" and such. There is some kind of 'making sense' that goes beyond 'mere ideas'; at the very least, "throw more minds and hands at the problem" is the wrong attitude in this context.
In such a world, a machine that amplifies human effort (while obscuring it) is not the right tool for the job—more than likely you find yourself spinning around an attractor basin.
I personally have a tentative conclusion that I specifically am able to avoid this by amplifying certain strategies I've developed over my life, but really, I don't have solid confirmation yet.
In any event, I'm enjoying the experiment, but I'll reflect on why you seem to be certain about it.
I agree, and I think we're mostly on the same page, fwiw. There are parts I disagree with, but we're more aligned than misaligned. Still, I do want to clear some things up.
Unfortunately, complexity is a bit more complex than this, and I think that's leading to some miscommunication. Even in technical settings, "complexity" is overloaded: something can be computationally complex but easy to understand, and something can be conceptually complex but trivial to compute. I really do mean "complexity is complex," and I'm using it in both senses.

I'll make a pretty straightforward argument here. As we advance on any given topic, it becomes harder to advance, right? Take something simple like playing a video game. You start from nothing and get a lot better simply by figuring out the controls. But as you progress toward the "professional" level, it gets both computationally harder to advance (it requires more time and training) and conceptually harder (the little details and the number of things you need to consider keep increasing). You have to process more information, and you have to account for more information.

Another way to see this is with a mathematical example: a Taylor series. A first-order approximation is relatively easy to compute, but the cost quickly grows as you demand more accuracy. And the relationship isn't linear...
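The Taylor-series point can be sketched concretely. This is a minimal illustration, not anything from the original exchange: it truncates the Maclaurin series for e^x and counts how many terms are needed to hit tighter and tighter error tolerances. The function names (`taylor_exp`, `terms_needed`) are hypothetical helpers invented for this sketch.

```python
from math import exp, factorial

def taylor_exp(x, n_terms):
    """Partial sum of the Maclaurin series for e^x: sum of x**k / k!."""
    return sum(x**k / factorial(k) for k in range(n_terms))

def terms_needed(x, tol):
    """Smallest number of terms whose error against math.exp falls below tol."""
    true_value = exp(x)
    n = 1
    while abs(taylor_exp(x, n) - true_value) > tol:
        n += 1
    return n

# Each extra digit of accuracy costs additional terms, and each term
# itself costs more to compute (larger powers, larger factorials).
for tol in (1e-1, 1e-4, 1e-8, 1e-12):
    print(f"tolerance {tol:g}: {terms_needed(3.0, tol)} terms")
```

The exact growth rate depends on the function and the expansion point; the general point is just that the marginal cost of accuracy is not constant.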
I can't answer this. I'm just some dude on the internet and have no idea about you, so my response shouldn't be taken that way. What I can do is ask an important question: are you able to confirm you are unique, or are you just tricking yourself?
Either could be true, right? We put mental defenses around ourselves all the time. There's so much crazy shit we treat as mundane: a solar flare can be the size of a dozen Earths, and we act as if our little minds can comprehend that, making it mundane. We like to convince ourselves that we understand things much better than we actually do. That's normal, but recognizing it is important if we want to keep moving forward, right? To question what we believe. Our understanding is never perfect, so there's always more, but our brains like to feel done, because done feels accomplished.
It certainly does not help that we're working with tools trained for human preference. I use them too; I'm an AI researcher, even. But I think we also need to be careful. The same objective we use to train these systems to be accurate and to sound natural also maximizes their ability to deceive us. It's a really tricky thing, so I think it's something we need to be extremely cautious about. If we don't believe we could be the fool, that makes us the biggest fool of them all, right? So how do we find out whether we've been fooled? I don't think that's so easy, and I honestly don't have an answer. There's a lot of complexity here as you dig in, and... complexity is complex.
Yeah, I know what you mean. I do my best to check myself and also run things by people I trust, but there's an ever-present risk I'm going insane.
As a test, I've been attempting an incredibly complex project that goes far beyond my abilities as a kind of deliberate worst-case-scenario. It's more or less a programming language for a very specific purpose that compiles to a custom bytecode and runs on a custom runtime with specific performance guarantees.
I've spent part of the last month iterating on a formal model of the system and various specifications. Along the way, I teach myself to understand and critique whatever part of the system I'm working on; however, I also deliberately keep things just beyond my understanding by opportunistically pulling in concepts from various sources: algebraic topology, obscure corners of PL, concepts plucked from similar systems. It's a complete monstrosity with, by now, hundreds of supporting documents, research spikes, processed references, critique passes, etc.
If I'm able to complete this project and have it work as expected, I think I'll have learned a lot about what is or isn't possible. If the current design does in fact work, I'm fairly confident I'll have advanced the state of the art in the niche field I'm working in.