> Sad for folks inside tech not interested in/working on AI either
Not true. With this much money in play and more coming, other roles will benefit too. The whole tech sector will grow - maybe less than AI-specific roles, but still.
It's better than the alternative of no or negative growth.
Some comments assume the funds exist and would be spent elsewhere in the US or in the markets they mention, but maybe not. Without AI, US funds could just as well invest in Vietnam (which is receiving a FTSE market upgrade), China, the EU, or just about anywhere else.
Don't assume you'd benefit from wishing AI gone. When you wanted crypto to go away, you got AI.
They say the AI bubble is 17x larger than the dot-com boom. I was in the Bay Area for that. It was pretty amazing. At every party someone announced they were pregnant (people were so optimistic they were all starting families). New restaurants were popping up everywhere. Run-down areas were being revitalized. Suddenly, expensive clothing brands that had been strictly for die-hard outdoors use became streetwear.
I can't imagine how great/amazing it must be there now with AI being 17x that. Everyone where I live is financially stressed. Living in a community that isn't would be so nice. I am sure people in the Bay Area would miss the current feeling of prosperity, success, and optimism about starting a family if the bubble burst.
> I can't imagine how great/amazing it must be there now with AI being 17x that. Everyone where I live is financially stressed.
Maybe without the AI it'd be worse? COVID over-hiring lingered for a long time, and a lot of the layoffs were due to that. Perhaps without AI, and with the resulting oversupply of workers, things could have crashed.
The flip side is that anyone not riding the AI wave gets swept away: schoolteachers and bartenders, bus drivers and librarians. It becomes impossible for normal people to live in San Francisco. Shouldn't we all be wishing for a normal, stable, diversified economy with modest growth, opportunity for young people, and a reasonable cost of living?
I don't think a hyper-growth bubble and an economic depression are the only two options.
Not so sad. I'm working in an area that is AI-adjacent. We don't use AI for anything, and the tech we build isn't only useful for AI companies, so we'll survive when the bubble bursts, and we don't directly contribute to the hype. But the AI folks are crazy about our tech, which shoots our business through the roof as well. So we ride it while it lasts without really having to feel like we're part of it all. Once it bursts, our margins will go back to normal, and that's it.
I upvoted you, but strictly speaking it is not true. "AI" is such a broad term. You probably meant GenAI like LLMs, and even there some applications are genuinely useful.
But in general, there is a lot of extremely fascinating stuff, both to exploit and to explore, for example in the context of traditional (non-transformer-based) ML/DL methods. The methods are getting better year by year, and the hardware needed to do anything useful is getting cheaper.
So while it's true that after the initial fascination developers might not be that interested in GenAI, and some even deliberately decide not to use these tools at all in order to keep their skills sharp and avoid constant review fatigue, many tech folks are interested in AI in a wider context and are getting good results.
Not the parent commenter, but why would you assume that he meant LLMs specifically? I'm one of the "tech people not interested in AI", and I mean everything around AI/ML. I just like writing OG code, man. I like architecture, algorithms, writing code that feels good. Like carpenters who just like to work with wood, I like to work with code.
Yes, same feeling about ML really. Whether you are working with classic ML or LLMs, it's all about trial and error without predictable results, which just feels like sloppy (pun unintended) engineering by programmers' standards.
But this just doesn't correspond to reality. Many of the most interesting algorithms in optimization and elsewhere are metaheuristics, because exact solutions are either provably out of reach or simply haven't been found yet. In the meantime, we get excellent results from "close-enough" solutions. Yes, the pedantic side of my soul may suffer, and we will always strive for better and better solutions, but we accepted over a century ago that approximate solutions are extremely valuable.
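To make that concrete, here is a toy simulated-annealing sketch for the travelling salesman problem (my own illustration with made-up random cities, not taken from any particular library). Exact TSP is intractable at scale, yet a few lines of trial-and-error search produce a perfectly useful tour:

    import math
    import random

    random.seed(0)
    # Made-up demo data: 30 random cities in the unit square.
    cities = [(random.random(), random.random()) for _ in range(30)]

    def tour_length(order):
        # Total length of the closed tour visiting cities in this order.
        return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    order = list(range(len(cities)))
    temp = 1.0
    while temp > 1e-4:
        # Propose a 2-opt move: reverse a random segment of the tour.
        i, j = sorted(random.sample(range(len(cities)), 2))
        candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
        delta = tour_length(candidate) - tour_length(order)
        # Always accept improvements; sometimes accept worse tours
        # to escape local optima.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            order = candidate
        temp *= 0.999  # geometric cooling schedule

    print(f"approximate tour length: {tour_length(order):.3f}")

No single run is guaranteed optimal, but in practice the result beats the naive ordering by a wide margin. That's exactly the "close enough" trade being described.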
I still see my instructions to the LLM as code, just in human language rather than a particular programming language. I still have to specify the algorithm, and I still have to be specific: the fuzzier my instructions, the more likely it is that I end up having to correct the LLM afterwards.
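A trivial illustration of the contrast (hypothetical prompts of my own, not anyone's recommended style):

    # Fuzzy: the algorithm is left to the model, which invites a correction round.
    fuzzy = "Write a function that removes duplicates from a list."

    # Specific: the algorithm is spelled out; the model is mostly transcribing.
    specific = (
        "Write a Python function dedupe(items) returning a new list with "
        "duplicates removed while preserving first-occurrence order: keep a "
        "set of seen elements and append each item only if it is not in it."
    )

The second version is, for all practical purposes, a program expressed in English.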
There is so much framework stuff now. When I started coding I could mostly concentrate on the algorithm; today I have to do so much framework work that telling the LLM only the actual algorithm, minus all the overhead, feels much more like "programming" than today's programming with its many, many layers of "stuff" wrapped around what I actually want to do.
I find it a bit ironic, though, that our tool for escaping excessive complexity is an even more complex tool. But programming in large, long-running projects already felt like it had plenty of elements reminiscent of how evolution works in biology, already leading to systems that are hard or even impossible to comprehend (like https://news.ycombinator.com/item?id=18442637), so the new direction is not that big of a surprise. We'll end up more like biology and medicine some day, with probabilistic methods and less direct knowledge and understanding of ever more complex systems, and with evolution of those systems based on "survival": the system does what it is supposed to most of the time, we work around the bugs, there is no way to debug in detail, and the fittest survive, meaning what doesn't work is thrown away and what passes the tests is released.
Small systems that are truly "engineered" and thought through will remain valuable, but increasingly complex systems will go the route shown by these new tools.
I see this development as part of a path towards being able to create and deal with ever more complex systems, not, or only partially, as a replacement for how we create current ones. That AI (and whatever develops out of it) can be used to create current systems too is a side effect, a nice one for some or many, but I see the main benefit in the start of a new method for dealing with ever more complexity.
I only ever see short-term, single-person or single-team experiences with LLMs for development; obviously, since it is all so new. But only part of the tooling's job will be to help that one person, or even team, produce something that can be released. Much more important will be the long term, like the decades-long software dev process they ended up with in my link above, with many developers passing through over time while still being able to extend the system and fix issues years later. Right now that is handled in ways that are far from fun, with many developers staying on such teams only as long as they must, or H1Bs who have little choice. If this could be done at a higher level, with whatever "AI for software dev" turns into over the next few decades, it could help immensely.
> There is so much framework stuff now. When I started coding I could mostly concentrate on the algorithm; today I have to do so much framework work that telling the LLM only the actual algorithm, minus all the overhead, feels much more like "programming" than today's programming with its many, many layers of "stuff" wrapped around what I actually want to do.
I was wondering about this a lot. While it's a truism that generalities stay useful whereas specifics get deprecated with time, I was trying to dig deeper into why certain specifics age quickly whereas others seem to last.
I came up with the following:
* A good design that allows extending or building on top of it (UNIX, Kubernetes, HTML)
* Not being owned by a single company, no matter how big (negative examples: Silverlight, Flash, Delphi)
* Doing one thing, and being excellent at it (HAproxy)
* Just being good at what needs to be done in a given epoch, gaining universality, building an ecosystem, and flowing with it (Apache, Python)
Most things in the JS ecosystem are quite short-lived dead ends, so if I were a frontend engineer I might consider some shortcuts with LLMs: what's the point of learning something that might not even exist a year from now? OTOH, it would be a bad architectural decision to build on stuff you can't be sure will still be supported 5 years from now, so...
I predict the useful activity of writing LLM boilerplate will have a far shorter shelf-life than the activity of writing code has had.
I doubt that the current specific products and how you use them will endure. This is the very first iteration of something truly better, and there is still a very long way to go. Let's see what we have twenty years from now, while the current products still find their customers, as we can see.
No, I'm talking about core principles.
You just can't go on being incredibly specific. We already tried other approaches; "4th gen" languages were a thing back in the 90s, for example. I think the current, more statistical NN approach is more promising. Completely deterministic computing is harder to scale: either you introduce problems over time like those in my example link, or the system becomes effectively non-deterministic and horrible to debug anyway, because the bigger it gets, the more everything else dominates.
Again, this won't replace the smaller software we write today; this is for larger, ever longer-lasting and more complex systems, approaching bio-complexity. There is just no way to debug something huge line by line, and the benefits of modularization (separating the parts into components that are easier to handle) will be undermined by long-term development chasing changing goals.
Just look at the difference in complexity between software from a mere forty, or even twenty, years ago and now. Back then most software was very young, and code size was measured in low megabytes. Systems are exploding in size, scale, and complexity, and new stuff added over time is less likely to be added cleanly. It will be "hacked on" somehow and released once it passes the tests well enough, just like in my example link, which described a 1990s DB system, and it will only get worse.
We need very different tools; trying to do this with our current source code and debugging methods is already a nightmare (again, see that link and the work it describes). We might be better off embracing fuzzier statistical and NN methods, while still writing smaller components in today's more deterministic ways.
One must naturally make assumptions when responding to something that is poorly defined or communicated. That's just how it is. That's an issue for the original poster, not the responder.
The term "AI" is strongly associated with LLMs/GenAI these days, so the assumption is quite reasonable.
As for code/architecture/infrastructure, I like those things too. You do have to shape your communication to the audience you're talking to, though. A lot of these products have eliminated the demand for such jobs, and it's a false elimination, so there will be an overcorrection later in a whipsaw; but by then I'll have changed careers, because the jobs weren't there. I'm an architect with 10+ years of experience, and I haven't had a single job offer in 2 years, with tens of thousands of applications submitted in that time.
If there is no economic opportunity, you have to go where the jobs are. When executives play stupid games with monopoly power to drive wages down, they win stupid prizes.
Around 2 years is the maximum timeframe before you get brain drain in these specialized fields, and when that happens those people stop contributing to the public parts of the sector entirely. They take their expertise and use it only for themselves, because that is the only value it can still provide; there's no winning when the economy becomes delusional and divorced from reality.
You have AI eliminating demand for specialized labor that takes at least 5 years of experience to perform competently, AI flooding the hiring channels with jammed speech (through a mechanism similar to RNA interference), and professional certificate providers retiring both their benefits and their long-lived competency certificates, all on top of the coordinated big-tech layoffs in the same period. That eliminates the certificate path as a viable option for people who are competent but not university-accredited.
You've got a dead industry. It's dead, but it doesn't know it yet. Such is the problem with chaotic whipsaws and cascading failures that occur on a lag: by the time the problem is recognized, it will be too late to correct (because of hysteresis).
Such aggregate stupidity in collapsing the labor pool is why there is a silent collapse going on in the industry, and why so many people cannot find work.
The level of work that can be expected now in such places, given such ill will from the industry, is abysmal.
Given such fierce losses and arbitrarily enforced competition, who in their right mind would design resilient infrastructure properly, knowing it will chug along for years without issue (making money all that time) after they lay you off with no intention of maintaining it?
A time is fast approaching when you won't be able to find people who know how to do the job right, at any price.