> absent the AI boom we would probably have lower interest rates [and] electricity prices, thus some additional growth in other sectors.
In other words, the AI hype comes at the cost of lower growth rates in other sectors of the economy?
It makes sense, since investor money is spent exactly once. If it goes to AI then it doesn't go elsewhere. And if it didn't go to AI then it would go elsewhere.
Sad for folks outside tech. But at least they can AI generate cat pictures now, and watch their tech friend use AI tooling to write software.
Sad for folks inside tech not interested in/working on AI either
> Sad for folks inside tech not interested in/working on AI either
Not true. With this much money and more coming it means other roles will benefit too. The whole tech sector will grow - maybe less than AI specific, but still.
It's better than the alternative of no or negative growth.
Some comments assume the funds exist and will be spent elsewhere in the US or in the markets they refer to, but maybe not. If not AI, US funds could be invested in Vietnam (which is receiving a FTSE market upgrade), China, the EU, or just about anywhere else.
Don't assume you'd benefit by wishing AI gone. When you wanted crypto to go away, you got AI.
They say the AI bubble is 17x larger than the dot com boom. I was in the bay area for that. It was pretty amazing. At every party someone announced they were pregnant (people were so optimistic they were all starting families). There were new restaurants popping up everywhere. Run-down areas were being revitalized. Suddenly, expensive clothing brands that had been just for die-hard outdoors use became streetwear.
I can't imagine how great/amazing it must be there now with AI being 17x that. Everyone where I live is financially stressed. Living in a community that isn't would be so nice. I am sure people in the bay area would miss this current feeling of prosperity/success/optimism to start a family if the bubble burst.
> I can't imagine how great/amazing it must be there now with AI being 17x that. Everyone where I live is financially stressed.
Maybe without the AI it'd be worse? COVID over-hiring lingered for a long time. A lot of the layoffs were due to that. Perhaps without AI, and with the over-supply of workers, things could have crashed.
The flip side is that anyone not riding the AI wave gets swept away. Schoolteachers and bartenders, bus drivers and librarians. It becomes impossible for normal people to live in San Francisco. Shouldn’t we all be wishing for a normal stable diversified economy with modest growth, opportunity for young people and reasonable cost of living?
I don’t think hyper growth bubble vs economic depression are the only two options.
Not so sad. I'm working in an area that is AI adjacent. We don't use AI for anything and the tech we build is not only useful for AI companies. So we'll live when that busts and we don't directly contribute to the hype. But the AI folks are crazy about our tech which shoots our business through the roof as well. So we ride it while it lasts while not having to really feel like being part of it all. Once it busts, our margins will go back to normal and that's it.
I upvoted you but strictly speaking it is not true. "AI" is such a broad term. You probably meant GenAI like LLMs, and even here there are some genuinely useful applications.
But in general, there is a lot of extremely fascinating stuff, both to exploit and explore, for example in the context of traditional (non-transformer-based) ML/DL methods. The methods are getting better year by year, and the hardware needed to do anything useful is getting cheaper.
So while it's true that after the initial fascination developers might not be that interested in GenAI, and some even deliberately decided not to use these tools at all in order to keep their skills fresh and avoid constant review fatigue, many tech folks are interested in AI in a wider context and are getting good results.
Not the parent commenter, but why would you assume that he meant LLMs specifically? I'm one of the "tech people not interested in AI" and I mean everything around AI/ML. I just like writing OG code man. I like architecture, algorithms, writing "feeling good" code. Like carpenters who just like to work with wood I like to work with code.
Yes, same feeling about ML really. Whether you are working with classic ML or LLMs, it's all about trial and error without predictable results, which just feels like sloppy (pun unintended) engineering by programmers' standards.
But this just doesn't correspond to reality. Most interesting algorithms in optimization etc. are metaheuristics as precise solutions are either proven to be impossible to get or we haven't found a solution yet. In the meantime, we get excellent results with "close-enough" solutions. Yes, the pedantic aspect of my soul may suffer, and we will always strive towards better and better solutions, but I guess we accepted already over a century ago that approximate solutions are extremely valuable.
I see my instructions for the LLM still as code. Just in human language and not a particular programming language. I still have to specify the algorithm, and I still have to be specific - the more fuzzy my instructions the more likely it is that I end up with having to correct the LLM afterwards.
There is so much framework stuff. When I started coding I could mostly concentrate on the algorithm; now I have to do so much framework stuff that telling the LLM only the actual algorithm, minus all the overhead, feels much more like "programming" than today's programming with the many, many layers of "stuff" wrapped around what I actually want to do.
I find it a bit ironic, though, that our tool for escaping excessive complexity is an even more complex tool. Then again, looking at biology: programming in large, long-running projects already had plenty of elements reminiscent of how evolution works, already leading to systems that are hard or even impossible to comprehend (like https://news.ycombinator.com/item?id=18442637), so the new direction is not that big of a surprise. We'll end up more like biology and medicine some day: probabilistic methods, less direct knowledge and understanding of the ever more complex systems, and evolution of those systems based on "survival" (it does what it is supposed to most of the time, we can work around the bugs, there is no way to debug in detail; what doesn't work is thrown away, what passes the tests is released).
Small systems that are truly "engineered" and thought through will remain valuable, but increasingly complex systems go the route shown by these new tools.
I see this development as part of a path towards being able to create and deal with ever more complex systems, not, or only partially, to replace what we have to create current ones. That AI (and what will develop out of it) can be used to create current systems too is a (for some, or many, nice) side effect, but I see the main benefit in the start of a new method to deal with ever more complexity.
I only ever see single-person or single-team, short-term experiences of LLM use for development. Obviously, since it is so new. But helping one person, or even one team, produce something releasable is only part of what the tooling has to do. Much more important will be the long term, like that decades-long software dev process they ended up with in my link above, where a lot of developers passing through over time could still extend it and fix issues years later. Right now that is solved in ways that are far from fun, with many developers staying in those teams only as long as they must, or H1Bs who have little choice. If this could be done in a higher-level way, with whatever "AI for software dev" turns into over the next few decades, it could help immensely.
> There is so much framework stuff, when I started coding I could mostly concentrate on the algorithm, now I have to do so much framework stuff, I feel like telling the LLM really only the actual algorithm, minus all the overhead, is much more "programming" than today's programming with the many many layers of "stuff" layered around what I actually want to do.
I was wondering about this a lot. While it's a truism that generalities stay useful whereas specifics get deprecated with time, I was trying to dig deeper into why certain specifics age quickly whereas others seem to last.
I came up with the following:
* A good design that allows extending or building on top of it (UNIX, Kubernetes, HTML)
* Not being owned by a single company, no matter how big (negative examples: Silverlight, Flash, Delphi)
* Doing one thing, and being excellent at it (HAproxy)
* Just being good at what needs to be done in a given epoch, gaining universality, building ecosystem, and just flowing with it (Apache, Python)
Most things in the JS ecosystem are quite short-lived dead ends, so if I were a frontend engineer I might consider some shortcuts with LLMs, because what's the point of learning something that might not even exist a year from now? OTOH, it would be a bad architectural decision to use stuff that you can't be sure will still be supported five years from now, so...
I predict the useful activity of writing LLM boilerplate will have a far shorter shelf-life than the activity of writing code has had.
I don't expect the current specific products, and how you use them, to endure. This is the very first type of something truly better, and there is still a very long way to go. Let's see what we will have twenty years from now, while the current products still find their customers, as we can see.
No, I'm talking about core principles.
You just can't go on being incredibly specific. We already tried other approaches; "4th gen" languages were a thing in the 90s, for example. I think the current, more statistical NN approach is more promising. Completely deterministic computing is harder to scale: either you introduce problems like those seen in my example link over time, or it becomes non-deterministic and horrible to debug, because the bigger the system gets, the more other factors dominate.
Again, this won't replace smaller software like we write today; this is for larger, ever longer-lasting and more complex systems, approaching bio-complexity. There is just no way to debug something huge line by line, and the benefits of modularization (and separation of the parts into components easier to handle) will be undermined by long-term development following changing goals.
Just look at the difference in complexity of software from a mere forty, or twenty, years ago and now. The majority of software was very young, and code size was measured in low megabytes. Systems explode in size, scale and complexity, and new stuff added over time is less likely to be added cleanly. Stuff will be "hacked on" somehow and released when it passes the tests well enough, just like in my example link, which was for a 1990s DB system, and it will only get worse.
We need very different tools, trying to do this with our current source code and debugging methods already is a nightmare (again, see that link and the work description). We might be better off embracing more fuzzy statistical and NN methods. We can still write smaller components in more deterministic ways of today.
One must naturally make assumptions when responding to something that is poorly defined or communicated. That's just how it is. That's an issue for the original poster, not the responder.
The terminology of AI has a strong link with LLMs/GenAI. Quite reasonable.
As for code/architecture/infrastructure, I like those things too. You do have to shape your communication to the audience you are talking to, though. A lot of the products have eliminated the demand for such jobs, and it's a false elimination, so there will be an overcorrection later in a whipsaw, but by that time I'll have changed careers because the jobs weren't there. I'm an architect with 10+ years of experience, and not a single job offer in 2 years, with tens of thousands of submissions in that time.
If there is no economic opportunity you have to go where the jobs are. When executives play stupid games based in monopoly to drive wages down, they win stupid prizes.
Sometime around 2 years is the max time-frame before you get brain drain for these specialized fields, and when that happens those people stop contributing to the overall public parts of the sector entirely. They take their expertise, and use it for themselves only, because that is the only value it can provide and there's no winning when the economy becomes delusional and divorced from reality.
You have AI eliminating demand for specialized labor that requires at least 5 years of experience to operate competently, AI flooding the communication space with jammed speech (for hiring, through a mechanism similar to RNA interference), and professional certificate providers retiring all benefits and long-lasting certificates that prove competency, on top of the coordinated layoffs by big tech in the same time period - eliminating the certificate path as a viable option for those who are competent but un-accredited through university.
You've got a dead industry. It's dead, but it doesn't know it yet. Such is the problem with chaotic whipsaws and cascading failures that occur on a lag. By the time the problem is recognized, it will be too late to correct (because of hysteresis).
Such aggregate stupidity in collapsing the labor pool is why there is a silent collapse going on in the industry, and why so many people cannot find work.
The level of work that can be expected now in such places, because of such ill will by industry, is abysmal.
Given such fierce loss and arbitrarily enforced competition, who in their right mind would actually design resilient infrastructure properly, knowing it will chug away for years without issue (making money all that time) after they lay you off with no intent toward maintenance?
A time is fast approaching where you won't find the people competent enough to know how to do the job right, at any price.
That’s the most unforgivable part of this: how it is starving and destroying the rest of the economy.
Opportunity costs don't destroy the economy.
Money-printing does destroy the economy on a lag, specifically when production falls so catastrophically short that the scheme shows itself to be a ponzi without tangible value or benefit - value being based entirely in human action.
When that happens, it's basically slave labor silently extracted from the population through inflation. Such things historically always trigger other cascading failures.
I see you've got the same treatment I often do. I tried to help, even with science on our side, it's not much that I can do, but it's from the heart:) Cheers!
Unfortunately that is just how it is on biased platforms, but it's not like that everywhere yet. It is like a page out of Atlas Shrugged, though.
All the parasites ended up dying in that book when all the intelligent people decided to just step back and let natural human tendency, and the momentum they had created, do what it was always going to do. All those people were deluded into thinking they could just make a law without paying respect to the mechanics that made things work. Ayn Rand, though, is also quite deluded, in that her ideas don't work without eliminating inheritance and money-printing.
Slavery is intolerable in any form.
> when the production has such catastrophic shortfall that it shows itself to be ponzi without tangible value or benefit.
Indeed, the scale matters a lot. With that said, a ponzi always has beneficiaries.
> Such things historically always trigger other cascading failures.
With the help of war or without, it's up to the beneficiaries.
> It makes sense, since investor money is spent exactly once. If it goes to AI then it doesn't go elsewhere. And if it didn't go to AI then it would go elsewhere.
This assumes only "the US" exists in this world. The AI hype would have been a thing regardless.
If the money doesn't go to the US it'd go to China or somewhere else. Just like with batteries you'd just lose the market if you don't invest.
> since investor money is spent exactly once
So I'm nitpicking here, but this seems to me to be an important nitpick: This is not true because money circulates.
The distinction is, one should not stop looking at only the first level effects, but the entire fields the money streams flow through.
It remains true that money flows in specific areas, but it is on a higher level than only the immediate first level spending, so the analysis has to be different too.
Like, for example, NVIDIA investing billions back into OpenAI so that they can buy more of NVIDIA’s hardware
That money will still flow elsewhere.
Your house is in flames? Don't worry that energy will flow elsewhere
Money circulates but resources do not. A human hour spent constructing a data center can’t then be used to build an apartment building.
Yes, and??? How is that relevant? We were talking about money, explicitly, not the resources. When the economy still has room, i.e. people are available to do more, then any missing resources can be obtained, sooner or later, by employing people.
This is not about the body or the land, but about the blood or the water flowing through them.
> Yes, and??? How is that relevant
Because it describes exactly the point which GGP tried to make (source: that was me). The assumption is that AI growth is great because without it, look at how low the non-AI growth is! But that argument is flawed, because the resources (manpower, materials, manufacturing, energy, ...) absent the AI hype would not vanish but be used for something else, so the growth in those other areas would be bigger. Granted, perhaps not as big (marginal gains and all that), but the painted picture is still skewed.
I even quoted it! I responded to exactly what I quoted. Again:
> since investor money is spent exactly once
In addition, I even pointed out that I was not posting about the main argument!
Quoting myself, again:
> So I'm nitpicking here
That only works in the long term if the investments pay off, generating enough returns to then create a new generation of entrepreneurs and investors who stimulate the economy further. If they don't, that money kind of vanishes: it partially pays for salaries, but those generally aren't enough to produce meaningful numbers of angel investors. It also buys capex equipment that depreciates in value (and represents a fixed amount of sales for the manufacturers of said equipment, not reliably repeating sales over a long time period).
> That only works in the long term
Eh... No?
The money flows on pretty quickly, unless they keep it as cash under their mattress.
Are you confusing it with any possible effects or work performed?
> generating enough returns
Ah I see the confusion.
No, they don't have to wait for "returns". We are talking about THAT money, the exact investor money they got. Which they will spend again. Even if they just keep it at the bank, it will be available to that bank to do something with.
The fate of that business does not matter, all that matters is that the investor money they took is going to continue to flow, outwards from them to whomever the company pays with that money. And so does everybody else.
The flow only stops when the money is lying around somewhere, and since that's the bank then at least the bank can do something with it. The flows truly stop when the whole economy is going down, when everybody cuts back on spending, and investments dry up too, so that the money is truly just lying around and nobody wants it.
Your reasoning would seem to imply that because the initial investment cash always ends up circulating, no value is ever destroyed. But that's obviously not how things work, because value can be destroyed. Specifically, the way it works is that, for example, $1B is used to buy a part of company A. That $1B is then put to use and starts circulating (though not entirely - it does indeed largely sit in a bank account, because it's hard to spend $1B that quickly).
Let's say this investment then raises the market cap of the company that was invested in by $5-10B. Loans are then taken out against that $5-10B of increased market cap. If the growth never materializes, then the investment ends up underwater, and there are secondary effects that make the loan worth less than what was given; this is what creditworthiness is supposed to measure, but with hyperinflated values and lots of money as collateral, these loans get great rates. Basically, what ends up happening is that extra virtual currency starts circulating, mirroring the increased market cap due to that initial $1B investment. But if the return on investment doesn't materialize, this $5-10B of currency just vanishes into thin air, and it dwarfs the original $1B investment. Additionally, there are leveraged secondary and tertiary bets that further magnify this circulating currency, and magnify the loss if things don't work out.
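That chain can be put in back-of-the-envelope numbers (all of the multiples below are illustrative assumptions, chosen only to sit inside the $5-10B range in the example):

```python
# Illustrative only: how a $1B investment can create, and then destroy,
# far more than $1B of paper wealth once leverage is stacked on top.

investment = 1e9        # real cash invested in "company A"
valuation_multiple = 7  # assumed: market reprices the company upward
paper_gain = investment * valuation_multiple  # $7B of new market cap

loan_to_value = 0.5     # assumed: loans taken out against that market cap
credit_created = paper_gain * loan_to_value   # $3.5B of new credit circulating

# If the growth never materializes, the market-cap gain reverses and the
# loans sour. The total exposure dwarfs the original cash outlay:
wipeout = paper_gain + credit_created         # $10.5B at risk
print(f"cash in: ${investment / 1e9:.1f}B, exposure: ${wipeout / 1e9:.1f}B")
```

The exact multiples don't matter; the structure does: the paper wealth and credit erected on top of the investment can be several times the cash that actually went in, so the loss when it unwinds is much larger than the original $1B.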
This is precisely what happened in the dot-com and banking-crisis bubbles. These things have secondary parasitic effects that get ballooned, through leveraged investments, into affecting the broader worldwide economy, crossing industries and whatnot.
> Your reasoning would seem to imply
Let's stop right there.
First, this already shows you are responding to something in your mind, not mine. What I wrote is plain to see. Second, the rest of that sentence confirms that fear. You are not arguing with me and what I wrote, but with some ghost in your own head.
> seem to imply that because the initial investment cash always ends up circulating, no value is ever destroyed
I did not say that *at all*. And whatever you yourself interpret into plain statements is... yourself speaking.
If I invest in lighting money on fire, there is no money to circulate.
Paying my staff to light money on fire means that some of the money will circulate, and my brilliant idea to decarbonize the economy by replacing coal with printed currency will result in some benefits (much much less than the costs), but fundamentally it is not a productive endeavor.
The AI data center build-out is much more useful than purely lighting money on fire, but if we overpay for more than it is ultimately worth, then it still was a bad idea.
Is it good for folks in tech? Most of the money is spent on energy and silicon.
It’s good for a tiny subsection of people in tech, for most people this is very destructive
That's right.
By the same argument, I don't think it's right to say without AI, GDP growth would be flat. That cash would likely go into other investments.
The question is if AI will make a return on the investment or not.
If you look at the last 10 years, you have to admit the statement wouldn't be true. No tech company spent like they do now; they opted for stock buybacks instead.
This is just a very expensive buyback. At heart this is all Nvidia running roughly the same mechanism of pushing the share price up. It started out as a useful idea, and it's mutated into something that is destroying the economy.
Except NVIDIA is turning the roundabout investments into real silicon, data centers, and adjacent software investments (see: IsaacSim, the autonomy stack, etc.). They may not deliver the returns all the investors are expecting, but they are an extreme net good for the ecosystem.
I think the comparison to stock buybacks is ludicrous.
> they are an extreme net good for the ecosystem.
Evidence?
Perhaps it depends on what you mean by "ecosystem". Within the AI tech/hype, sure, there it's good. But for the economy as a whole? Is it that good? There are probably some benefits, but do they match the current valuations?
Most likely they don't, because hype cycles inherently overvalue things for a while, since nobody knows what will stick. If things were not dramatically overvalued right now, then investors would not be acting in their best interest.
Their investments in robotics over the long term will lead to massive unlocks in productivity, which hasn't moved in some industries for decades - e.g. sim2real tech leading to construction robots being used to build housing with 1/3 of the human labor needed now, in an industry with a widespread labor shortage [1].
Will it match a certain valuation within a certain time period? I guess I don’t really care, I’m not an investor.
[1] https://www.mckinsey.com/capabilities/operations/our-insight...
Buybacks are essentially dividends. The cash went to investors who did something with it.
Did they park it in bonds over the past 10 years? I doubt it. Interest rates were ~0, VC funding was crazy, the money taps were open. They would have been less open without these buybacks.
It's not necessarily 1:1, it seems people are more willing to spend the cash on AI than they were on other things. But it's not 1:0 either.
If the tech companies spent the money on stock buybacks, it would not have disappeared. It would have been reinvested elsewhere.
Or they would be parked in things such as gold, bonds, etc.
That requires buying those investments, which means the person who sold them has to invest that money somewhere. It still does not disappear.
I love the way I keep getting downvoted on HN whenever I say anything about a subject I know a lot more about than the average person here (usually investment and finance).
In my experience, it’s usually tone more than content.
In your comment I’m replying to, the first paragraph contributes meaningfully to the discussion; the second sounds a bit like lashing out, which might be why people react negatively.
You provided a perfect example for why:
> a subject I know a lot more about than the average person here
True or not, expressing it like that is just arrogant.
>I love the way I keep getting downvoted on HN whenever I say anything about a subject I know a lot more about than the average person here (usually investment and finance).
That's the inherent nature of these voting based online platforms. They reward what the user base wants to hear over what is correct. This is especially apparent in matters with inherent nuance and uncertainty.
Or, writing like a know-it-all arse isn't rewarded here. Being genuine and curious about things is rewarded more than if you're an obnoxious ass about how right you are.
> That requires buying those investments, which means the person who sold them has to invest that money somewhere. It still does not disappear.
I also didn't say that the money disappears; I said the money may just end up parked in the modern-day equivalent of dragon hoards. There are plenty of things to park idle money in, in hopes of returns.
I was just pointing out that the idea that it would just become investment in other parts of the economy is naive.
> I love the way I keep getting downvoted
I hadn't downvoted you, but I will do so now. I always downvote people that are butthurt about internet points.
> I also didn't say that the money disappears; I said the money may just end up parked in the modern-day equivalent of dragon hoards. There are plenty of things to park idle money in, in hopes of returns.
You miss the point. What does the person who sold the assets in which the money is "parked" do with it? If they buy bonds, what does the seller of the bonds do with the money? Leave it in a bank account? The bank will lend it to someone who will either spend it (stimulating the economy) or reinvest it. They might buy another asset. If that asset is newly issued shares or bonds, the money then goes to a company planning to reinvest it. Anything else just pushes it another step, to another person.
Eventually it goes back into the economy.
> I was just pointing out that the idea that it would just become investment in other parts of the economy is naive.
The naive assumption is that "parked" money somehow leaves the economy. It's "parked" from the point of view of the person making the investment, but it has to go somewhere.
> I hadn't downvoted you, but I will do so now. I always downvote people that are butthurt about internet points.
How mature and charmingly expressed!
My point is that there is a lot of Dunning–Kruger in HN discussions of economics and finance.
> You miss the point. What does the person who sold the assets in which the money is "parked" do with it? If they buy bonds, what does the seller of the bonds do with the money? Leave it in a bank account? The bank will lend it to someone who will either spend it (stimulating the economy) or reinvest it. They might buy another asset. If that asset is newly issued shares or bonds, the money then goes to a company planning to reinvest it. Anything else just pushes it another step, to another person.
I miss no point. I understand quite well that "parked" money still exists. What you ignore is that value is sometimes destroyed: investments that underperform or go into the red, loans that default, crashes in real estate, etc. If money is invested in stocks and the stocks' value goes into freefall, the nominal amount of money that existed previously in the economy is the same, and everyone is still poorer because of it.
The massive AI hype is pumping a bull run in a very small sector of the economy (whether this is a bubble is not something I can answer). A lot of money is moving around among a small subset of companies pumping each other's revenues in a circular fashion, which increases the value of those stocks (thus creating economic growth, real or otherwise). Without this mechanism, this value wouldn't have been created. It's anyone's guess how things would perform without it.
During a crash, the same amount of money that existed prior to the crash is still there. The crash still happens and the country can still go into recession.
> How mature and charmingly expressed!
Thank you. I, too, think I am mature and charming.
You are shifting what you talked about. The comment I replied to was about tech companies putting money into stock buybacks instead of AI. You are now talking about AI being a bubble.
You also failed to understand that money put into an investment has to go somewhere pretty much immediately. If someone defaults on a loan, they must have used the money, so someone else has it, so it is still not destroyed.
If this were true universally, then the total amount of monetary value in the economy would be constant, apart from the Fed printing some. But it's not. Value comes and goes. You can definitely lose money without it going elsewhere.
> You are now talking about AI being a bubble.
I explicitly did not talk about AI being a bubble.
You may understand economics (at least you say so), but reading comprehension is not your forte.
See Dutch Disease:
https://en.wikipedia.org/wiki/Dutch_disease
> Sad for folks outside tech.
I'd think investment banks, law firms and management consultancies must be doing very well in this inflated market. They get a piece of every financing deal and consulting engagement that drives these billion-dollar spending decisions.
Some of that “AI-driven growth” might just be the economy treading water against the tariff headwinds.
I see this take a lot, but it confuses me. There is no guarantee that LPs would take that money and instead invest it in <tech I respect that is not AI>.
They would just as likely hoover up housing around the country or some such insanity to capitalize on the scarcity.
VC is actually a pretty effective vehicle to separate rich people from their money so society can try crazy things. You just don’t agree with this particular adventure and frankly there will never be the perfect alternative adventure.
Isn't the core issue that this deluge of spending on the hype inflates adjacent companies' valuations (Meta, Google, Tesla, Nvidia, etc.), where people's pensions and savings get directed since they become the growth stocks in the index, and that when it inevitably corrects there are second/third-order effects on non-rich people?
> They would just as likely hoover up housing around the country or some such insanity to capitalize on the scarcity.
Another core issue in a hyper-financialised economy: the money doesn't get invested in what would be best for society. It keeps chasing risky endeavours or gets parked in presumed safe assets (such as housing), inflating those asset classes. Where are the incentives to invest in foundational areas that compound to give a society resilient growth, like infrastructure: energy, transportation, etc.? It feels like without government direction to spend on big projects, there's simply no appetite from the private, hyper-financialised system to do the work unless there's potential for 10-100x returns. Is that good for society at large?
If hyper-financialisation is not helping the overall economy and society become better, why the hell should we (in the Western world) still pursue it? If all it can do is increasingly chase the extremes, hyper-growth versus extremely safe assets, is it any good anymore?
I think if you want the public to accept that the government will be a better shepherd of this money than a decentralized smattering of individuals then the government should provide evidence for this, once it actually opens up from a dysfunctional shutdown.
At least in the American context, everything from California High Speed Rail to bloated defense spending has shown that VCs are much better shepherds of their own money.
Completely agree: the American government has become incompetent at delivering any real big project to its citizenry. It went through a whole ideological process of gutting its ability to do so.
It was designed to lose this ability and to lean on private enterprise instead, but in the past the government was able to roll out highways, go to the moon, and build dams, bridges, and power plants.
If both the government and VCs are now unreliable to shepherd capital to direct it to the improvement of society at large you might need to rethink the whole system, and work to nudge it into a better path.
Most people wouldn't vote or participate in this investment craze. Maybe we shouldn't let unelected lucky nobodies decide how we invest our time for us.
I’m not saying don’t tax them or let them influence politics.
I’m saying letting them go to space or turning sand into intelligence is infinitely better than buying land and charging us rent (what most rich people have done in history).
They are hoping to invest in the companies that will be the (next) electronic equivalent of rent-seeking landowners, as FAANG are now. It's better, since land is physically necessary to live, but only marginally so.
Happy to wield the hammer of Lina Khan to stop the monopolizing and rent seeking.
Buying more land, building more housing to ultimately charge rent would definitely be good for everybody. The housing crisis is a supply side shortage and every dollar going into building new rentals would alleviate that pressure. Instead, the money goes to building more AI datacenters so that the family of 5 with mom and dad working at Walmart can have higher definition cat videos. They'd be better served with more supply in the rental market, ultimately lowering rent.
If AI is indeed a bubble and it bursts not long after this... we might even get a Warhammer 40k-style anti-AI movement.
I mean, most of my friends (especially the artists but also software devs) seem to hate AI with a passion: sometimes because of the ethical bankruptcy, other times because of the amount of slop it produces and how in your face it is due to the hype cycle, other times due to a belief that it more or less leads to brainrot and atrophy of cognitive abilities.
> how in your face it is due to the hype cycle,
I'm really struggling with this one. I think AI (generative and not) is genuinely fascinating, and by rights I should be all over it. I could definitely get into it; I don't think I'm stupid when it comes to technology. Set aside the damage the laser-focus on one thing might (or might not) be doing to the rest of the industry (and the effect on society, which, to be honest, I'm conflicted about blaming on the technology). And yet so much of it is all so... tedious and fake somehow, and just keeping up with the headlines is exhausting, let alone engaging with every LinkedIn "next huge thing that if you don't do you should find a bridge to live under soon".
It's like that guy who tells you constantly how rich and cool he is. Bro, if you're that cool, let your cool speak for itself. But I'm not sure I want to lend you a grand for your new car.
It's all very much the crypto bubble all over again, at this point. Same hype, same "get in now before you're left behind" (this is almost a sure signal that something is an unsustainable bubble; sustainable growth doesn't require this type of scaremongering recruitment), same level of completely unrealistic promises, same grifters (in some cases, literally the same people).
I'm a big AI supporter.
I'm just waiting for the slop to be so metastasized that our terminally ill "social networks" finally die along with it.
Of course I'll be proven terribly wrong. But hey, hope.
Fingers crossed.
Hating a tool? And software developers with emotions? (I get it for the artists :-p)
In my opinion AI makes visible more structural issues that were always there but that we could ignore: people addicted to various things (be it substances, social networks, or watching sports), social communities disappearing (no more going to the pub; stay at home with your TV), growing inequality (because capital is not taxed like labor), strange beliefs (all the conspiracy theories, which existed before), and others.
Find a use for the new tool to improve the situation if you can, but I think that hating tools can lead you down dark paths.
The slop is real. Especially when I see promoters of platforms for vibe coders. They don't understand the implications of lack of security in potentially viral apps. It's easy to consider them as WMDs.
People have the same password across services. They share personal information. In a geopolitical climate as today's, where the currency of war is disruption, it can wreak havoc.
[dead]
You really can’t see that in the entire economy (particularly in such a large country) there is anything else worth investing in? And that basically everyone outside the small group of people involved in AI should, what, just be starved out of the economy?
But that’s not how investing works. It’s money looking for returns, not social good (that’s what taxes are for).
In that way AI is 1000% better than crypto or real estate speculation.
How would you differentiate between real estate investment versus speculation? Money goes towards building new housing, say, or buying some houses in a neighborhood to renovate, or a new apartment building, is that speculation or investment or both?
How is new housing supply to arrive?
I don’t think speculation was the right word. Scarcity exploitation (something current investment firms love doing) is better, and building new housing is not that.
Investing is choosing how to put your money to work just like how you choose what vendor to give your business.
Here's a thought experiment. If you could invest today in a company that will result in the destruction of your town (say, a mining company) but you got a 1% higher return compared to others, you're saying that's a perfect investment and would do it right away?
And if the answer is yes to the above, you can make that a lot darker if you want. See how far your belief goes.
> If you could invest today in a company that will result in the destruction of your town (say a mining company) but you got 1% higher return compared to others, you're saying that's a perfect investment and would do it right away?
If you look at how the money people have always behaved, that's exactly what would happen.
Economics works on a larger scale than a region or sector. I am sure the people who copied books in the Middle Ages would not have invested in the printing press, but someone else did, and it was still good for society even if not for that specific group.
Society works by balancing the interests of various groups, and there will be people with different opinions than yours, including some you don't like.
That’s what regulations are for. If AI were known to have the destructive capabilities of strip mining, with real evidence of harm, then it should be regulated and would have much lower returns regardless of your morals.
This isn’t an opportunity if OpenAI, in the single year of 2025, is signing deals committing it to pay more than the entire defense budget despite being deeply unprofitable. It’s a scam. The bubble makes it seem like a good investment. And yes, I do believe there are better investments outside the AI bubble.
“I do believe there are better investments outside the AI bubble.”
Then make them! I don’t have the confidence you have in this “scam”. For sure the valuations are inflated but I don’t think this infrastructure investment will be a waste.
[dead]
More like compared to the invention of nuclear weapons or advertising
[dead]
Solar panels for example...
Not for the next 4 years, drill baby, drill
[dead]
Yes, without [popular activity] there would be more [resource activity uses] at a lower price.
At some point you have to grant people agency and accept that they spend money and time on things that are valuable to them.
> you have to grant people agency and accept that they spend money and time on things that are valuable to them
We're talking about companies here, not people. And yet, it is kinda true: companies spend time and money on things that are valuable to the people in high-level positions at the company and on the board. But that isn't the same as companies spending time and money on things that are valuable to the company.
ChatGPT currently has ~120-190 million daily active users and ~800 million weekly active users. It's the fastest-growing product in history, blowing others out of the water. I think the investment is warranted.
Hey if you have a few trillion dollars to invest, my "Free dollar per user daily giveaway" app will be even faster growing. ChatGPT is great, but giving things away is ultimately philanthropy, and the OpenAI investors expect returns, not a tax write-off.
You're just looking at a pool walking down a concentration gradient. When everyone has access, and such access eliminates more jobs than it creates, do you think the investment was warranted? Value is only ever based in human action; when the basis for the reasoning is shown to be false or is removed, it becomes something similar to tulip mania.