> Video games stand out as one market where consumers have pushed back effectively

No, it's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares how the code is written.

If you actually read the wording of the Steam AI survey, you'll see Steam has completely caved on AI-generated code as well. It's specifically worded like this:

> content such as artwork, sound, narrative, localization, etc.

No 'code' or 'programming.'

If game players are the most anti-AI group, then it's crystal clear that LLM coding is inevitable.

> This stands in stark contrast to code, which generally doesn't suffer from re-use at all, or may even benefit from it, if it's infrastructure.

Yeah, exactly. And LLMs help developers save time by not rewriting the same thing that has been done by other developers a thousand times. I don't know how one can spin this as a bad thing.

> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.

Spore is well acclaimed. Minecraft is literally the best-selling game ever. The fact that one developer fumbled it doesn't make the idea of procedural generation bad. This is a perfect example of how a tool isn't inherently good or bad; it's up to the tool's wielder.

> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.

Yes, this is a wildly uneducated perspective.

Procedural generation has often been a key component of some incredibly successful, and even iconic games going back decades. Elite is a canonical example here, with its galaxies being procedurally generated. Powermonger, from Bulldog, likewise used fractal generation for its maps.

More recently, the prevalence of procedurally generated roguelikes and Metroidvanias is another point against that claim. Granted, people have got a bit bored of these now, but that's because there were so many of them, not because they were unsuccessful or "failed to deliver".

Procedural generation underlies the most popular game of all time (Minecraft) and is foundational for numerous other games of a similar type - Dwarf Fortress, et al.

And it's used to powerful effect where you might not expect it (Stardew Valley mines).

What procedural generation does NOT work for is generating "story elements", though perhaps even that will fall; Dwarf Fortress already does decently enough, given that the player will fill in the blanks.
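
Classic seeded generation of the kind these games built on can be sketched in a few lines. This is my own toy illustration in the spirit of Elite-style galaxy generation; the names, attributes, and ranges here are invented, not any shipped game's actual algorithm:

```python
import random

def generate_system(seed: int) -> dict:
    """Derive a star system deterministically from a seed
    (hypothetical attributes for illustration only)."""
    rng = random.Random(seed)
    consonants, vowels = "bcdfglmnprst", "aeiou"
    name = "".join(
        rng.choice(vowels if i % 2 else consonants) for i in range(6)
    ).title()
    return {
        "name": name,
        "economy": rng.choice(["agricultural", "industrial", "high-tech"]),
        "planets": rng.randint(1, 9),
    }

# The same seed always yields the same system, so an entire galaxy
# can be "stored" as nothing more than a starting seed plus the code.
assert generate_system(1984) == generate_system(1984)
```

That determinism is the whole trick: Elite fit eight galaxies into a few kilobytes because the content was regenerated from seeds, never stored.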

> And it's used to powerful effect where you might not expect it (Stardew Valley mines).

Apparently Stardew Valley's mines are not procedurally generated, but rather hand-crafted. Per their recent 10 year anniversary video, the developer did try to implement procedural generation for the mines, but ended up scrapping it:

https://www.stardewvalley.net/stardew-valley-10-year-anniver...

They're quasi-generated with random elements and fixed elements - similarly to early Diablo procedural generation.

That’s not the same procedural generation as GPT or diffusion and you know it.

It's not even in the same ballpark as Elite, NMS, Terraria, or Minecraft.

The levels are all hand drawn, not generated by an algorithm, even if they’re shuffled. Eric Barone, the developer, has publicly said as much. Are you calling him a liar?

It's like the difference between sudoku/crosswords and Conway's Game of Life.

And here I thought the most popular game of all time was Soccer or Super Mario Bros 3

Roguelikes/lites are one of the most popular genres of indie games nowadays. One of the genre's main characteristics is randomization and procedural generation.

While there are many roguelikes with procedural generation, I think the most popular ones do without it. Slay the Spire, Risk of Rain 2, Hades 1/2, BoE etc. all use handmade stages in a random order with randomized player powers, rather than procedurally generated levels.

I'm a hardcore roguelike player (easily over a thousand hours across the games I've played), but even so I can admit that they have nothing compared to a well-crafted world like you'd find in From Software titles or Expedition 33, or classic Zelda games for that matter. Making a great world is an incredibly hard task, though, and few studios have the capability to do so.

[dead]

Is it wildly uneducated to not know any of the games you mentioned? I didn't realize education covered lesser-known video games. Wouldn't a better example be No Man's Sky, if we're talking procedural gen and an eventually good game?

In any case, I agree that gamers by and large don’t care to what extent the game creation was automated. They are happy to use automated enemies, automated allies, automated armies and pre-made cut scenes. Why would they stop short at automated code gen? I genuinely think 90% wouldn’t mind if humans are still in the loop but the product overall is better.

> Is it wildly uneducated to not know any of the games you mentioned? I didn't realize education covered lesser-known video games.

Yes. It is "wildly uneducated" to have, and express, strong opinions about ANY field of endeavour where you are unfamiliar with large parts of that field.

Large? That's your opinion

If you haven't heard of the modern roguelike genre you've probably been living under a rock, it seems like every other game these days at least calls itself such. Usually the resemblance to Rogue is so remote that it strains the meaning of the term, but procedural generation of levels is almost universal in this loosely defined genre.

Elite is a bit more obscure, but really anybody who aims to be familiar with the history of games should recognize the name at least. Metroidvania isn't a game, but is a combination of the names of Metroid and Castlevania and you absolutely should know about both of those.

Powermonger is new to me.

And while the comment in question didn't mention it, others have: Minecraft. If you're not familiar with Minecraft you must be Rip Van Winkle. This should be the foremost game that comes to mind when anybody talks about procedural generation.

Of course it is.

Then it is "wildly uneducated" to have, and express, strong opinions about ANY field of endeavour where you cannot substantiate your claims.

[deleted]

> No, it's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares how the code is written.

This reminded me of a conversation about AI I had with an artist last year. She was furious and cursing and saying how awful it is for stealing from artists, but then admitted she uses it for writing descriptions and marketing posts to sell her art.

Sinix even explicitly says that AI is an IP-theft machine, but that it's okay to use AI to generate a 360° rotation video to market your 2D works[0].

To summarize this era we live in: my AI usage is justified, but everyone else is generating slop.

[0]: https://www.youtube.com/watch?v=z8fFM6kjZUk

Disclaimer: I deeply respect Sinix as an art educator; if it weren't for him I wouldn't have learned digital painting. But it's still quite a weird take from him.

> Yeah, exactly. And LLMs help developers save time by not rewriting the same thing that has been done by other developers a thousand times.

Before LLMs we already had a way to "save developers time writing the same thing that has been done by other developers a thousand times", you know? An LLM doing the same thing for the 1001st time is not code reuse. Code reuse is code reuse.

Because code reuse is hard. Like, really hard. If it weren't, we wouldn't be laughing at left-pad. If it weren't hard, we wouldn't have so many front-end JavaScript frameworks. If it weren't, Unreal wouldn't still have its own GC and std-like implementation today, and Java wouldn't have reinvented its build system every five years.
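
For context, the left-pad in question is this much code. Here's a Python rendering of the idea (the original was a short JavaScript function on npm; this is my sketch, equivalent to Python's built-in `s.rjust(width, fill)`):

```python
def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad a string on the left to the given width - the entire
    job of the infamous npm package."""
    return s if len(s) >= width else fill * (width - len(s)) + s

assert left_pad("7", 3, "0") == "007"
assert left_pad("hello", 3) == "hello"  # already wide enough
```

That a function this small became a load-bearing dependency for half an ecosystem is exactly the "code reuse is hard" point.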

The whole history of programming tools is an exploration of how to properly reuse code: are functions or objects the fundamental unit of reuse? Is diamond inheritance okay? Should a language have official package management? A build system? Should the C++ std have network support? How about GUI support? Should editors implement their own parsers or rely on language servers? And none of these questions has a clear answer after thousands if not millions of smart people have attempted them (well, perhaps except the functions-vs-objects one).

Electron is the ultimate effort of code reuse: we reuse the tens of thousands of human-years invested in making a markup-based render engine that covers 99% of use cases. And everyone complains about it, the author of the OP article included.

LLM coding is not code reuse. It's more like throwing our hands up and admitting humans are not yet smart enough to properly reuse code, except for some well-defined low-level cases like compiling C to different ISAs. And I'm all for that.

I think you could also argue that LLMs in coding are actually just a novel approach to code reuse: at the microscopic level, they excel at replicating known patterns in a new context.

(Many small dependencies can be avoided by letting the LLM just re-implement the desired behavior; ~ with tradeoffs, of course)

The issue is orchestrating this local reuse into a coherent global codebase.

The problems with left-pad are problems with the NPM ecosystem, not with code reuse as such. There are other dependency ecosystems that don't have these problems.

> "well perhaps except the function vs object one"

If this is what I think it is, I consider it a very lopsided view - a failure to recognize which model fits which case, looking at everything from a hammer's point of view.

I think the function is the fundamental unit and the object is an extra level over it (which doesn't mean there is no use for objects). Thinking objects/classes are the fundamental/minimal level is straight-up wrong.

Of course it's just my opinion.

I have terrible news: LLMs don't actually make it easier, though it feels like they do at first

Hard agree. Before LLMs, if there was some bit of code needed across the industry, somebody would put the effort into writing a library and we'd all benefit. Now, instead of standardizing and working together we get a million slightly different incompatible piles of stochastic slop.

This was happening before llms in webdev

I don't think we should use webdev as an example of why lossy copy and paste works for the industry.

Before LLMs companies and people were forced to use one-size-fits-all solutions and now they can build custom, bespoke software that fits their needs.

See how it's a matter of what you're looking at?

Oh come on, you don't have to be condescending about function calls.

https://news.ycombinator.com/item?id=47260385

I was talking about libraries, higher-level units of reuse than individual functions. And your "syntactic" vs "semantic" reuse makes zero sense. Functions are literally written and invoked for their semantics – what they make happen. "Syntactic reuse" would be macros if anything, and indeed macros are very good at reducing boilerplate.

You might have a more compelling argument if instead of syntax and semantics you contrasted semantics and pragmatics.

A library is a collection of data structures and functions. My argument still holds.

> Syntactic reuse would be macros

Well sure. My point is that what can be reused is decided ahead of time and encoded in the syntax. Whereas with LLMs it is not, and is encoded in the semantics.

> Pragmatics

Didn't know what that is. Consider my post updated with the better terms.

I'm not sure your logic is sound. It sounds like you are insisting on some nuance which simply isn't there. An LLM generates unmaintainable slop, which is extremely difficult to reason about, uses the wrong abstractions, violates DRY, violates cohesion, etc.

The industry has known how to reuse code for two decades now (npm was released 16 years ago; pip 18 years ago). Using LLMs for code reuse is a step in the wrong direction, at least if you care about maintaining your code.

> LLM generates unmaintainable slop

LLMs generate what you tell them to, which means it will be slop if you're careless and good if you're careful, just like programming in general.

You're cherry-picking. Open-world games aren't as compelling anymore since the novelty is wearing off. I can cherry-pick, too. For example, Starfield in all its grandeur is pretty boring.

And the users may not care about code directly, but they definitely do indirectly. Less optimized, more off-the-shelf solutions have brought a stark decrease in performance, while allowing game development to be more approachable.

LLMs saving engineers and developers time is an unfounded claim, because immediate results do not mean a net positive. Actually, I'd argue that any software engineer worth their salt knows intimately that more immediate results usually come at the expense of long-term sustainability.

Starfield is boring because of the bad writing, and because they made a space exploration game where there are loading screens between the planet and space and you don't actually explore space.

They fundamentally misunderstood what they were promising, it’s the same as making a pirate game where you never steer the ship or drop anchor.

You can prove people are not bored with the concept: new gamers still start playing Fallout: New Vegas or Skyrim today despite them being old and janky.

> Starfield in all its grandeur is pretty boring.

And yet "No Mans Sky" is massively popular.

> any software engineer worth their salt knows intimately that more immediate results usually come at the expense of long-term sustainability.

And any software engineer worth their salt realizes there are 100s if not 1000s of problems to be solved, and trying to paint a broad picture of development is naive. You have only seen 1% (at best) of the current software development field, and yet you're confidently saying that a tool being used by a large part of it isn't actually useful. You'd have to have a massive ego to categorically tell thousands of other people that what they're doing is both wrong and not useful, and that the things they are seeing aren't actually true.

Also "AI" has been in gaming, especially mobile gaming, for a literal decade already.

Household name game studios have had custom AI art asset tooling for a long time that can create art quickly, using their specific style.

AI is a tool and as Steve Jobs said, you can hold it wrong. It's like plastic surgery, you only notice the bad ones and object to them. An expert might detect the better jobs, but the regular folk don't know and for the most part don't care unless someone else tells them to care.

And then they go around blaming EVERYTHING as AI.

Another example is upscaled texture mods, which were a trend long before large language models took off. Mods to improve a game's textures are definitely not new, and that probably means including material from other sources, but the ability to automate/industrialize that (and presumably a lot of available training material) meant there was a big wave of that mod category a few years back. My impression is that gamers will overlook a lot so long as it's 'free', or at least they are very anti-business (even if the industry they enjoy relies upon it); the moment money is involved, they suddenly care a lot about the whole fabric being handmade and need verification that everyone involved was handsomely rewarded.

This should be completely crushed by Nano Banana models?

The issue isn't objective quality or realism, it's sticking to a specific style consistently.

_Everyone_ (and their grandmother) can instantly tell a ChatGPT generated image, it has a very distinct style - and in my experience no amount of prompting will make it go away. Same for Grok and to a smaller degree Google's stuff.

What the industry needs (and uses) is something they can feed a, say, wall texture into and the AI workflow will produce a summer, winter and fall variant of that - in the exact style the specific game is using.

I think txt2img and img2img are terms to find those uses.

And comfyUI workflows. People have been doing this for awhile now.

If we're talking about texture upscaling alone (I suppose that's what the parent comment means), Nano Banana is huge overkill.

"I hate CGI video"

"So you hated the TV Series Ugly Betty then?"

"What? that's not CGI!"

This video is 15 years old

https://www.youtube.com/watch?v=rDjorAhcnbY

I think that's a different category, though. Those backgrounds are actual video recordings of real places, not 3D environments modeled from scratch. It looks 'real' because the background actually exists.

It's still 100% CGI compositing and definitely not all of them are real places or real objects.

In that specific 15 year old example they're mostly composited, you're right about that.

I love Ian Hubert's demos of green screening in Blender.

https://www.youtube.com/watch?v=RxD6H3ri8RI

His Blender Conference talk about photogrammetry / camera projection / projection mapping was fantastic:

World Building in Blender - Ian Hubert

https://www.youtube.com/watch?v=whPWKecazgM

Computer Generated Imagery.

Your case would have been better if you had used Mad Max: Fury Road, or even Titanic, as examples, rather than a mediocre TV show nobody remembers. Ugly Betty used green screens to make production cheaper; that did not improve the show (although it may have improved the profit margins). Mad Max: Fury Road, on the other hand, used CGI to significantly improve the visual experience. The added CGI probably increased the cost of production, and subsequently it is one of the greatest, most awesome movies ever made.

Actually, if you look at the scene from Grey's Anatomy [0:54], you can see where CGI is used to improve the scene (rather than cut costs), and you get this amazing scene of the Washington State Ferry crash.

I think you can see the parallels here. When people say they hate AI, they are generally referring to the sloppy stuff it generates. It has enabled a proliferation of cheap slop, and with few exceptions it seems like generating cheap slop is all it does (those exceptions being specialized tools, e.g. in image-processing software).

> mediocre TV show

Won 3 Primetime Emmys

52 wins & 124 nominations total

https://www.imdb.com/title/tt0805669/awards/

I guess it's just too lowbrow for you.

If you read the next couple of paragraphs, the author addresses this:

> That said, Steam's policy has been recently updated to exclude dev tools used for "efficiency gains", but which are not used to generate content presented to players.

I only quoted the first paragraph, but there is more.

On the topic of procedural generation: roguelikes are all about it, new-generation Diablo-like games definitely have similar things, and so do well-respected new games like Blue Prince. There has never been such a successful period for procedural generation in games as now, and all of these are pre-AI. AI-powered procedural generation is the wet dream of roguelike lovers.

I don't think I agree with this take.

I love procedural generation, and there is definitely a craft to it. Creating a process that generates a playable level or world is just very interesting to explore as an emergent system. I don't think LLMs will make these systems more interesting by default. Of course, there are still things to explore in this new space.

It's similar to generative/plotter art compared to a midjourney piece of slop. The craft that goes into creating the code for the plotter is what makes it interesting.

The key to non-disruptive LLM integration is using it in a purely additive way: supplementing a feature with functionality that couldn't be done before, rather than replacing an existing part. Like adding AI-generated images to accompany the Dwarf Fortress artifact descriptions. It could be completely togglable and wouldn't disrupt any existing mechanics, but would provide value to those who don't mind the slop.

> No one cares about how the code is written.

I would overstate it:

No one even cares how architecture is done. Unless you are the one fixing it or maintaining it.

Sorry, no one. We all know Apple did some great stuff with their code, but we care more about the awful work done on the UI, right? I mean, the UI seems not to be breaking in these new OSes, which is an amazing feature... for a game perhaps, and most likely the code is top-notch. But we care about other things.

This is the reality, and the blind notion that so many people care about code is super untrue. Perhaps someone putting money on developers cares, but we already have so many examples of money put on implementations no matter what the code is. We can see everywhere funds thrown at obnoxious implementations, particularly in large enterprises, that are only sustained by the weird ecosystem of white-collar jobs that sustains this impression.

Very few people care about the code in total, and this can be observed very easily; perhaps it can even be proved that no other way is possible.

This is overstating it. Computers are amazing machines, and modern operating systems are also amazing. But even they cannot completely mask the downstream effects of poor quality code.

You say you don't care, but I bet you do when you're dealing with a problem caused by poor code quality or bad choices made by the developer.

Also RE: procgen, one of the hit games right now, Mewgenics, is doing super well and uses it extensively. Obviously it's old school procgen that makes use of tons of authored content, but it's still procgen.

[deleted]

> Players only object against AI art assets. And only when they're painfully obvious.

Restaurant-goers only object to you spitting in their food if it's painfully obvious (i.e. they see you do it, or they taste it).

Players are buying your art. They are valuing it based on how you say you made it. They came down hard on asset-flipping shovelware before the rise of AI (where someone else made the art and you just shoved it together... and the combination didn't add up to much) and they come down hard on AI slop today, especially if you don't disclose it and you get caught.

At least to some extent, the anti-ai folks don't care about ai assisted programming because they see programmers as the "techbro" boogieman pushing ai into their lives, not fellow creatives who are also at a crossroads.

An LLM has never saved me time. It has always produced something that doesn't quite work, has the rough shape of what I want, but somehow always gets all the details wrong.

I can type up what I want much faster and be sure it's at least solving the right problem, even if it may have bugs.

There are also tools to generate boilerplate that work much, much better than LLMs. And they're deterministic.
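
A deterministic boilerplate generator in miniature - my own sketch of the kind of tool meant here (think scaffolding a class from a field list), not any specific product:

```python
from string import Template

# Fixed template: the only variation is what we substitute in.
CLASS_TMPL = Template(
    "class $name:\n"
    "    def __init__(self, $args):\n"
    "$assigns"
)

def scaffold(name: str, fields: list[str]) -> str:
    """Expand the template: same input, same output, every time."""
    assigns = "".join(f"        self.{f} = {f}\n" for f in fields)
    return CLASS_TMPL.substitute(
        name=name, args=", ".join(fields), assigns=assigns
    )

print(scaffold("Point", ["x", "y"]))
```

Unlike an LLM, a tool like this can never hallucinate a field name or quietly change the output between runs.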

If you do not plan out the architecture soundly, no amount of prompting will fix it. I know this because my "handmade" project, built with backward-compatibility hacks and horrible architecture, keeps being badly fixed by LLMs, while the ones that rely on preemptive planning of features and architecture end up working right.

LLMs keep messing up even on a plain Laravel codebase.

I think that's true, but something even more subtle is going on. The quality of the LLM output depends on how it was prompted in a way more profound than I think most people realize. If you prompt the LLM using jargon and lingo that indicate you are already well experienced with the domain, the LLM will role-play an experienced developer. If you prompt it like you're a clueless PHB who's never coded, the LLM will output shitty code to match the style of your prompt. This extends to architecture: if your prompts are written with a mature understanding of the architecture that should be used, the LLM will follow suit, but if not, the LLM will just slap together something that looks like it might work but isn't well thought out.

This is magical thinking.

LLMs are physically incapable of generating something “well thought out”, because they are physically incapable of thinking.

> An LLM has never saved me time. It has always produced something that doesn't quite work, has the rough shape of what I want, but somehow always gets all the details wrong.

This reads like a skill issue on your end, in part at least in the prompting side.

It does take time to reach a point where you can prompt an LLM sufficiently well to get a correct answer in one shot, developing an intuitive understanding of what absolutely needs to be written out and what can be inferred by the model.

I'm curious about how you landed on "git gud; prompt better" and not "maybe the domain I work in is a better fit for LLM code". Or, to be a bit less generous, consider the possibility that the code you're generating is boilerplate, marshaling, and/or API calls - a facade of perceived complexity over something that's as complex as a filter-map or two.

Sharing my 2 cents.

In the past 2 months I've been using all the SOTA models to help me design a new DSL for narrative scripting (such as game storytelling) and a C# runtime implementation of the script-player engine.

The language spec and design is about 95% authored by me up to this point; I have the LLMs work on the 2nd layer: the implementation specs/guidelines and the 3rd layer: concrete c# implementation.

Since it's a new language, I consider these somewhat novel tasks for LLMs (at least, not boilerplate stuff like an HTTP API or a CRUD service). I'd say these LLMs have been very helpful - you can tell they sometimes get confused and have trouble complying with the unfamiliar language spec and design - but they are mostly smart enough to carry out the objectives, and they get better and better once the project is on track and has plenty of files/resources to read and reference.

And I'd also say "prompt better" is an important factor, just much more nuanced/complicated. I started with zero experience with LLM agents, have learned a lot about how to tame them, and developed a protocol for collaborating with agents. This all came from countless trial and error, but in the end it boils down to "prompt better".

I wonder if my intuition here is correct; I would posit that “PL implementation” is a far more popular and well-explored field than it seems. How many toy/small/labor-of-love langs make it to Show HN? How many more simply don’t?

I’ve never personally caught the language implementation bug. I appreciate your perspective here.

I totally agree, and I was fully aware of how commonly people make languages for fun when I replied.

But I feel like the rationale still stands: considering LLMs' nature, common boilerplate tasks are easy because they can kind of just "decompress" them from training data. But for a new language design, unless the language is almost identical to some other language captured by the model, "decompression" would just fail.

When web search first arrived, the same thing happened. That is, some people didn't like using the tool because it wasn't finding what they wanted. This is still true for a lot of folks today, actually.

It's less "git gud; prompt better", and more, "be able to explain (well) what you want as the output". If someone messages the IT guy and says "hey my computer is broken" - what sort of helpful information can the IT guy offer beyond "turn it on and off again"?

> I’m curious about how you landed “git gud; prompt better” and not “maybe the domain I work in is a better fit for LLM code”.

1. Personal experience. Lazy prompting vs careful prompting.

2. They're coincidentally good at things I'm good at, and shit at things I don't understand.

3. Following from 2, when used by somebody who does understand a problem space which I do not, they easily succeed. That dog vibe-coding games succeeded in getting Claude to write games because his master knew a thing or two about it. I, on the other hand, have no game dev experience - almost no hobby experience with games, even - so I struggle to get any game code that even remotely works.

Irrespective of the domain you specifically listed in 3 (game dev is, believe it or not, one of the "more complex" domains), you have completely missed the point.

> 2. They're coincidentally good at things I'm good at, and shit at things I don't understand.

This may well be! In the perfect world this would be balanced with the knowledge that maybe “the things you’re good at” are objectively* easier than “things you don’t understand”. Speaking for myself, I’m proficient in many more easy things than hard things.

*inasmuch as anything can be “objectively” easier

[deleted]

The parent is specifically talking about producing boilerplate code - a domain in which LLMs excel - and not having had any success at that. It's therefore not a leap of logic to assume they haven't put (enough) effort into getting better at prompting, which is perfectly fine per se but points to a skill issue, not an immutable property of gen AI.

The uncomfortable fact remains that one cannot really expect to get much better results from an LLM without putting some work themselves. They aren't magical oracles.

>No one cares about how the code is written.

People definitely do care. Nobody wants vibe-coded buggy slop code for their game.

They want well designed and optimized code that runs the game smoothly on reasonable hardware and without a bunch of bugs.

Your second paragraph does not follow, at all, from the first. These are completely orthogonal demands.

The gaming industry is absolutely overwhelmed with outrageously inefficient, garbage, crash-prone code. It has become the norm, and it has absolutely nothing to do with AI.

Like https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times.... That something so outrageously trash made it to a hundreds-of-million dollar game, cursing millions to 10+ minute waits, should shame everyone involved. It's actually completely normal in that industry. Trash code, thoughtless and lazily implemented, is the norm.

Most game studios would likely hugely improve their game, har har, if they leveraged AI a lot more.

No one wants _buggy slop code_ for their game, but ultimately no one cares whether it has been hand-crafted or vibe-coded.

As proof, ask yourself which of the following two options you would prefer:

1. buggy code that was hand-written
2. optimized code that was vibe-coded

I'll bet most people will choose 2.

I've never seen something as complex as a video game vibe coded that was actually well optimized. Especially when the person doing the prompting is not a software developer.

So I personally do care and I am someone, so the answer is not no one.

> Spore is well acclaimed.

Spore was fun (IMHO) but at the time of release was considered a disappointment compared to its hype.

Localization? Why would you oppose LLMs doing localization?

I guess the chain of reasoning would be: AI for art is bad -> Writing is art -> Translation is writing.

Personally, I do appreciate good localisation, Nintendo usually does a pretty impressive job there. I play games in their original language as long as I actually speak that language, so I don't have too many touch points with translations though.

In case they hallucinate? There's no point having content in a wide variety of languages if it's unpredictably different from the original-language content.

> Spore is well acclaimed

And yet it also effectively ended Will Wright's career. Rave press reviews are not a good indicator of anything, really.

Tbf Spore's acclaim comes with the caveat that it completely failed to live up to years of pre-release hype. Much of the goodwill it's garnered since, which is reflected in review scores, only came after the storm of controversy over Spore not being "the ultimate simulator which would mark the 'end of history' for gaming" died down.

And you wouldn't really have any idea this was the case if you weren't there when it happened.

> Yeah, exactly. And LLM help developers save time from writing the same thing that has be done by other developers for a thousand times. I don't know how one can spins this as a bad thing

Do you ever ask why you're writing the same thing over and over again? That's literally the foundational piece of being an engineer: recognizing when you're reinventing a wheel while a perfectly good one sits nearby.

When you write a function

  f(a, b, c)

it is reusable only if changing a, b, and c is enough to produce the behavior you want. An options object etc. further _parameterises_ the function, but it is useful only if the variability you need is spanned by those parameters. This is syntactic reuse.

With LLMs, the parameterisation goes into semantic space. This makes code more reusable.

A model trained on all of GitHub can reuse all of that code regardless of whether it is syntactically reusable. This is semantic reuse, which is naturally much broader.
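To make the distinction concrete, here is a toy sketch (hypothetical names, not from the thread): a plain function is reusable only along the axes its parameter list exposes, and an options struct merely widens those axes.

```c
/* Syntactic reuse: clamp() is reusable only where varying x, lo, hi
 * is enough. Any reuse must fit exactly that shape. */
int clamp(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* An "options object" widens the parameter space, but reuse is still
 * bounded by whichever axes the struct happens to expose. */
typedef struct {
    int lo;
    int hi;
    int wrap;  /* 0 = clamp at the edges, 1 = wrap around instead */
} RangeOptions;

int constrain(int x, RangeOptions opt) {
    if (opt.wrap) {
        int span = opt.hi - opt.lo + 1;
        int r = (x - opt.lo) % span;
        if (r < 0) r += span;      /* C's % can be negative */
        return opt.lo + r;
    }
    return clamp(x, opt.lo, opt.hi);
}
```

If you need a variant none of the fields cover (say, clamping with hysteresis), no argument setting helps; you write new code. The claim above is that an LLM trained on many such variants can produce the one you describe in prose, i.e. parameterisation by description rather than by argument list.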

There are two important failures I see with this logic:

First, I am not arguing for reusability. Reusability is one of the most common mistakes you can make as a software engineer because you are over-generalizing what you need before you need it. Code should be written for your specific use case, and only generalized as problems appear. But if you can recognize that your specific use case fits a known problem, then you can find the best way to solve that problem, faster.

Second, when you're using an LLM to make your code more 'reusable' you are taking full responsibility for everything that LLM vomits out. You're no longer assembling a car from well known parts, taking care to tailor it to your use case as needed. You're now building everything in said car, from the tires to the engine and the rearview mirror.

Coding is a constant balance between understanding what you're solving for and what can solve it. Using LLMs takes the worst of both worlds, by offloading both your understanding of the problem and your understanding of the solution.

> Second, when you're using an LLM to make your code more 'reusable' you are taking full responsibility for everything that LLM vomits out. You're no longer assembling a car from well known parts, taking care to tailor it to your use case as needed. You're now building everything in said car, from the tires to the engine and the rearview mirror.

If you are anything above a mid-level ticket-taker, your responsibility exceeds what you personally write. When I was an “architect” responsible for the implementation and integration work of multiple teams at product companies - mostly startups - and now as a tech lead in consulting, I’m responsible for knowing how a lot of code works, and I’m the person called on the carpet by the director/CTO then and by the customer now.

I was responsible for what the more junior developers “vomited out”, for the outside consulting company doing the Salesforce integration, or, god forbid for a little while, the contractors in India. I no more care about whether the LLM decided to use a for loop or a while loop than I cared about the OSQL (not a typo) that the Salesforce consultants used. I care about whether the resulting implementation meets the functional and non-functional requirements.

On my latest two projects, I understood the customer from talking to sales before I started, I understood the business requirements from multiple calls with the customer, and I understand the architecture because I designed it myself from the diagrams, drawing on 8 years of working with (and, in a former life, at) AWS, and I reviewed it with the customer.

As far as reusability? I’ve used the same base internal management web app across multiple clients.

I built it (with AI) for one client. Extracted the reusable parts and removed the client specific parts and deployed a demo internally (with AI) and modified it and added features (with AI). I haven’t done web development since 2002 seriously except a little copy paste work.

Absolutely no one in the value chain cares if the project was handcrafted or written by AI - as long as it was done on time, on budget and meets requirements.

Before the gatekeeping starts, I’ve been working for 30 years across 10 jobs and before that I was a hobbyist for a decade who started programming in 65C02 assembly in 1986.

I am not talking about using an LLM to make code reusable in the sense you're arguing.

My point is that the very act of training an LLM on any corpus of code automatically makes all of that code reusable, in a much broader semantic way rather than through syntax, because the LLM uses a compressed representation of all that code to generate the function you ask it for. It is like an npm registry that already contains, in compressed form, the code specific to your situation (like you were saying) that you want to write.


> I don't know how one can spins this as a bad thing.

People spin all kinds of things if they believe (accurately or not) that their livelihood is on the line. The knee-jerk "AI universally bad" movement seems just as absurd to me as the "AGI is already here" one.

> Spore is well acclaimed. Minecraft is literally the most sold game ever.

Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.

As I see it, it's all a matter of how well it's executed. In the best case, a skilled artist uses automation to fill in mechanical rote work (in the same way that e.g. renaissance artists didn't make every single brushstroke of their masterpieces themselves).

In the worst (or maybe even average? time will tell) case, there are only minimal human-made artistic decisions flowing into a work and the output is a mediocre average of everything that's already been done before, which is then rightfully perceived as slop.

> Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.

Is that even a counterpoint? Nobody in their right mind would ever claim that procedural generation is impossible to fuck up. The reason Minecraft etc. are good examples is that they prove procedural generation can work, not that it always works.

True, I should have said "counterexample". Procedural generation is just another tool, in the end, and it can be used for great or mediocre results like any other.

> Oblivion, one of the first high-profile games to use procedural terrain/landscape generation

I might be misremembering but wasn't the Oblivion proc-gen entirely in the development process, not "live" in the game, which means...

> "In the best case, a skilled artist uses automation to fill in mechanical rote work"

...is what Bethesda did, no?

Yes, but I beg to differ on the "skilled" part. I find the result very jarring somehow; the scale of the world didn't seem right. (Probably because it was too realistic; part of the art of game terrain design is reconciling the inherently unrealistic scales.)

WoW had this but you never really thought about it - even the massive capital cities were a few blocks at most.

The problem with procedural generation is it's hard to make it as action-packed and desirable as WoW zones, and even those quickly become fly-over territory.