> But also… we got the tablets from Star Trek.

They regularly used multiple tablets at a time, stacked like papers. What we have is presumably superior from a technological standpoint. Except their tablets weren’t filled with time-wasting features designed to keep you addicted and distracted.

> And now we have the ship’s computer from Star Trek

No, we definitely do not. If every time they spoke to the ship’s computer it made up answers at the rate LLMs do, they would have either stopped using it or would all be dead.

And you’re ignoring we’re also in the stages of getting the surveillance from 1984 and the social class divide from Brave New World. Those are not good tradeoffs.

> They regularly used multiple tablets at a time, stacked like papers.

If iPads were sold in every store for $1 apiece, we'd be doing that too. This is indeed a technology problem (or at least half-technology, half-economics): we just can't make working tablets cheap enough (and sustainable enough) to support such a workflow.

> What we have is presumably superior from a technological standpoint.

The writers were surprisingly prescient about this. Turns out, the secret of the paper-based workflow isn't what any single sheet can display, but that you can have a lot of them, freely arrange them in front of you as you need, pass them around, pin them to the wall, etc. Multitasking on a single slab is strictly inferior to that.

EDIT:

>> And now we have the ship’s computer from Star Trek

> No, we definitely do not. If every time they spoke to the ship’s computer it made up answers at the rate LLMs do, they would have either stopped using it or would all be dead.

We definitely do, and this, somewhat unexpectedly, got us to the point of being close to having a basic universal translator as well.

Computers on Star Trek ships weren't built for conversation, and the crew didn't talk to them as a regular thing for basic operations, so it wasn't like chatting with LLMs. There wasn't much opportunity to hallucinate - mostly simple queries, translating directly to something you'd consider a "tool call" today. But that's not the actually notable part.

The notable, if underappreciated, part of Star Trek's computers is that they understood natural language and intent. They could handle context and indirect references and all kinds of phrasings. This was the part we didn't know how to solve until a few years ago, when LLMs unexpectedly turned out to be the solution. Now, we have this.

(Incidentally, between LLMs and other generative models, we also have all the major building blocks of a holodeck, except for the holographic technology.)

> If iPads were sold in every store for $1 apiece, we'd be doing that too.

Considering how bad Apple is at syncing, that’s just asking for trouble. You’d never know where anything is, which iPad has what, or whether it’s the current version. Not to mention the charging situation and all the e-waste.

> The notable, if underappreciated, part of Star Trek's computers

Underappreciated by whom? It’s one of their defining features. Are you talking about the real world or the characters?

> is that they understood natural language and intent.

Which LLMs do not. They fake it really well but it’s still an illusion. No understanding is going on, they don’t really know what you mean and don’t know what the right answer is. The ship’s computer on Star Trek could run diagnostics on itself, the ships, strange life forms and even alien pieces of technology. The most advanced LLMs frequently fail at even identifying themselves. I just asked GPT-5 about itself and it replied it’s GPT-4. And if I ask it again in five minutes, it might give me a different answer. When the Star Trek computers behaved inconsistently like that (which was rare, rather than the norm), they would (rightly) be considered to be malfunctioning.

> Which LLMs do not. They fake it really well but it’s still an illusion. No understanding is going on, they don’t really know what you mean and don’t know what the right answer is.

A tree falling in a forest with nobody to hear it: if it makes a "sound", you think "sound" is the vibration of air; if it does not, you think "sound" is the qualia.

"Understanding" likewise.

> The ship’s computer on Star Trek could run diagnostics on itself, the ships, strange life forms and even alien pieces of technology.

1. "Execute self-diagnosis script" doesn't require self-reflection or anything else like that, just following a command (see the sketch after this list). I'd be surprised if any of the big AI labs have failed to create some kind of internal LLM-model-diagnosis script, and I'd be surprised if no one on staff at any of them has considered making the API to that script reachable from a development version of the model under training. There's no reason for normal people like you and me to have access to such scripts.

2. Not that the absence says much. If humans could self-diagnose our minds reliably, we wouldn't need therapists. This is basically "computer, send yourself to the therapist and tell me what the therapist said about you".
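
To make point 1 concrete, here's a minimal sketch of the idea - every name and path in it is hypothetical, not anything the labs have published. A "diagnose yourself" request is just a voice command mapped onto a pre-written script; no introspection involved.

```python
# Minimal sketch, all names and paths hypothetical: a "self-diagnosis" is just
# a fixed script the computer is told to run, no self-reflection required.
import subprocess

DIAGNOSTIC_SCRIPTS = {
    "level-3": ["./diagnostics/level3.sh"],  # hypothetical script paths
    "level-5": ["./diagnostics/level5.sh"],
}

def run_diagnostic(level: str) -> str:
    """Execute the pre-written diagnostic script and return its report."""
    cmd = DIAGNOSTIC_SCRIPTS.get(level)
    if cmd is None:
        return f"Unknown diagnostic level: {level}"
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
        return f"Diagnostic failed to run: {exc}"
    return result.stdout or result.stderr

# The language model's only job is to map "run a level three diagnostic on
# yourself" to run_diagnostic("level-3"); the actual checking is ordinary code.
```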

> When the Star Trek computers behaved inconsistently like that (which was rare, rather than the norm), they would (rightly) be considered to be malfunctioning.

Those computers (and the ships themselves) went wrong so regularly on the shows that IRL they'd be the butt of more jokes than the Russian navy.

> Would we? What for? Why would we need reams of iPads on our desks?

To use like we'd use paper.

> Which LLMs do not. They fake it really well but it’s still an illusion. No understanding is going on, they don’t really know what you mean

That is very much up for debate at this point. But for practical purposes, in the context described here, they do.

> and don’t know what the right answer is.

They're not supposed to. This is LLM use 101 - the model itself is behaving much like a person's inner monologue, or like a person who just speaks their thoughts out loud, without filtering. It's very much not a database lookup.

> I also disagree that was an underappreciated feature of the Star Trek computers, it’s one of their defining features.

What I meant is, people remember and refer to Star Trek's ship computer for its ability to control music and lights, shoot weapons, etc. with voice commands. People noticed the generality, the seamlessness of the interaction, the lack of a structured command language - but I rarely saw anyone pay deeper attention to the latter, enough to realize the subtle magic that made it work on the show. It wasn't just some fuzzy matching allowing for synonyms and filler words, but a more human-like understanding of the language.

(Related observation: if you pay attention to sliding doors on Star Trek vs. reality, you eventually realize that Starfleet doesn't just put a 24th-century PIR sensor into the door frame; for the doors to work like they do on the show, the computer has to track approaching people and predict, in real time, whether they want to walk through the door vs. just passing by, standing next to it, etc. That's another subtle detail that turns into a general-AI-level challenge.)

> The ship’s computer on Star Trek could run diagnostics on itself, the ships, strange life forms and even strange pieces of technology.

That's obviously tool calls :). I don't get where this assumption comes from, that a computer must be a single, uniform blob of compute? It's probably because people think people are like this, but in fact, even our brains have function-specific hardware components.

(I do imagine the scans involve a lot of machine learning and sensor fusion, though. That's actually how "life signs" can stop being a bullshit shorthand.)

> The most advanced LLMs frequently fail at even identifying themselves.

They'll stop when run with a "who am I?" tool.

> When the Star Trek computers behaved inconsistently like that, they would (rightly) be considered to be malfunctioning. Yet you’re defending this monumental gap as being effectively the same thing. Gene Roddenberry must be spinning in his grave.

All I'm saying is, LLMs solved the "understand natural language" problem, which solves the language and intent recognition part of Star Trek voice interfaces (and obviously a host of other aspects of the computer's tasks that require dealing with semantics). Obviously, they're a very new development and have tons of issues that need solving, but I'm claiming the qualitative breakthrough already happened.

Obviously, Star Trek's computer isn't just one big LLM. That would be a stupid design.
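
Here's a minimal sketch of the design I mean, with a stub standing in for the model and every name hypothetical: the LLM only turns an utterance into a tool call, and everything factual or physical - including "who am I?" - is answered by ordinary, function-specific code.

```python
# Minimal sketch, all names hypothetical: the LLM handles language and intent,
# everything else is a tool call into ordinary, function-specific code.
from typing import Callable

def who_am_i() -> str:
    # Ground truth from the serving layer, not the model guessing about itself.
    return "model=example-llm-v1, build=2025-01-15"

def set_lights(level: int) -> str:
    return f"Lights set to {level}%."

def sensor_scan(target: str) -> str:
    return f"Scan of {target} complete: no anomalies detected."

TOOLS: dict[str, Callable[..., str]] = {
    "who_am_i": who_am_i,
    "set_lights": set_lights,
    "sensor_scan": sensor_scan,
}

def fake_llm_parse(utterance: str) -> tuple[str, tuple]:
    """Stand-in for the LLM: map natural language to (tool name, arguments)."""
    text = utterance.lower()
    if "who are you" in text:
        return "who_am_i", ()
    if "lights" in text:
        return "set_lights", (50,)
    return "sensor_scan", ("unidentified object",)

def handle_request(utterance: str) -> str:
    """Front-end loop: recognize intent, dispatch to the matching subsystem."""
    tool, args = fake_llm_parse(utterance)
    return TOOLS[tool](*args)

print(handle_request("Computer, who are you?"))     # ground-truth identity
print(handle_request("Computer, dim the lights."))  # device control via tool
```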

> To use like we'd use paper.

How we use paper derives not only from our own practical needs, but also from the intrinsic limitations of paper. Stacks of paper are used because it's not possible to put several pages worth of text onto a single page of paper while maintaining a legible font size. The idiosyncratic way that tablets were used in Star Trek isn't how people would actually do things; it merely reflects the limitations of the writers in imagining all of the practical implications of the technology they were depicting. It would be like somebody in the 1800s speculating about motor vehicles, supposing that teams of a dozen or more motor vehicles might be connected using ropes and used to tow a single carriage, because that's how they did it with horses.

> To use like we'd use horses.

> Stacks of paper are used because it's not possible to put several pages worth of text onto a single page of paper while maintaining a legible font size.

Right. And trying to replace a stack of paper with one paper sheet-sized screen is a significant downgrade. Which is why tablets are used primarily for entertainment, not for work.

Having lots of sheets of paper you can spread out around you is an advantage, not a limitation, of the paper-based workflow.

No, a single screen is a massive upgrade over using stacks of paper.

People vastly prefer digital dictionaries over paper dictionaries because you can more quickly find stuff. And that’s with dictionaries in alphabetical order.

Stacks of paper suck. There’s some potential utility on a spaceship in the redundancy of independent tablets you can hand someone. That’s something that regularly happens on the show and kind of makes sense, but it’s more a visual reference for the audience, which is where stacks of tablets shine: the viewer can easily follow what they’re doing even if you can’t see the screen.

It can be true that stacks of paper are better than a single screen in some ways and worse in others. Other people like to be able to spread out multiple sheets of paper in front of them, even if you do not. You are correct that digital search is a huge plus of having a digital interface.

If we’re talking 3-5 pieces of one-sided paper for, say, homework, you can spread them out nicely, but scale that up to multiple stacks of loose paper and it invariably becomes a mess.

Thus, in practice almost everyone uses multiple screens at work when they can, even though printing stuff is trivial.

> They're not supposed to. This is LLM use 101

> (…)

> Obviously, Star Trek's computer isn't just one big LLM. That would be a stupid design.

Or, in other words, we don’t have Star Trek’s computer like originally claimed, and our current closest solution isn’t the way to get it.

Your takeaway presumes that all LLM interactions are monolithic, which is the opposite of what was being claimed if you take the other poster's comments about tool use into consideration. I have no real investment in this conversation though, so your proclamation of winning can stand as far as I'm concerned.

> Or, in other words, we don’t have Star Trek’s computer like originally claimed, and our current closest solution isn’t the way to get it.

My computer can both run an LLM (albeit a bad one, only has 16 GB of RAM) and also run other things at the same time.

> That's obviously tool calls :). I don't get where this assumption comes from, that a computer must be a single, uniform blob of compute? It's probably because people think people are like this, but in fact, even our brains have function-specific hardware components.

In fairness, half the time the Trek computer does something weird, it only makes sense if there's no memory/process isolation and it's all one uniform blob of compute. That made sense in the 60s, when Spock's chess app losing to him was useful evidence that the CCTV recordings had been faked; it's not so sensible in 2025, when the ship stops being able to navigate due to the excess system demand from the experimental holodeck.

> If iPads were sold in every store for $1 apiece, we'd be doing that too. This is indeed a technology problem (or at least half-technology, half-economics): we just can't make working tablets cheap enough (and sustainable enough) to support such a workflow.

The price of a low-end Android tablet can be shockingly low, to the point that physical multitasking is totally practical even for an environment as expensive as space travel. The issue is bloat. The UI for a Trek-level starship could easily run on 1999-era PC hardware much less powerful than a 2025 postage stamp of an SoC, if we were still coding like it was 1999. But not if it has to run Android Infinity with subpixel AI super resolution, a voice interface, and no less than 70MB of various JavaScript frameworks crammed into a locked Chromium frontend.

I run a Motorola mobile device at work (retail) that would be competitive with 10-15 year old flagship phones. The browser interface is designed for tracking, ease of development, and showing off new AI tools. It employs landing pages, phased loading, and a bunch of dynamic things. Looking up a SKU number takes 2-5 minutes (MINUTES) to load things I could get in ten milliseconds on a console interface or in hundreds of milliseconds on a 1999 World Wide Web e-commerce site.

> There wasn't much opportunity to hallucinate - mostly simple queries,

Depends which Star Trek series you are talking about; early TNG routinely had complex requests for new research/analysis/hypothesis generation and evaluation; if it came out today we’d accuse Starfleet of being infected with vibe crewing...

It wasn’t a tablet, but there was a TNG episode about a time-wasting game that nearly took over the minds of the entire crew.

They foresaw TikTok and its ilk way in advance.

We are also getting a multipolar order, so a single-government Earth is pretty far away, and so are the 1984-style mega-governments. Surveillance is increasing in developed societies, but there are multiple underdeveloped nations that are currently under siege by their own citizens. Not to mention we're unsure whether these developed nations won't suddenly implode or come under fire from their own citizens demanding mass change.

>Except their tablets weren’t filled with time-wasting features designed to keep you addicted and distracted.

>And you’re ignoring we’re also in the stages of getting the surveillance from 1984 and the social class divide from Brave New World.

Those are more human problems than technological problems.

They are human problems caused or exacerbated by the misuse of technology. Mass surveillance and the accumulation of capital by private entities at this scale are only possible due to automation.

What difference does it make, anyway? The distinction is meaningless when the result is the same.

https://youtu.be/lBS9AHilxg0?t=36

Imo the distinction is very important in this context, because there's a lot of technophobia and cynicism in general discourse, to the point where new developments in technology are often immediately met with extremely pessimistic takes about exploitation.

It's good to remind ourselves from time to time that new developments in technology have both positive and negative potential, and how they're applied is largely due to sociological factors. When we dissociate the issues from "technology", we allow ourselves to see the underlying causes of potential misuse, making progress on those problems possible instead of a knee-jerk negative backlash against anything new.

None of the problems I referenced are “new developments” in technology. The Snowden revelations happened over a decade ago. We know that Facebook hid their discoveries on social media harm, like a tobacco company.

It is patently obvious by now that major developments coming out of big companies will be used to further entrench their dominance at the expense of everyone else. It is not a knee-jerk reaction to recognise an obvious pattern and identify probable pathways for abuse. On the contrary, those have to be identified and discussed early if there is any hope of counteracting the problems.

So no, it doesn’t make a difference to distinguish between the technological and human problem when we’re not solving the human problem. It is an excuse which could be applied to anything—technology doesn’t harm by itself, all of it is created by humans. That’s just a variant of “guns don’t kill people, people kill people”. It is important to recognise the role of technology in facilitating and worsening the human problem.

Yes, and in order to identify them, we need to separate the technology from the social context in which it is deployed. That's exactly what I'm saying, glad we agree.

There are a few episodes about holodeck failures due to verbal instructions interpreted incorrectly (notably the Moriarty episode). Ship in a Bottle even has prompt injection when Moriarty figures out he can summon the arch.

Plus, the captains ask tons of questions the computer would know the answers to, but only the bridge crew are trusted with them.

> Except their tablets weren’t filled with time-wasting features designed to keep you addicted and distracted.

This part I agree with, but it's also very easy to fix (use a very old UI system, e.g. a direct port of Apple's HyperCard).

On the hardware front, the only thing Trek PADDs had that real life can't really match is the power cells: an energy density on par with "let's say an atomic battery had a way to dial the decay rate up and down at will and were not also a horrifying source of neutron radiation".

> No, we definitely do not. If every time they spoke to the ship’s computer it made up answers at the rate LLMs do, they would have either stopped using it or would all be dead.

“The computer is malfunctioning” has been a plot device in Star Trek since the beginning.

> And you’re ignoring we’re also in the stages of getting the surveillance from 1984 and the social class divide from Brave New World. Those are not good tradeoffs.

Not disagreeing with you at all, but the surveillance on Starfleet vessels and facilities is almost complete and all-encompassing. Real-time location, bodily attributes, eavesdropping, access to all communication and computer data, personal and otherwise, I don't think there's anything that is private in their world! Remember that time The Doctor started a two-way video call with (I think) B'Elanna while she was in the shower? That being said, Starfleet is a paramilitary organisation, perhaps it's less awful in civilian life when one isn't wearing a Comm badge.

I wonder if you and I would consider this degree of invasiveness an acceptable compromise for a life almost completely without illness or any form of capitalism, and the opportunity to pursue pretty much any life path we wish, in a society which is largely at peace with itself.

People keep forgetting that the surveillance of 1984 was just the surveillance of socialist countries in the 1940s.

The states were listening to people through their TVs in the 1940s?

They were listening to people through their wall art.

Leon Theremin had invented a radio-activated passive microphone that was used to listen to people from their furniture [0].

The fact that this was only (as far as we know) used to listen in on embassies is more about the economics of scale than about imagining new technology that didn't exist at the time.

At that scale, at that time, it was cheaper to have neighbors name and shame people who complained about the government. But there is little in 1984 that's really about the future of technology in the same way Star Trek or even Brave New World is.

[0] He had also invented a television in the 1920s, which is mostly just trivia related to this question.

Not "just"; the inspiration came from many angles: Stalinist USSR, Nazi Germany, Spanish repression of the POUM, wartime Britain (where the shape of the TVs comes from), and multiple other dystopian novels.

People seem to forget that Orwell was an anti-Stalinist socialist.

I don't know too much about POUM, but my understanding was that in Homage to Catalonia he's concerned with the Spanish Communist Party's suppression of POUM. So I think that is consistent with what I said above.

I haven't forgotten that Orwell was an anti-Stalinist socialist. But there weren't any anti-Stalinist socialist states at the time.

Indeed, communist repression of socialist ideas. I think the confusion comes from "surveillance of socialist countries in", where it should have been "Stalinist countries", not "socialist countries".

The stacks of tablets were because of DRM (a cybersecurity method to manage the threats of data exfiltration, “Digital Romulan Management”)

The class divide might rather be likened to The Time Machine