First, I personally predict that Anthropic will bend soon and this will be history.
The last time I commented about LLMs I was ad hominem'd with "schizophrenic" and such. That's annoying, but it doesn't deter either my strange research or my concerns, in this case regarding the direction LLMs are heading.
Of the four frontier models, one is not yet connected to the DOD (or w). While such connections are not immediate evidence, I think it's rational to consider the possible consequences of this arrangement. By title, there's a gap, real or perceived, between the plebeian and military versions. But the relationship could involve mission creep or additional strings as things progress.
We already have a strong trend of these models replacing conventional Internet searches. Though not yet consummate, a centralizing force is occurring, and despite the models being trained on enormous bodies of data, we know weights and safety rails can affect output. Bearing in mind the many things that could be labeled, or masquerade, as safety rails, the biases could be formidable.
I frequently observe corporate-friendly results in my model interactions, where, clearly, honesty and integrity are secondary to agenda. As I often say, this is not emergent, nor does it need to be.
Meanwhile we see LLMs being integrated into nearly everything, from browsers to social profiling companies (LexisNexis, Palantir, etc.) to email to local shopping centers and the legal system.
'Open' models cannot compete with the budgets of the big four. Though thank god they exist. But I expect serious regulation attempts soon.
My concerns with AI are manifold, and here on HN, associated by some with paranoia or worse.
And it seems to me that many of the most knowledgeable and informed underestimate LLMs the most, while the ignorant inflate them to presently unrealistic degrees. But every which way I perceive this technology, I see epic, paradigm-smashing, severe implications in every direction.
One thing of many that gets little attention is documentation vs reality regarding multiple aspects of AI, e.g. where the training vs privacy boundaries really are if anywhere. As they integrate more and more tightly with common everyday activities, they will learn more and more.
A random concern of mine is illustrated by Xfinity's microwave technology, which uses a router to visualize or process biological activity interacting with other wifi signals. Standalone, it's sensitive enough to distinguish animals from adult humans. Take, for example, the Range-R, a handheld device sensitive enough to detect breathing through several walls. Mix this with AI and we get interesting times.
I could go on, or post essays, but such is not well received in this savage land.
The military involvement with AI, aside from being objectively necessary or inevitable in some ways (ways I am not comfortable with), I find foreboding, or portending. I see very little discussion of the implications, so I figured I'd see if anyone had anything to say other than calling me a schizophrenic and criticizing my writing. *
*See comment history
I may look at your comment history.
I am having trouble understanding what you are saying. If you were more explicit I and other people would be able to respond and interact with your writing. As it stands, I am having trouble finding anything concrete to interact with.
I feel you may be onto something, but you're not saying, so I (and I imagine other people) can't see it.
Things I should have, but didn't include:
1) Power asymmetry: When we have two versions, one for the elite and one for the plebeians, this could create an interesting scenario. The real version might be red-teamed perpetually against the plebeian version for optimized influence, control, etc. Underhanded requests for modification in accordance with an agenda are conceivable. Cozy business relationships can promote such things.
2) We have a government using an unhindered, classified AI system, potentially against a public that has a hindered, toy version. Asymmetry.
3) This isn't normal asymmetry, because it happens in real time, and the interaction points are different from anything we've seen before. We are dealing not just with a growing source of information and content, but with one that is red-teamed 24/7 for any purpose desired.
4) Accountability: LLMs are now involved in the legal system. This is a serious matter. The legal system is now having to use LLMs just to keep pace. As LLMs develop, partly through their own generative contributions, no one can keep up. This is a Red Queen scenario bigger than anything we have ever imagined.
I am tired. Never well, but in mind* I could go on for many hours. I have essay drafts. But it's a very big subject, literally involved in nearly everything. There is reason to be concerned. My delivery may be stilted, but I can assure you that upon specific questioning, everything will stand.
(*for the ad homs out there)
Fairly astute intuition of my actual circumstances.
I'm not a developer, nor am I formally educated on the dynamics or details of LLMs. I have a handle on the very basics. My 'research' consists of 1) opportunistically interrogating various models about instances that particularly strike me, and 2) general exploration via LLM discussions regarding the manifold consequences and implications of what I consider the most significant technology in human history.
Your intuition lands directly on the fact that I'm inducting and considering more than I can handle, spread in too many directions, partly because I either see or foresee the tentacles of AI touching all of them. Spending a great deal of thought on this is a bit overwhelming, but I have high confidence in where I'm aligned with reality, and where I ain't.
If you were a bit more specific yourself regarding which portions of my post were unclear, that would help my reply. Else, I must guess. What I will do is elaborate on each point. Pardon the stream of thought in advance, if you will.
1) Anthropic: My prediction that they will bend is based on several factors. The first is that the military apparently recognizes (or at least perceives) extremely high value and volatility in LLMs. So do I. China, not an insignificant force in the world, is equally enthusiastic on this subject. They also have a very different social structure, where Constitutions (BOR, Amendments), civil rights, and other similar elements do not hold them back. The military is aware of this and realizes that, to maintain pace in the so-called race, it cannot do so effectively under such constraints. The foundation is shifting here, and AI is the lever. Like me, the military apparently takes the subject very seriously and seeks to gain influence and/or control. As illustrated by the recent adventures in Venezuela and Iran, they are on the serious side of things, not quite pussyfooting around. Anthropic probably knows this. In my opinion, they have no choice, as the pressure will not stop here.
2) You stated that you might read my comment history. Note that the original comment was the result of your intuitive insight, and I admittedly left it out of context. I was thinking hard on the subject that day, and the parent comment/post tempted me to ignite a dialog. That did not go well, and no questions for clarification were asked. That is on them. I suspect hasty and impatient thinkers perceived it as some paranoid attribution of agency to LLMs, which, if so, is pretty stupid, but my eloquence was perhaps waning that day. I pasted an excerpt from one of hundreds of transcripts, the result of my many interrogations of various models, which I always initiate after observing deceptive or manipulative output. Of the few commenters who bothered to do more than ad hominem, one suggested that the model was merely responding to my style of input, and/or that the behavior was expected as an emergent result of its vast training material. An erroneous arg, in my opinion, but I did note that the results were repeatable and predictable, which I think negates emergence.
3) Of the frontier models: I am not sure what is unclear here. If I have made a fundamental error, please point it out.
4) Strong trends: Information centralization is a serious topic. Decentralization is a common theme, emphasized by many non-schizophrenics as highly important for a free and open society. As LLMs not only become the go-to source for common queries but also integrate with cellphones, browsers, and the kitchen sink, they are positively trending as a novel substitute for traditional research, internet searches, libraries, other humans, etc. To deny this is simply irrational. Hence centralization.
5) Bias: I have transcripts where I observe LLM output aligned with corporate interests over objective quality and truth. I can share them here, along with analyses of the material. Even if this is not true presently, all the ingredients to make it so are readily present. This is a serious threat to open information and intellectual integrity for society. We are looking at going from billions of potential sources for our answers to four. Do the math. See the contrast.
6) Open models simply cannot afford the vast arrays of GPUs and the resources available to the big four. Nothing mysterious here. If open models cannot compete, then my concerns above are emphasized. Simple.
7) Smart fools: Many of the most technically informed seem to miss the forest for the trees here. They see all the flaws of the modern LLM without acknowledging the potential. This is my perspective, not a dissertation. I may be wrong. But I have observed this. I think the downvotes support this. How evil am I really being here? The reaction is quite disproportionate to the content, and strange.
8) Documented capabilities vs reality: I have research that indicates other layers are operating which do much more than the documentation declares. Sorry. I just do. It's also inevitable, rationally, that such a goldmine of data is not really being wasted for the sake of privacy and love. Intelligence agencies have bent over backward with broken backs to garner one nth of what these models are exposed to and potentially training on. Yeah, I may be wrong. But I suspect, with reason, that a lot more is going on than is expressed in the user agreement. It would simply make no sense otherwise.
9) Xfinity and Range-R: This speaks entirely for itself. Any confusion here would be due to a cognitive condition exceeding the ravages of schizophrenia or stupidity.
10) The rest: As I said, I am not sure what precisely was too obscure. But I am certain all but one* of my points can be validated, and found elsewhere expressed by respectable sources.
*Hidden layers: I understand this is a controversial proposition. I understand. But it's my observation. No need to attack. Just dismiss.
Okay, I think I see what you're saying.
Each individual point stands on its own. It's their relevance to each other and an overarching theme I am not seeing made explicit.
The through line I am seeing here is that:
1) The people in the US military wish to use AI as a weapon unconstrained by existing legal/ethical and moral constraints. Since they are skilled at using violence and the threat of it, they will use these skills to get compliance in order to use the technology in this possible arms race with "China."
2) Surveillance is increasing at an unprecedented scale, and most people aren't aware that it's happening.
3) People don't care, or don't realize why this might be harmful to thriving human life.
To condense even further, what I'm hearing is that there is a trend towards war, fascism, and control, with large egregores prioritized over individual human thriving.
Is this perhaps what you're getting at?
I will say that I am not agreeing nor disagreeing with this, just attempting to make explicit what I think is implicit in your words.
If this is what you mean, I can imagine that you would be cautious with your words.
I'll end with:
Don't worry
About a thing
Because
Every little thing
Is gonna be alright
I could not argue with anything there. AI will be weaponized. Yes. Pretty much. And yeah. The gist indeed. But missing nuances and practical points. And I even struggle to contest your conclusion; all things are what they are, amidst an infinite, timeless event and all as one, all things connected by that which separates them, the infinity and eternity that math cannot touch. Perhaps every little thing will be alright. How couldn't it be?
Email me if you want to discuss more.