The link doesn't work.

Also missiles already use AI to know where they are, so I'm skeptical that the headline is true.

Those are previous generations of AI, which have been rebranded as "machine vision" and "machine learning" rather than "AI" because the numerous hype crashes made the term "AI" unpalatable for a generation.

That's materially different from "generative AI", the current wave of AI hype, which I think is what the "AI" in the title refers to.

Unlikely to be Gen AI. The military applications of missiles that can generate LinkedIn posts about B2B marketing as they descend on their targets are probably quite limited.

How about a multimodal model that looks at sensor inputs, decides whether what it sees is the actual target or a decoy, and generates guidance commands toward the real target? Does that sound like a military application of the GenAI capabilities we know exist today with vision and computer use?
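Hand-wavy sketch of what I mean (every name here is hypothetical, and the model call is stubbed out so it runs at all; a real system would call an actual multimodal model on real sensor frames):

```python
def classify(frame: bytes) -> float:
    # Hypothetical multimodal-model call: returns the probability that
    # this sensor frame shows the real target rather than a decoy.
    # Stubbed with canned values so the sketch is self-contained.
    return 0.9 if frame == b"target" else 0.2

def pick_target(frames: dict) -> str:
    # Choose the track the model scores as most likely to be real;
    # a guidance loop would then steer toward that track.
    return max(frames, key=lambda k: classify(frames[k]))

choice = pick_target({"track_a": b"decoy", "track_b": b"target"})
```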

Or perhaps an AI with a tactics/strategy prompt that watches the statuses/locations of several drones and coordinates their actions to achieve an overall objective? Does that sound like a military application that the military could be working on?
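Again as a rough sketch (names and the JSON protocol are made up, and the model call is a stub that returns a canned reply so this is runnable): serialize the fleet state plus the objective into one prompt, and parse the reply into per-drone commands.

```python
import json

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns a canned
    # JSON action map here purely so the sketch executes.
    return json.dumps({"d1": "hold", "d2": "advance"})

def coordinate(objective: str, statuses: dict) -> dict:
    # Put the overall objective and every drone's status into a single
    # prompt, then parse the model's JSON reply into per-drone actions.
    prompt = (f"Objective: {objective}\n"
              f"Statuses: {json.dumps(statuses)}\n"
              "Reply with JSON mapping drone id to action.")
    return json.loads(call_model(prompt))

commands = coordinate("screen the east ridge",
                      {"d1": {"pos": [1, 2]}, "d2": {"pos": [3, 4]}})
```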

[deleted]

You’re not using your imagination enough. Warfare is more than just munitions.

In this instance the article does say "An ambitious Pentagon plan to field thousands of cutting-edge drones to prepare for a potential conflict with China has [...] struggled to find software that can successfully control large numbers of drones, made by different companies, working in coordination to find and potentially strike a target—a key to making the Replicator vision work."

So these particular "AI weapons" would appear to be munitions.

Yes, I see great potential in injecting your AI into the enemy's communication system. Being able to have an AI try to persuade your enemy to act in your favor, confuse them, or censor information, all processed in real time and potentially at the scale of the enemy's entire army, is very powerful. It could even take a passive role and serve as pure intelligence gathering on the current state of things.

Aside from the potential scale, those aren't really new ideas. The scale could actually be a hindrance: once it's used, its future utility drops dramatically. Kind of like how the intelligence community doesn't want to burn its zero-days or exploits on low-value operations. Even utilizing intelligence from a passive operation can tip the enemy off.

I feel like a lot of those other uses are more aligned with the three-letter agencies for intelligence and influence, if we're talking mostly about gen AI. I assume the next best place (excluding munitions and their delivery systems) would be cyber operations. But that realm is touchy, and leaders don't want to start a shooting war with cyber retaliation/strikes. The oversight, the need for a human in the loop, and the aversion to collateral damage make AI weapons difficult to develop and deploy, especially if we aren't counting older computer vision, etc. It's no surprise the military is having trouble developing and deploying AI weapons in that environment.

You can use Gen AI to generate actuator inputs.
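For instance (toy sketch, all names hypothetical, model stubbed with fixed values): have the model emit structured control values, then clamp each one to the actuator's safe range before it's applied.

```python
def model_output() -> dict:
    # Hypothetical generative-model step; a real system would sample
    # control values from a model conditioned on sensor input.
    return {"pitch": 0.4, "yaw": -1.7, "throttle": 1.3}

def to_actuator_inputs(raw: dict) -> dict:
    # Clamp each generated value to its actuator's safe range, since a
    # generative model can emit out-of-range numbers.
    limits = {"pitch": (-1.0, 1.0), "yaw": (-1.0, 1.0), "throttle": (0.0, 1.0)}
    return {k: max(lo, min(hi, raw[k])) for k, (lo, hi) in limits.items()}

inputs = to_actuator_inputs(model_output())
```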

[deleted]

The link works for me.

> The Pentagon has also struggled to find software that can successfully control large numbers of drones, made by different companies, working in coordination to find and potentially strike a target—a key to making the Replicator vision work.

So the software can't work with arbitrary drones. The article also talks about the high cost of some of the drones.

> Of the dozen or so autonomous systems acquired for Replicator, three were unfinished or existed only as a concept at the time they were selected, according to people with knowledge of the matter. Among Replicator’s shortcomings, officials said, is that the Defense Innovation Unit was directed to buy drones that had older technology, and it didn’t rigorously test platforms and software before acquiring them, other people familiar with the matter said.

So the military bought promises and basically funded some research. That's fine imo, they do that all the time, but their expectations did not align with results in these cases. And they didn't set good requirements for the platforms.

I expect AI-driven drones that can target individual humans by identity are probably not quite here yet. You have to get around jamming, fit all the tech on a small platform, and it has to be cheap and disposable. And you don't actually want "AI", because you don't want it to mistakenly kill civilians; you want highly accurate computer vision.

In Russia and Ukraine, they are manually piloting drones attached by fiber-optic cable. It's cheap and effective, but it requires a human pilot. At least for now, I would guess this is a much more effective (in results and cost) way to go. A human can pilot dozens of disposable drones in a day that drop their payload and are then discarded.

> because you don't want it to mistakenly kill civilians

Says who? The US military is completely fine with mistakenly killing civilians.

> want to

> fine with

In military theater it’s an important distinction.

But really, you just wanted to post a comment trashing the US. It didn’t add to the conversation.

Archive: <https://archive.is/JTx8p>

(Origin works for me as well.)

Missiles famously know where they are not.

https://youtu.be/bZe5J8SVCYQ

> Missiles famously know where they are not.

what if it misses a few locations where they are not?

Redirects to home in Android FF with UBO. Chrome loads the article on the same device.