Personal as in Meta gets your personal data so they can sell you more ads.

The hero image on the linked page, which consists of a muted teal background with the words "Introducing Muse Spark", weighs in at 3.5 MB. I don't even...

"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."

- Hacker News Guidelines https://news.ycombinator.com/newsguidelines.html

I think this speaks to the product release itself.

lol it literally took me 2s to google "optimize image for website" and 10s to upload and get back a smaller image.

The result for that specific image is 500 KB: an 85% decrease in size.

An indistinguishable JPG is 170KB. An SVG would be 20KB.

CSS with a linear gradient background would be even smaller :)

You can even automatically do that on your CDN/delivery/web server layer. Or as part of your web deployment pipeline.
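For the curious, here's a minimal sketch of that kind of pipeline step using Pillow. The image is a made-up stand-in for the hero banner (a flat teal gradient), not the real asset, and the numbers are illustrative:

```python
from io import BytesIO
from PIL import Image  # Pillow

# Made-up stand-in for the hero banner: a wide, flat teal gradient.
row = Image.new("RGB", (2400, 1))
for x in range(row.width):
    row.putpixel((x, 0), (60 + x // 40, 130, 130))
banner = row.resize((2400, 1350))

# Re-encode as an optimized JPEG -- the kind of step a deploy
# pipeline or CDN layer could run automatically on every asset.
buf = BytesIO()
banner.save(buf, format="JPEG", quality=85, optimize=True)

raw_size = banner.width * banner.height * 3  # uncompressed RGB bytes
print(f"uncompressed: {raw_size} bytes, JPEG q85: {buf.tell()} bytes")
```

For flat artwork like this, quality 85 is visually indistinguishable and lands at a tiny fraction of the uncompressed size; real pipelines usually bolt the same re-encode onto the build step or let the CDN negotiate WebP/AVIF per request.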

Yes, but it might be a little too advanced for Meta ;)

But they have personal superintelligence?

Someday our robot overlords will be intelligent enough to ... optimize images!

(But today is not that day.)

And it doesn't even look high-res.

complaining about sand on the beach

It's not sand on the beach, it's garbage on the beach.

I am simply offended. By Meta's lack of sensibilities (or ability) towards use of images on the Web while touting their new flavour of artificial intelligence as a product.

old man shouts at cloud

more like old man shouts at someone else's computer

This really reinforces the idea that the AI race and the Railroad Mania of the 19th century are very similar.

So many different companies are going to have similarly powerful AI that there will be no moat around it, and it will be cheap. They will never earn their investment back.

I suspect this is the real reason behind Anthropic limiting subscriptions to their own products and keeping API prices several times higher than comparable models. Applications are stickier than API users, and less technical users are stickier than programmers (i.e., CoWork is stickier than Code).

The moat is in the compute and the energy access.

And further down the line in chips, which is why Elon is building a fab now.

There are plenty of capable models on HuggingFace, yet I have no way of running them.

Give it a few years, or months. Tiny models are getting outrageously good.

I wonder if this is why the tech cartel is buying up all the hardware?

If the average user gets convinced they could run LLMs for cheap at home, you cannot trap users in your walled garden anymore.

Exactly. We’ll see the cost of AI continue to drop.

I'd been saying this for years about Tesla's FSD - they finally had to give in and drop the price to stay competitive.

That fab will never be delivered. In five years you might see the manufacturing equivalent of a person dancing in spandex.

> which is why Elon is building a fab now

At least he says he's doing that. It doesn't really make sense, since you're not going to achieve an advanced node from a standing start in a practical time frame or at a practical cost.

Sounds like more Musk flavored vapor.

> It doesn't really make sense since you're not going to achieve an advanced node from a standing start in a practical time frame and cost.

They already announced a partnership with Intel.

"Muse Spark is available now, and Contemplating mode will be rolling out gradually in meta.ai."

How does one get their hands on these models? They are not open-source, right? I go to meta.ai, but it's just a chat interface: no equivalent to Codex or Claude Code? Can you use this through OpenCode? Is Meta charging for model access, or is the gathering of chat data a sufficiently large tithe?

"It will be available in private preview via API to select partners, and we hope to open-source future versions of the model."

from Facebook Newsroom: https://about.fb.com/news/2026/04/introducing-muse-spark-met...

I can't think of any "select partners" that would want to use this non-SOTA model. Just put it on OpenRouter.

If Microsoft is a select partner, maybe they could shove it into Copilot for VS or something. But yeah, I'm wondering the same; maybe Apple could be one of their partners too?

TBD, it seems. So far the only explained usage pattern is through a Meta product (WhatsApp, Facebook, Instagram).

So to verify their claims and see how strong these models are, the answer is "believe us"?

Note: I'm expressing some skepticism here largely due to how recent rollouts from Meta flopped. Sincerely hoping that they do better this time around!

I assume the answer is to try it out in chat mode? You could run your usual benches through that, right?

I appreciate that they build this stuff for their own benefit, but I don't want to feed it even more of my private info. Hopefully the models will become public or lead to equivalent models from other sources.

Sarcasm aside, I tried it (with instant mode), and it's an impressive model.

It nailed all the ChatGPT meme gotchas (walk to the carwash, Alice 50 brothers, upside down cup, R's in strawberry, which number is bigger, 9.11 or 9.9?)

I guess all that money poaching OpenAI / Anthropic talent went somewhere...

Now, would I use "Meta Muse Code" or "Muse CoWork" if I have to get a Facebook account for all of my developers? Maybe not.

Would I use it via an API key? I might, depends on the pricing!

so since they hard-coded all of the meme gotchas, they built a good model?

The second paragraph starts "Muse Spark is the first step on our scaling ladder and the first product of a ground-up overhaul of our AI efforts. To support further scaling, we are making strategic investments..."

This article is about Meta, not about the user. Who signs off on these? Is the intended audience other people at Meta, not the user?

The article is published primarily to signal to the market that Meta is serious in its efforts to compete in building frontier AI models.

They want to 1) attract talent, 2) tell wall street they can play in this space as well, 3) help employees feel the company is moving in the right direction.

A frontier LLM doesn't apply to their core consumer products.

the blog is the product. investor deck posted as a tech launch

Stock up 9% today, very pleasant for Zuck if you do the math on his net worth :)

I mean, kinda? It's not like Zuck is selling his stock tomorrow, so daily fluctuations in stock price don't really affect him.

It is unfortunate that they decided to stop doing open-weight releases.

What could have been interesting has been reduced to simply another subpar LLM release.

Genuine question: Why release this the day after Mythos? It does not appear SOTA (just based on benchmarks). OpenAI will likely release Spud tomorrow.

That's a really good question. My sarcastic mind thinks that Anthropic rushed the Mythos announcement out of fear of Meta stealing their thunder... (I guess someone leaked it; a LOT of Anthropic folks are ex-Meta... so, you know.)

Just speculation; I have no real knowledge about it.

Looks like a lightweight article, but memory usage went from 316 MB -> 502 MB when I hit refresh. Not sure why. Anyone have any ideas? Why does it need half a gig of RAM in the first place?

"we hope to open-source future versions of the model."

Love to see it. Cheers!

Saying nothing about the actual performance of this model, it does strike me how... minimal(?) this announcement is. Their safety section is like 2 paragraphs about bioweapons. Go look at the reports for OpenAI and Anthropic's model releases: it's like 50+ pages of tests, examples, reports, and benchmarks across a bunch of safety and welfare metrics.

If Meta wants to be seen as a cutting-edge massive lab, they need to come across as one instead of looking like a school-project version of a frontier model.

Rumor on the ground is that they expected a much stronger model than this one.

llama4 behemoth problems?

Can you elaborate?

I'm struck by all these independent announcements saying "look at our new model that we only spent $N Billion in acquisitions and hardware time to build and operate that's just like those other ones but this one is ours." Because if any of these companies would simply pool resources and work together, and if the government actively participated in providing funds, they'd be able to accelerate AI so much faster. It all feels incredibly wasteful. But I guess that's communism or something.

Competition often fosters innovation. Why are they innovating so fast and spending so much money? Because they don't wanna fall behind. If there was no competition at all, there would be much less reason to innovate and spend resources.

Sad to see it's not going to be open source.

Personal Superintelligence made me think this was an open-source model being released and I was excited. Then I continued reading and I'll just wait until the model comes out.

NOTHING about this is personal! No weights were released!

Associated Meta news post with consumer-friendly takes: https://about.fb.com/news/2026/04/introducing-muse-spark-met...

Meta.ai has Muse Spark.
