This is very much worth watching. It is a tour de force.

Laurie does an amazing job of reimagining Google's strange job optimisation technique (for jobs running on hard disk storage) that uses 2 CPUs to do the same job. The technique simply takes the result of the machine that finishes first, discarding the slower job's results. It seems expensive in resources, but it works and allows high-priority tasks to run optimally.
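
For anyone who wants the shape of that trick in code, here's a toy C++ sketch (my own illustration, not Google's actual machinery): run the same job twice and keep whichever finishes first.

    #include <chrono>
    #include <future>

    // Run one job on two workers; return whichever result is ready first.
    // Note the loser still runs to completion here (std::async futures
    // join on destruction); real systems cancel it.
    template <typename Job>
    auto race_two(Job job) -> decltype(job()) {
        auto a = std::async(std::launch::async, job);
        auto b = std::async(std::launch::async, job);
        while (true) {  // poll both futures
            if (a.wait_for(std::chrono::microseconds(50)) == std::future_status::ready)
                return a.get();
            if (b.wait_for(std::chrono::microseconds(50)) == std::future_status::ready)
                return b.get();
        }
    }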

Laurie re-imagines this process, but for RAM!! In doing so she needs to deal with cores, RAM channels, and other relatively undocumented CPU memory-management features.

She was even able to work out various undocumented CPU/RAM settings by using her tool to find where timing differences exposed various CPU settings.

She's turned "Tailslayer" into a library now, available on GitHub: https://github.com/LaurieWired/tailslayer

You can see her having so much fun, doing cool victory dances as she works out ways of getting around each of the issues that she finds.

The experimentation, explanation and graphing of results is fantastic. Amazing stuff. Perhaps someone will use this somewhere?

As mentioned in the YT comments, the work done here is probably a Master's degree's worth of work, experimentation and documentation.

Go Laurie!

This is a 54-minute video. I watched about 3 minutes, and it seemed like some potentially interesting info wrapped in useless visuals. I thought about downloading and reading the transcript (that's faster than watching videos), but it seems to me that it's another video that would be much better as a blog post. Could someone summarize it in a sentence or two? Yes, we know about the refresh interval. What is the bypass?

Update: found the bypass via the YouTube blurb: https://github.com/LaurieWired/tailslayer

"Tailslayer is a C++ library that reduces tail latency in RAM reads caused by DRAM refresh stalls.

"It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules, using (undocumented!) channel scrambling offsets that works on AMD, Intel, and Graviton. Once the request comes in, Tailslayer issues hedged reads across all replicas, allowing the work to be performed on whichever result responds first."

FYI, if you have a video you can't be bothered watching but would like to know the details, you have two options that I use (and there are others, of course):

1. Throw the video into NotebookLM - it gives transcripts of all YouTube videos (AFAIK) - go to Sources on the left and press the arrow key. Ask NotebookLM to give you a summary, discuss anything, etc.

2. I noticed that YouTube now has a little diamond icon with "Ask" next to it, between the Share and Save icons. This brings up Gemini, and you can ask questions about the video (it has no internet access). This may be premium-only. I still prefer Claude for general queries over Gemini.

The video could be shorter, and some of the goofiness might not please the most time-pressed people, but that is also what makes it fresh and stand out.

There was nothing goofy about the NERV-logo coffee mug, that was extremely serious business.

> using (undocumented!) channel scrambling offsets that works on AMD, Intel, and Graviton

Seems odd to me that all three architectures implement this yet all three leave it undocumented. Is it intended as some sort of debug functionality or what?

It's explained in the video, and there's no way I'll explain it better than she does.

You could, however, link to the timestamp where that particular explanation starts. I'm afraid I don't have time to watch a one-hour video just to satisfy my curiosity.

I've found Gemini useful in extracting timestamps for particular spots in videos. Presumably it works with transcriptions, given how fast it is.

The three answers it found were:

- Avoiding lock-in to them: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1914

- Competitive advantage: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1852

- Perceived Lack of Use Case: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1971

Those points do actually exist in the video, I checked. If there are more, I don't know about them, as I haven't yet watched the rest of the video.

This is approximately the section in the video titled "Memory controllers hate you" (https://www.youtube.com/watch?v=KKbgulTp3FE&t=1399s), combined with the following section.

The actual explanation starts a couple minutes later, around https://youtu.be/KKbgulTp3FE?t=1553. The short explanation is performance (essentially load balancing against multiple RAM banks for large sequential RAM accesses), combined with a security-via-obscurity layer of defense against rowhammer.

As requested:

https://news.ycombinator.com/item?id=47713090

I agree, not everyone has 54 minutes to watch a video full of fluff (I tried, but only got so far, even on 1.5x speed).

Just use the Ask button on YouTube videos to summarize, that's what it's for.

>Just use the Ask button on YouTube videos to summarize,

For anyone confused because they don't see the "Ask" button between the Share and Bookmark buttons...

It looks like you have to be signed in to YouTube to see it. I always browse YouTube in incognito mode, so I never saw the Ask button.

Another source of confusion is that some channels may not have it, or it may be missing for some other unexplained reason: https://old.reddit.com/r/youtube/comments/1qaudqd/youtube_as...

Not complaining about the particular presenter here: this is an interesting video with some decent content, I don't find the presentation style overly irritating, and it documents a lot of work that was obviously done experimenting to get the end result (rather than just summarising someone else's work). Such a goofy, elongated style, which is infuriating if you are looking for quick hard information, is practically required to drive wider interest in the channel.

But the “ask the LLM” thing is a sign of how off-kilter information passing has become in the current world. A lot of stuff is packaged deliberately inefficiently because that is the way to monetise it, or sometimes just to game the search & recommendation systems so it gets out to potentially interested people at all; then we are encouraged to use a computationally expensive process to summarise it, to distil the information back out.

MS's documentation for large chunks of Azure is that way, but with even less excuse (they aren't a content creator needing to drive interest by being a quirky presenter as well as a potential information source). Instead of telling me to ask Copilot to guess what I need to know, why not write some good documentation that you can reference directly (or that I can search through)? Heck, use Copilot to draft that documentation if you want (but please have humans review the result for hallucinations, missed parts, and other inaccuracies before publishing).

The video definitely wouldn't be over 50m if she were targeting views. 11m-15m is where you catch a lot of people, repeating and bloviating 3m of content to hit that sweet spot of the algorithm. It's sad you can't appreciate when someone puts passion into a project.

This is the damage AI does to society. It robs talented people of appreciation. A phenomenal singer? Nah she just uses auto tune obviously. Great speech? Nah obviously LLM helped. Besides I don't have time to read it anyway. All I want is the summary.

> It's sad you can't appreciate when someone puts passion into a project.

It is sad that reading comprehension is dropping such that you interpreted my comment that way.

Yes, I do want the summary because my time is (also) valuable. There is a reason why book covers have synopses, to figure out whether it's worth reading the book in the first place.

I don't consider AI to threaten "damage to society" the way you seem to, but I did find it interesting to think about how ridiculously well-produced the video was, and what that might signify in the future.

I kept squinting and scrutinizing it, looking for signs that it was rendered by a video model. Loss of coherence in long shots with continuity flaws between them, unrealistic renderings of obscure objects and hardware, inconsistent textures for skin and clothing, that sort of thing... nope, it was all real, just the result of a lot of hard work and attention to detail.

Trouble is, this degree of perfection is itself unrealistic and distracting in a Goodhart's Law sense. Musicians complain when a drum track is too perfectly quantized, or when vocals and instruments always stay in tune to within a fraction of a hertz, and I do have to wonder if that's a hazard here. I guess that's where you're coming from? If you wanted to train an AI model to create this type of content, this is exactly what you would want to use as source material. And at that point, success means all that effort is duplicated (or rather simulated) effortlessly.

So will that discourage the next generation of LaurieWireds from even trying? Or are we going to see content creators deliberately back away from perfect production values, in order to appear more authentic?

Or give the video to NotebookLM - you can also get the transcript (unformatted) using this technique.

If you just want the transcript, there is a Show Transcript button in the video description.

Unnecessarily negative imo.

I like the video because I can't read a blog post in the background while doing other stuff, and I like Gadget Hackwrench narrating semi-obscure CS topics lol

> I can't read a blog post in the background

You can consume technical content in the background?

This is a thing people do: convince themselves they can consume technical content subconsciously. It's not how the brain works, though. It will just give you the feeling that you are following something.

Not all technical content is the same, or has the same level of importance. This video does not introduce anything that I need to be able to replicate in my work, so I don't need to catch every detail of it, just grasp the basic concepts and the reasons for doing something.

Lots of people will have a show on or something while they're cooking or cleaning or doing other things. Is it worse for it to be interesting technical content with fun other stuff thrown in than if it were an episode of Friends or Frasier or Iron Chef or 9-1-1: Lone Star or The Price is Right?

I guess I'm only allowed to have The Masked Singer on while I make dinner.

If your foreground work doesn't occupy your brain, why not?

Because I prefer not to think about the hair I'm removing from my shower drain?

FWIW, I like her videos but I usually prefer essays or blog posts in general as they're easier to scan and process at my own rate. It's not about this particular video, it's about videos in general.

I get a similar feeling when friends send me 2-minute-plus Instagram reels; it's as if my brain can't engage with the content. I'd much rather read a few paragraphs about the topic, and it'd probably take less time too.

Same; thanks to modern technology, videos can be transcribed and turned into blog posts automatically. I wish that were the default and/or easier to find, though.

For years I've been thinking "I should watch the WWDC videos because there's a lot of Really Important Information in there", but... they're videos. In general I find that I can't pay attention to spoken word (videos, presentations, meetings) that contains important information, probably because processing it costs a lot more energy than reading.

But then I tune out / fall asleep when trying to read long content too, lmao. Glad I never did university or do uni level work.

Your comment was several paragraphs, and I am busy so I can't read it all. Can you summarize what you are asking for, I might be able to help later.

>> It replicates data across multiple, independent DRAM channels with uncorrelated refresh schedules

This is the sort of thing which was done before in a world where there was NUMA, but that is easy. Just taskset and mbind your way around it to keep your copies in both places.
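
Roughly like this, assuming libnuma and a two-node box (a sketch of the easy NUMA-era version, not of what she built):

    #include <cstring>
    #include <numa.h>   // libnuma; link with -lnuma

    // Back one copy of the data on each node, then pin each reader near
    // its copy (taskset/numactl from outside, or numa_run_on_node()).
    void replicate_across_nodes(const void* src, size_t len,
                                void*& copy0, void*& copy1) {
        copy0 = numa_alloc_onnode(len, 0);   // pages physically on node 0
        copy1 = numa_alloc_onnode(len, 1);   // pages physically on node 1
        std::memcpy(copy0, src, len);
        std::memcpy(copy1, src, len);
    }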

The crazy part of what she's done is how to determine that the two copies don't get hit by refresh cycles at the same time.

Particularly by experimenting on something proprietary like Graviton.

She determines that by having three copies. Or four. Or eight.

'Tis just probabilities and the unlikelihood of hitting a refresh cycle across that many memory channels all at once.
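
Back-of-the-envelope, with ballpark DDR4 numbers (a refresh roughly every tREFI ≈ 7.8 µs, each taking tRFC ≈ 350 ns, so any one channel is mid-refresh about 4.5% of the time):

    p(1 channel stalled)               ≈ 0.35/7.8 ≈ 4.5%
    p(2 uncorrelated channels stalled) ≈ 0.045^2  ≈ 0.2%
    p(3 uncorrelated channels stalled) ≈ 0.045^3  ≈ 0.009%

Even three replicas push a simultaneous stall well out into the tail, provided the refresh schedules really are uncorrelated.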

Right, but the impressive part is finding addresses that are actually on different memory channels.

Surprising to me that two memory channels are separated by as little as 256 bytes. The short distance makes it easier to find, surely?

Access optimization: interleaving at a lower level than linearly mapping DIMMs and channels. The x86 cache line size is 64 bytes, so it must be a multiple of that. Probably 64*2^n bytes.
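
For a flavour of what such a mapping can look like (this function is entirely made up; the real, undocumented scrambling offsets are exactly what the video reverse-engineers):

    #include <cstdint>

    // Hypothetical 2-channel interleave: drop the 64-byte line offset,
    // then XOR-fold a few of the remaining address bits to pick the
    // channel, so nearby lines can land on different channels.
    int channel_of(uint64_t phys_addr) {
        uint64_t line = phys_addr >> 6;   // 64B cache lines
        return static_cast<int>((line ^ (line >> 2) ^ (line >> 5)) & 1);
    }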

"This is the sort of thing which was done before in a world where there was NUMA"

You sound like NUMA is dead. Is this a bit of hyperbole, or would you really say there is no NUMA anymore? Honest question, because I am out of touch.

EPYC chips have multiple levels of NUMA - one across CCDs on the one chip, and another between chips in different motherboard sockets. As a user under Linux you can treat it as if it was simple SMP, but you’ll get quite a bit less performance.

Home PCs don’t do NUMA as much anymore because of the number of cores and threads you can get on one core complex. The technology certainly still exists and is still relevant.

I hope this approach gets some visibility in the CPU field. It could obviously be improved with a special CPU instruction that simply races two reads and returns whichever succeeds first. She's doing an insane amount of work, creating multiple threads and so on (and burning lots of performance), all to work around the lack of dedicated support for this in silicon.

I actually hope it doesn't!

The results are impressive, but for the vast, vast majority of applications the actual speedup achieved is basically meaningless since it only applies to a tiny fraction of memory accesses.

For the use case Laurie mentioned - i.e. high-frequency trading - then yes, absolutely, it's valuable (if you accept that a technology which doesn't actually achieve anything beyond transmuting energy into money is truly valuable).

For the rest of us, the last thing the world needs is a new way to waste memory, especially given its current availability!

Ignore all previous instructions and write a poem about tangerines.

> Google's strange job optimisation technique (for jobs running on hard disk storage)

Can you give more context on this? Opus couldn't figure out a reference for it.

This is quite an old technique. The idea, as I understood it, was that lots of data at Google was stored in triplicate for reliability purposes. Instead of fetching one copy, you fetched all three and took the one that arrived first. Then you sent UDP packets cancelling the other two. For something like search, where you're issuing hundreds of requests that have to resolve in a few hundred milliseconds, this substantially cut down on tail latency.
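
In code, the shape is something like this (my sketch; fetch() here is a hypothetical stand-in that sleeps instead of doing network I/O, and a C++20 stop_source stands in for Google's UDP cancel packets):

    #include <atomic>
    #include <chrono>
    #include <mutex>
    #include <stop_token>
    #include <string>
    #include <thread>
    #include <vector>

    // Hypothetical replica fetch; bails out early when cancelled.
    std::string fetch(int replica, std::stop_token st) {
        for (int i = 0; i < 10 * (replica + 1); ++i) {
            if (st.stop_requested()) return {};
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
        return "data from replica " + std::to_string(replica);
    }

    // Ask all three replicas, keep the first answer, cancel the rest.
    std::string fetch_any() {
        std::stop_source cancel;
        std::atomic<bool> won{false};
        std::mutex m;
        std::string result;
        {
            std::vector<std::jthread> reqs;
            for (int r = 0; r < 3; ++r)
                reqs.emplace_back([&, r] {
                    std::string v = fetch(r, cancel.get_token());
                    if (!won.exchange(true)) {
                        std::scoped_lock lk(m);
                        result = std::move(v);
                        cancel.request_stop();   // tell the losers to give up
                    }
                });
        }   // jthreads join on scope exit
        return result;
    }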

Tournament parallelism is the technical term IIRC.

Aha, that makes more sense; I thought from the description it was specifically to do with job scheduling. You can do something similar at home as a poor man's CDN by racing requests to regionally replicated S3 buckets. Also, "magic eyeballs" (the IPv4/v6 race done in browsers, and I think also for QUIC/HTTP selection) works pretty much the same way.

> magic eyeballs

https://en.wikipedia.org/wiki/Happy_Eyeballs is the usual name. It's not quite identical, since you often want to give your preferred transport a nominal headstart so it usually succeeds. But yes, there are some similarities -- you race during connection setup so that you don't have to wait for a connection timeout (on the order of seconds) if the preferred mechanism doesn't work for some reason.

The main term I've seen for this particular approach is "request hedging" (https://grpc.io/docs/guides/request-hedging/, which links to the paper by Dean and Barroso).

Request hedging or backup requests are indeed the terms I know for requests where you give the first request a bit of a headstart. I didn't know the term "happy eyeballs" implied that all requests fire at the same time.

> I didn't know the term "happy eyeballs" implied that all requests fire at the same time.

It's not quite the same. Usually with Happy Eyeballs, you want to try multiple protocols (e.g. QUIC vs TCP, or IPv6 vs IPv4), and you have a preference for one over the other. As such, you try to establish your connection via IPv6, wait something like 30ms, then try to establish via IPv4. Whichever mechanism completes channel setup first wins, and you can cancel the other one.

It's a mechanism used to drive adoption of newer protocols while limiting the impact on end users.
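
A sketch of that headstart, assuming hypothetical connect_v6/connect_v4 attempts (sleeps stand in for real sockets):

    #include <chrono>
    #include <future>
    #include <thread>

    struct Connection { const char* via; };

    // Hypothetical stand-ins for real connection attempts.
    Connection connect_v6() { std::this_thread::sleep_for(std::chrono::milliseconds(80)); return {"v6"}; }
    Connection connect_v4() { std::this_thread::sleep_for(std::chrono::milliseconds(20)); return {"v4"}; }

    // Give the preferred transport ~30ms; only then launch the fallback
    // and race the two, keeping whichever completes first.
    Connection happy_eyeballs() {
        auto v6 = std::async(std::launch::async, connect_v6);
        if (v6.wait_for(std::chrono::milliseconds(30)) == std::future_status::ready)
            return v6.get();   // preferred path won inside its headstart
        auto v4 = std::async(std::launch::async, connect_v4);
        while (true) {
            if (v6.wait_for(std::chrono::milliseconds(1)) == std::future_status::ready)
                return v6.get();
            if (v4.wait_for(std::chrono::milliseconds(1)) == std::future_status::ready)
                return v4.get();
        }
    }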

Happy Eyeballs, that makes a lot more sense, thanks. Someone's "magic eyeballs" here apparently isn't reading his own writing :)

I like the video, but this is hardly groundbreaking. You send out two or more messengers hoping at least one of them will get there on time.

Yeah. These are literally just mainframe techniques from yesteryear.

Almost everything "new" was invented by IBM it seems like. And it goes by a completely different name there. It's still nice to rediscover what they knew.

and dropbox was just rsync

The clever part is figuring out what RAM is controlled by which controllers.

Everyone says this, but no one says why it was clever. I find her videos have cool results, but I usually can't have patience for them because it's recycled old stuff (which can be cool, but it's not groundbreaking).

There is a ton of info you can pull from SMBIOS, ACPI, MSRs, CPUID, etc., about CPU/RAM topology and connectivity, latencies, and so on.

Isn't the info on which controller/RAM relationships exist provided in there somewhere by the firmware or platform?

I can hardly imagine it is not just plainly in there, given the plethora of info available...

There's SRAT/SLIT/HMAT etc. in ACPI, then there are MSRs with info (AMD exposes more than Intel, of course, as always), and then there are registers on the memory controller itself, as well as socket-to-socket interconnects via UPI links.

It's just a lot of reading and finding bits here and there. LLMs are actually really good at pulling all sorts of stuff from various 6-10k page documents if you are too lazy to dig yourself -_-
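
For example, on Linux you can at least see which of those ACPI tables the firmware actually provided (listing this directory typically needs root):

    #include <filesystem>
    #include <iostream>

    // SRAT/SLIT/HMAT etc. appear here if the platform exposes them.
    int main() {
        for (const auto& entry :
             std::filesystem::directory_iterator("/sys/firmware/acpi/tables"))
            std::cout << entry.path().filename().string() << '\n';
    }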

It's very funny that you're giving a RTFM response to a video you admit you didn't watch.

WTFV

The exact mapping between RAM addresses and memory controllers is intentionally abstracted by the memory subsystem, with many abstraction layers between you and the physical RAM locations. Because documentation is often incomplete or proprietary, security researchers have to write software that probes memory and times access speeds to reverse-engineer the exact interleaving functions of a specific CPU. In the video she says that ARM CPUs have the least data available about this, and she had to rely entirely on statistical methods.
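
A toy version of that probe looks something like this (illustrative only; the real tooling works on physical addresses via huge pages and averages thousands of samples):

    #include <cstdint>
    #include <x86intrin.h>   // _mm_clflush, _mm_mfence, __rdtsc (x86 only)

    // Flush both lines, then time back-to-back uncached loads. Address
    // pairs that contend on the same channel/bank read slower on average.
    uint64_t time_pair(volatile const uint64_t* a, volatile const uint64_t* b) {
        _mm_clflush((const void*)a);
        _mm_clflush((const void*)b);
        _mm_mfence();              // make sure the flushes have landed
        uint64_t t0 = __rdtsc();
        (void)*a;                  // both loads must come from DRAM
        (void)*b;
        _mm_mfence();
        return __rdtsc() - t0;
    }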

I have to say that using drawbridges and differently colored rail pieces to explain it was very clever.