The claim that Google secretly wants YouTube downloaders to work doesn't hold up. Their focus is on delivering videos across a vast range of devices without breaking playback (and even that is blurring [0]), not enabling downloads.

If you dive into the yt-dlp source code, you see the insane complexity of the calculations needed to download a video. There is code to handle nsig checks, internal YouTube API quirks, and constant obfuscation that makes it a nightmare (and the maintainers heroes) to keep up. Google frequently rejects download attempts, blocks certain devices or access methods, and breaks techniques that yt-dlp relies on.
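
None of that machinery requires private access to see; a hedged sketch using the ordinary yt-dlp CLI (the URL is just a placeholder for any watch page) already hints at how much work hides behind one download:

    # yt-dlp CLI sketch; any watch-page URL works here
    VIDEO_URL="https://www.youtube.com/watch?v=dQw4w9WgXcQ"
    yt-dlp -U                # self-update first; extractor breakage is routine
    yt-dlp -F "$VIDEO_URL"   # list the formats YouTube will actually serve right now
    yt-dlp -v -f "bv*+ba" "$VIDEO_URL"
    # -v prints the extractor's debug output, where the player-JS fetching and
    # signature/nsig handling behind a "simple" download become visible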

Half the battle is working around attempts by Google to make ads unblockable, and the other half is working around their attempts to shut down downloaders. The idea of a "gray market ecosystem" they tacitly approve ignores how aggressively they tweak their systems to make downloading as unreliable as possible. If Google wanted downloaders to thrive, they wouldn't make developers jump through these hoops. Just look at the yt-dlp issue tracker overflowing with reports of broken functionality. There are no secret nods, handshakes, or winks; as Google cares less and less about compatibility, the doors will close. For example, there is already a secret header used for authenticating that you are using the Google version of the Chrome browser [1] [2], and it will probably be expanded.

[0] Ask HN: Does anyone else notice YouTube causing 100% CPU usage and stuttering? https://news.ycombinator.com/item?id=45301499

[1] Chrome's hidden X-Browser-Validation header reverse engineered https://news.ycombinator.com/item?id=44527739

[2] https://github.com/dsekz/chrome-x-browser-validation-header

> If you dive into the yt-dlp source code, you see the insane complexity of the calculations needed to download a video. There is code to handle nsig checks, internal YouTube API quirks, and constant obfuscation that makes it a nightmare (and the maintainers heroes) to keep up. Google frequently rejects download attempts, blocks certain devices or access methods, and breaks techniques that yt-dlp relies on.

This just made me incredibly grateful for the people who do this kind of work. I have no idea who writes all the uBlock Origin filters either, but blessed be the angels, long may their stay in heaven be.

I'm pretty confident I could figure it out eventually, but let's be honest: the chance that I'd ever actually invest that much time and energy approximates zero closely enough that we can just call it flat nil.

Maybe Santa Claus needs to make some donations tonight. ho ho ho

As the web devolves further, the only viable long-term solution will be allow lists instead of block lists. There is so much hostility online—from websites that want to track you and monetize your data and attention, to SEO scams and generated content, to an ever-increasing army of bots—that it's becoming infeasible to maintain rules to filter all of it out. It's much easier to write rules for traffic you approve of, although they will have to be more personal than block lists.

This is more or less what I already do with uBlock/uMatrix. By default, I filter out ALL third-party content on every website and manually allow CDNs and other legitimate third-party domains. I still use DNS blacklists, however, so that mobile devices, where this can't easily be done, get some protection against the most common offenders (Google Analytics, Facebook Pixel, etc.)
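
For anyone curious what that looks like in practice, a minimal sketch in uBlock Origin's dynamic-filtering rule syntax (the "My rules" pane); example.org and cdn.example.net are placeholder domains:

    * * 3p block
    example.org cdn.example.net * noop

The first rule blocks all third-party requests everywhere (uBlock's "hard mode"); the second relaxes that so example.org may load from the CDN you've decided to trust.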

I’m not sure why everyone keeps repeating this. The fight is lost. Your data is being collected by the websites you visit and handed to Facebook via a proxy container. You will never see a different domain; it’s invisible to the end user.

Care to elaborate on the mechanisms at play? If what you claim is true, all websites would already serve ads from their own domain. The main issue I can see with this approach is that there would be an obvious incentive for webmasters to vastly overstate ad impressions to generate revenue.

Look up Facebook Conversions API Gateway

As far as I understand, the objective is completely different. Ads are shown on platforms owned by Meta, and the Conversions API runs on the merchant's website (server-side), and reports interactions such as purchases back to Facebook.

This is quite different from websites monetizing traffic through ads and trackers placed on their own webpages. Those can still be reliably blocked by preventing websites from loading third-party content.

I also don't buy this argument about YouTube depending on downloaders:

> They perform a valuable role: If it were impossible to download YouTube videos, many organizations would abandon hosting their videos on YouTube for a platform that offered more user flexibility. Or they’d need to host a separate download link and put it in their YouTube descriptions. But organizations don’t need to jump through hoops -- they just let people use YouTube downloaders.

No, organizations simply use YouTube because it's free, extremely convenient, and has been stable enough over the past couple of decades to depend on, and because the organization does not have the resources to set up an alternative.

Also, I'm guessing such organizations represent a vanishingly small segment of YouTube's uploaders.

I don't think people appreciate how much YouTube has created a market. "Youtuber" is a valid (if often derided) job these days, where creators can earn a living wage and maintain whole media companies. Preserving that monetization portal is key to YouTube and its content creators.

> and the organization does not have the resources to setup an alternative.

Can confirm: at least one tech news website argued this point and tore down their own video hosting servers in favor of using Youtube links/embeds. Old videos on tweakers.net are simply not accessible anymore; that content is gone now.

This was well after HTML5 was widely supported. As a website owner myself, I don't understand what's so hard now that we can write one HTML tag and have an embedded video on the page. They made it sound like they'd need to employ an expensive developer to continuously work on improving this and fixing bugs, whereas from my POV you're pretty much there with running ffmpeg at a few quality settings upon upload (there are maybe 3 articles with a video per day, so any old server can handle this) and having a quality selector below the video. I can't imagine what about this would have changed in the past decade in a way that requires extra development work. At most you re-evaluate every 5 years which quality levels ffmpeg should generate and change an integer in a config file...
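
A minimal sketch of that upload-time encode step (the heights, CRF value, and codec choices here are illustrative assumptions, not a tuned ladder):

    # encode three H.264/AAC renditions of one uploaded source file
    for h in 1080 720 480; do
      ffmpeg -i source.mp4 -vf "scale=-2:$h" \
        -c:v libx264 -crf 23 -preset medium \
        -c:a aac -b:a 128k "video-${h}p.mp4"
    done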

Alas, little as I understand it, this tiny amount of extra effort, even when the development and setup work is already in the past (!), is apparently indeed a driving force in centralizing to Youtube for for-profits.

> As a website owner myself, I don't understand what's so hard now that we can write 1 HTML tag and have an embedded video on the page.

You acknowledge that it's not that simple:

> running ffmpeg at a few quality settings upon uploading (there are maybe 3 articles with a video per day, so any old server can handle this)

Can any old server really handle that? And can it handle the resulting storage of not only the highest-quality copy but also all the other copies added on top? My $5 Linode ("any old server") does not have the storage space for that. You can switch your argument to "storage is cheap these days," but now you're telling people to upgrade their servers and not actually claiming it's a one-click process anymore.

I use Vimeo as a CDN and pay $240 per year for it ($20/month, 4x more than I spend on the Linode that hosts a dozen different websites). If Vimeo were to shut down tomorrow, I'd be pretty out of luck finding anyone offering pricing even close to that-- for example, ScaleEngine charges a minimum of $25 per month and doesn't even include storage and bandwidth in their account fee. Dailymotion Pro offers a similar service to Vimeo these days, but their $9/month plan wouldn't have enough storage for my catalog, and their next cheapest price is $84/month. If you actually go to build out your own solution with professional hosting, it's not gonna be a whole lot cheaper.

Obviously, large corporations can probably afford to do their own hosting-- and if push came to shove, many of them probably would, or would find one of those more expensive partner options. But again, you're no longer arguing "it's just an HTML tag." You're now arguing they should spend hundreds or thousands per year on something that may be incidental to their business.

Here's me hosting a bunch of different bitrates of a high quality video, which I encoded on a 2016 laptop. http://lelandbatey.com/projects/REDLINE-intro/

The server is $30/month hosted by OVH, which comes with 2TB of storage. The throughput on the dedicated server is 1gbps. Unlimited transfer is included (and I've gone through many dozens of TB of traffic in a month).

People paying for managed services have no concept of bandwidth costs, so they probably think what you just described is impossible.

Bandwidth these days can be less than .25/m at a 100g commit in US/EU, and OVH is pushing dozens of tb/s.

Big ups on keeping independent.

No lol, nobody is reading the numbers. Vimeo is $20/mo. Vimeo + $5 Linode server = $25/mo, cheaper than the $30/mo OVH server. The quoted ScaleEngine is $25/mo, which ($25 + $5 = $30) is the same as the OVH server.

Y'all just have two different budgets. For one person $30/mo is reasonable; for the other it's expensive.

But the core claim, that $5 / mo hosts a lot of non-video content but not much video content, holds.

You misread the bandwidth cost part of my comment.

A $28/mo (Australian) Vimeo subscription and the "Advanced" $91/mo plan both include the same 2TB of bandwidth per month for viewers of your videos.

If you upload a 100MB video and it gets 20000 views the whole way through, you are now in the "contact sales" category of pricing.
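
The arithmetic, as a quick sanity check:

    # 100 MB per complete view, 20,000 views, in TB (treating 1 TB as 10^6 MB)
    echo "100 * 20000 / 1000000" | bc    # -> 2, i.e. the entire 2 TB monthly quota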

This is why Youtube has a monopoly: you've been badly tricked into thinking this pricing is fair and that 2TB is in any way, shape, or form adequate.

Tbh, the $5 claim was in response to me, but I never said any VPS would have the storage capacity to host a catalogue. I said any server. Call it my self-hoster's bias, but I really did picture a hardware server with a hard drive in it, not a virtual access tier with artificial limits.

But yeah, okay, not any server variant can do this, and the cloud premium is real. You'd need to spend about 5€/month on real hard drives if you want, say, 4TB of video storage on top of your existing website (the Vimeo and Dailymotion price points mentioned suggest that their catalogue is above 1 but below 2 TB). The 5€/month estimate is based on the currently most-viewed 4TB hard drive model in the tweakers pricewatch (some 100€), a modest average longevity of 5 years, triple redundancy, and the assumption that you would otherwise be running on smaller drives anyway for your normal website and images, so there's no (significant) additional labor or electricity cost for storage (as in, you just buy a different variant of the same setup, and don't need to install additional ones).
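
Spelled out with those same numbers:

    # ~100 EUR per 4 TB drive, triple redundancy, 5-year average lifetime
    echo "scale=2; 100 * 3 / (5 * 12)" | bc    # -> 5.00, i.e. ~5 EUR/month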

~~Likely much less than .25/m if that’s mbps. The issue is you’d have no shortage of money at that scale - I run one of the two main Arch Linux package mirrors in my country and while it’s admittedly a quite niche and small distro in comparison, I’m nowhere close enough to saturate 1gbit on normal days, let alone my 10gbit link~~

It’s a trade off I suppose - you can very well host your own streaming solution, and for the same price you can get a great single node, but if you want good TTFB and nodes with close proximity to many regions you may as well pay for a managed solution as the price for multiple VPS/VM stacks up quickly when you have a low budget

Edit: I think I missed your point about bandwidth pricing lol, but the second still stands

Yeah, currently hosting LLHLS edge nodes in US + EU and caching CDN worldwide. The base cost grows if you have an audience of e.g. 2000 live viewers for a 2mbps stream = 4gbps.

Could be a lot cheaper and less need for global distribution if low latency weren't a requirement. And by low latency I mean the video stream you watch is ~2s behind reality just like Youtube, Twitch, Kick, etc. If your use case is VOD or can tolerate 10s latency streaming, single PoP works fine.

The point is that if I chose Vimeo or AWS/GCP/Azure for this, at their bandwidth pricing my (in my opinion modest) usage would be costing tens of thousands of dollars monthly, which it most certainly does not.

Managed service pricing and the perception of it needs to change, because it feels like a sham once you actually do anything yourself.

I'm on mobile, but what player did you use on your website?

Does it handle buffering?

Fwiw, the browser's built-in player does buffering. You don't need to custom-code that; you can just use <video>. The browser also exposes via JavaScript when it estimates that the download speed and buffer size are sufficient to start playback without interruption: https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaEl...

Not the person above but they're using Video.js 7.10.2 <http://videojs.com/>

Don't Cloudflare and Amazon have this now? Pretty sure CF is developing a closed-source player, but there's plenty of FOSS ones (rip the one Jellyfin uses out of it, at worst).

And there's plenty of tutorials on using ffmpeg-based tools to make the files. And yes, "oh no, I need to learn something new for my video workflow."

Just use CloudFlare R2 object storage with free bandwidth. It's specifically cleared for use as a video hoster.

Have you tried cloudflare r2?

It’s easy to set up a backend video hosting system. But unless you are running (and checking!) a strong client-side observability system, you’ll never see all the problems that people are having with it. And they won’t tell you either, they’ll just leave.

Reddit struggles to provide a video player that is up to YouTube’s par. Do you have more resources than Reddit? Better programmers?

Is there someone in the world for whom this demo https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/... does not play? Because that's what I use and am not aware of issues with it

> Reddit struggles to provide a video player that is up to YouTube’s par. Do you have more resources than Reddit? Better programmers?

It's hard to say whether MDN and I have/am better programmer(s) and resource(s) than reddit without having any examples or notions of what issues reddit has run into.

If you mean achieving feature-parity, like automatic subtitles and dubbing and control menus to select languages and format those subtitles etc., that's a whole different ball game of course. The site I was speaking of doesn't support that today either (they don't dub/sub), at best you get automatically generated Dutch subtitles from yt now, i.e. shit subtitles (worse than English autogen and we all know how well those deal with jargon and noise)

You're linking to a page with a 5 second 1MB video on it. Yes, it's easy to use the <video> element to serve a video file no larger than a picture. No, that does not mean you have a system that will allow thousands of users to watch an 11 min HD video during their subway ride that starts instantly and never pauses, stutters, or locks up.

I can't speak to Dutch websites but in the U.S., a news website will usually feel obligated to provide subtitles on their videos to avoid lawsuits under the ADA.

Oh that's interesting! The US is portrayed here as this free for all country (limited unemployment money, health services, PTO...) but then subtitles are mandatory? That's cool! I presume we don't have such a law since the news sites I frequent don't seem to offer that for most videos (not counting youtube's autogenerated attempt for the few sites that outsource video hosting to google)

As for that video being small and not receiving thousands of simultaneous views: sure, but buying sufficient bandwidth is not a "hire better programmers" problem. You don't need to beat Reddit's skills and resources to provide smoother video playback. Probably the opposite actually: smaller scale should be easier to handle than reddit scale, and they already had that all set up

It is actually pretty easy to provide video. It's hard to provide video to a lot of people.

Reddit and Youtube have just a massive number of people visiting and trying to watch video at all times. It requires an enormous amount of bandwidth to serve up that video.

Youtube goes through heroic efforts to make videos instantly available and to apply high quality compression on videos that become popular.

If you don't have a huge viewership or dynamic content then yeah, it's actually pretty easy to set up and run video sites (Infowars has managed it). Target h264 video and aac audio with a set number of resolutions and bitrates and voilà, you've got something that's pretty competitive on the cheap and can play on pretty much any device.

It's not optimal for bandwidth, for that you need to start sniffing client capabilities. However, it'll get the job done while being pretty much universally playable.

> apply high quality compression on videos that become popular

Do they put a different amount of compression effort in if the video isn't (expected to become) popular?

I don't know what the Youtube compression queue looks like.

I'd not be shocked if they do more aggressive compression for well known creators.

For nobodies (like myself) the compression is very basic. Whatever I send ends up compressed with VP9, which I believe youtube has a bunch of hardware that can do really fast.

Thing with Infowars is, they've got a lot of rich people and probably Russia paying the bills. Video hosting is still damn expensive if you are not one of the top dogs.

Yeah, organizations don't use YouTube for file access; that's just not a good way to operate a video department in a business. Also, the quality is terrible, and adding another set of re-encodes will make it even worse.

I've seen public schools and public courts use YouTube to host their videos. I don't quite follow why they would care so much about either using downloaders themselves or their users having access to downloaders that they'd switch providers. To me it is unlikely but plausible.

But even if they did - I don't see why Google would care about these organizations. I expect anyone doing this is not expecting to get any views from the algorithm, is not bringing in many views or monetizing their videos.

To steelman it though, maybe Google cares if their monopoly benefits from nobody even knowing other video sites exist.

I never understood why they don't limit download speeds to the rate at which you could possibly be watching. Yesterday I downloaded a 15-hour show in like 20 minutes. There is no way I could have downloaded that much data in a legit way through their website/player.

I'm glad I wasn't blocked or throttled, but it seems like it'd be trivial to block someone like me.

Am I missing something? It does sort of feel like they're allowing it

EDIT: Spooky skeletons.. Youtube suddenly as of today forces a "Sign in to confirm you’re not a bot" on both the website and yt-dl .. So maybe I've been fingerprinted and blacklisted somehow

You could have been a legit viewer... clicking to skip over segments of the video, presumably trying to find where you left off last time, or for some scene you remember, or the climax of the video... whatever.

Youtube does try to throttle the data speeds; when that first happened, youtube-dl stopped being useful, and everyone upgraded their python versions and started using yt-dlp instead.

If you click to skip over, even clicking every minute, you're still not grabbing the whole thing, right? Whereas downloading is grabbing every second.

That depends on the player and how it caches. Yes, Google could monitor every byte each client has downloaded, but that just seems like ultra-micromanaging, and who knows how many players it would break. Youtube seems like one of those sites that should allow people to download, or be made a public utility on IPFS or something like that.

You are actually. Watch one second, the player buffers the next minute of video, then you skip ahead 1 minute. Process repeats.

They still want the YouTube experience to be smooth: to allow users to skip small parts of videos without waiting for them to load every time, to watch multiple videos at the same time, to leave a video paused until it loads, etc., all of which limiting download speeds would hinder. I assume blocking downloads is just not worth destroying the user experience.

Also, they allow downloads for premium subs; maybe it’s more efficient not to check that status every time.

There is an official download option inside the app. If they limited download speed to watching speed, it wouldn't be useful.

The official download option doesn't download it to your filesystem as a file. It just lets you watch the video offline in the official app/website. Just tested it now.

Meaning the video file exists on your filesystem somewhere, so downloading at a higher speed than you could possibly view the video is existing functionality in the app.

If you're on iOS there's no way to access it, and I think on Android it's in protected storage as well.

This option is only available to premium users afaik

I think they are; yt-dlp just circumvents it.

As some people already said, skipping sections. Also, you can increase video speeds; I normally watch youtube at 2x speed, but I think you can go up to 5x.

Written by somebody who hasn't taken one look at the yt-dlp source code or issues. Google regularly pushes updates that "coincidentally" break downloaders. The obfuscation and the things they do to, e.g., break a download by introducing some breaking code or a dynamic calculation required only partway through the video are not normal. They are not serving a bunch of video files or chunks; you need a "client" that handles all these things to download a video. At this point, if you assert that Google doesn't secretly want to stop it, you are either extremely naive, ignorant, or a Google employee.

I think GP is agreeing with you

I have YT Premium and if Google bans yt-dlp, I will cancel my subscription. I pay them not to do that.

you show them who’s boss, premium user

Seems quite naive to think they'd be affected in any way by the tiny intersection of users that are both yt-dlp users and premium subscribers boycotting them...

I think it is not about making a change, it is putting money where your mouth is.

To buy premium to support creators.

Once yt becomes hostile the deal between me and yt is off.

If every user of yt-dlp did as I do, then it would have exactly the effect that it needs to have. If yt-dlp is used by a small minority of users, why would Google be antagonistic to it? And if it's used by a sizeable portion of users, then they would care.

> If yt-dlp is used by a small minority of users, why would Google be antagonistic to it?

The concern is likely that if they let it become too easy the small minority becomes a large majority and the ad business becomes unsustainable.

Consider the ease and adoption of newsgroups in the early 90s vs Napster/Limewire later and the effect that had on revenues.

Primarily because they contractually promised the music industry they'd do everything they can to prevent tools that allow the downloading of copyrighted music from the service.

Why don’t creators both publish to YouTube but also publish somewhere else for archival or public access reasons, to help keep content available for outside walled gardens? Is it just not important to them? Is it hosting costs? Missing out on ad revenue?

Where else should they be publishing to? And who is going to pay for this service?

Don’t forget - most “content creators” are not technical - self hosting is not an option.

And even if it were - it costs money.

I just mean some kind of public service like one of those archive sites. So they would place it into YouTube for revenue but also these other places so there’s a way to get the videos without Google being a dictatorial overlord.

For a lot of creators, YouTube is the internet

There's no incentive for them to do so. It reduces their ad revenue, while costing more money to host it. That said, if you are a creator and you do want to do it, Peertube is a good option because it uses torrent technology to reduce your hosting costs.

Youtube pays them per (ad) view, and also recommends the video to more people based on how many people click on it. So giving people another way to watch it will decrease their revenue and audience.

LTT kinda do, but they're the exception, not the norm

The argument the article is making is that if they really wanted YouTube downloaders to stop working, they'd switch to Encrypted Media Extensions. Do you think that's not plausible?

Many smart devices that have youtube functionality (TVs, refrigerators, consoles, cable boxes, etc.) have limited or no ability to support that functionality in hardware, and even if they do, it might not be exposed.

Once those devices get phased out, it is very likely they will move to Encrypted Media Extensions or something similar. I believe I saw an issue ticket on yt-dlp's repo indicating they are already experimenting with such, as certain formats are DRM protected. Look up all the stuff going on with SABR, which, if I remember right, is either related to DRM or what they may use to support DRM.

There has to be at least some benefit Google thinks it gets from youtube downloaders, because, for instance, there have been various lawsuits by the RIAA and co. going after companies that provide youtube-downloading websites, and Google has studiously avoided endorsing their legal arguments.

For example, I think feature-length films that YouTube sells (or rents) already use this encryption.

That’s why authors should pony up and pay for the encryption feature, and the rest should be free to download. YouTube could embed ads this way too.

That's a wildly imaginative fever dream you're having. There is no timeline in which content creators would pay YouTube to encrypt their video content.

Here's a thought: what if paying a fixed amount to encrypt your video would grant you a much higher commission for the ads shown?

Anything that's had an official YouTube app for the past nine years does, because it's been a hard requirement for a YouTube port that long.

It's much more likely YouTube just doesn't want to mess with its caching apparatus unless it really has to. And I think they will eventually, but the numbers just don't quite add up yet.

Using DRM would make it illegal for YouTubers to use Creative-Commons-licensed content in their videos, such as Kevin MacLeod's music or many images from Wikipedia.

When you upload a video to YouTube, you agree that you own the copyright or are otherwise able to grant YouTube a license to do whatever they want with it [0]:

> If you choose to upload Content, you must not submit to the Service any Content that does not comply with this Agreement (including the YouTube Community Guidelines) or the law. For example, the Content you submit must not include third-party intellectual property (such as copyrighted material) unless you have permission from that party or are otherwise legally entitled to do so. [...]

> By providing Content to the Service, you grant to YouTube a worldwide, non-exclusive, royalty-free, sublicensable and transferable license to use that Content (including to reproduce, distribute, prepare derivative works, display and perform it) in connection with the Service and YouTube's (and its successors' and Affiliates') business, including for the purpose of promoting and redistributing part or all of the Service.

If you include others' work with anything stronger than CC0, that's not a license you can grant. So you'll always be in trouble in principle, regardless of whether or how YouTube decides to exercise that license. In practice, I wouldn't be surprised if the copyright owner could get away with a takedown if they wanted to.

[0] https://www.youtube.com/t/terms#27dc3bf5d9

Yes, this absolutely does not shield YouTube from liability from third parties, since the copyright holder of third-party content included in the video is not a party to the agreement. That's why they have a copyright notice and takedown procedure in the first place, and also the reason for numerous lawsuits against YouTube in the past, some of which they have lost.

To date, many Creative Commons licenses do in fact amount to "permission from that party", but if they start using DRM, those licenses would cease to grant YouTube permission.

No it wouldn't.

You may not be very familiar with Creative Commons licensing. For example, CC BY-SA 4.0 would prohibit YouTube from using DRM:

> No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.

(https://creativecommons.org/licenses/by-sa/4.0/legalcode.en)

Most of the CC licenses include such language and have since the first versions.

> if they really wanted YouTube downloaders to stop working

Wrong question leads to the wrong answer.

The right one is "how much of the ad revenue would be lost if". For now it's cheaper to spend bazillions on a whack-a-mole.

I miss the system where, when I was watching a flash video in Firefox, that video was already present on my hard drive as an .flv file in /tmp, and I could just copy it somewhere.

I remember that was the case indeed.

To be fair, the article doesn't say Google "secretly wants" downloaders to work. It says they need downloaders to work, despite wanting to make them as annoying as possible to use. The argument isn't so much about Google's feelings as it is about whether the entire internet would continue making YouTube the video hosting site to use if downloaders were actually (effectively) blocked.

I don’t think companies are asking “can people download this video” but rather “can people watch this video”; downloaders seem like an afterthought or a non-issue.

You conveniently side-stepped the argument that YouTube already knows how to serve DRM-ized videos, and that this is widely deployed in its Movies & TV offering, available on the web and in other clients. They chose not to escalate to all videos, probably for multiple reasons. It's credible that one reason could be that it wants the downloaders to keep working; they wouldn't want those to suddenly gain the ability to download DRM-ized videos (software that does this exists, but it's not as well maintained and circulated).

It seems more credible to me that they would cut off a sizable portion of their viewers by forcing widevine DRM.

Or is it something different you are thinking about?

What benefits does DRM even provide for public, ad-supported content that you don't need to log in to watch?

Does DRM cryptography offer solutions against ad blocking, or downloading videos you have legitimate access to?

Sorry that I'm too lazy to research this, but I'd appreciate if you elaborate more on this.

And also, I think they're playing the long game and will be fine to put up a login wall and aggressively block scraping and also force ID. Like Instagram.

Would be glad if I'm wrong, but I don't think so. They just haven't reached a sufficient level of monopolization for this and at the same time, the number of people watching YouTube without at least being logged in is probably already dwindling.

So they're not waiting anymore to be profitable, they already are, through ads and data collection.

But they have plenty of headroom left to truly start boiling the frog, and become a closed platform.

While I do agree (mostly, I've never had a download NOT work, on the rare occasion I grab one), they haven't made it impossible to download videos, so that is a win IMO.

Your view from a distance, where you rarely download Youtube videos, is common for now, and we still live in a very fortunate time. The problems are short-lived, so over long periods they tend to average out, and you are unlikely to notice them. Even active users will rarely notice a problem, so it is understandable that, for your use case, it would seem perfect.

Looking closely, at least for yt-dlp, you would see it tries multiple methods to grab available formats, tabulates the working ones, and picks from them. Those methods are constantly being peeled away, though some are occasionally added or fixed. The net trend is clear: the ability to download is eroding. There have been moments when you might seriously consider that downloading, at least without a complicated setup (PO tokens, Widevine keys, or something else), is just going to stop working.

As time goes on, even for those rare times you want to grab a video, direct downloading may no longer work. You might have to resort to other methods, like screen recording through software or an actual camera, for as long as your devices will let you do even that.

Right!

I very rarely download YouTube videos but simply having done it a few times over the years, and even watching the text fly by in the terminal with yt-dlp, everything you’ve said is obvious.

Screen recording indeed might fail—Apple lets devs block it, so even screen recording the iPhone Mirroring app can result in an all-black recording.

How long until YouTube only plays on authorized devices with screens optimized for anti-camera recording? Silver lining, could birth a new creative industry of storytelling, like courtroom sketch artists with more Mr. Beast.

> (mostly, I've never had a download NOT work, on the rare occasion I grab one)

A lot of the reason for that is that yt-dlp explicitly makes it easy for you to update it, so I would imagine that many frontends will do so automatically - something which is becoming more necessary as time goes on, as YouTube and yt-dlp play cat and mouse with each other.

Unfortunately, lately, yt-dlp has had to disable by default the downloading of certain formats that it was formerly able to access by pretending to be the YouTube iOS client, because they were erroring too often. There are alternatives, of course, but those ones were pretty good.

A lot of what you see in yt-dlp is because of the immense amount of work that the developers put in to keep it working. Even though it now allows downloading from many more sites than it was originally developed for, they're still not going to give up YouTube support (as long as YouTube still allows DRM-free versions) without a fight.

Once YouTube moves to completely DRM'd videos, however, that may have to be when yt-dlp retires support for YouTube, because yt-dlp very deliberately does not bypass DRM. I'd imagine the name would change at that point.

> mostly, I've never had a download NOT work

Well, how about thanking the people who maintain the downloader and make it possible?

> they haven't made it impossible to download videos, so that is a win IMO.

At some point you can just fire up OBS Studio and do a screen rip, then cut the ads out manually and put it on Torrent/ED2k.

Will you still think it's a win then?

> there is already a secret header used for authenticating that you are using the Google version of Chrome browser

Google needs to be broken up already.

Especially given that YT frequently blocks yt-dlp and bans users who work around it by using the --cookies flag.

Google is not a side here. If you don't want people to download your video, do not put it on the internet.

This is starting to look like some quality llm benchmark!

And ever-updating at that!

"If you dive into the yt-dlp source code, you see the insane complexity of calculations needed to download a video. "

Indeed the complexity is insane

https://news.ycombinator.com/item?id=45256043

But what is meant by "a video"? Is this referring to the common case or an edge/corner case? Does "a" mean one particular video or all videos?

"There is code to handle nsig checks, internal YouTube API quirks, and constant obfuscation that makes it a nightmare(and the maintainers heroes) to keep up."

True, but is this code required for all YouTube videos?

The majority of YT videos are non-commercial and unpromoted, with low view counts. These are simple to download.

For example, the current yt-dlp project contains approximately 218 YT IDs. A 2024 version contained approximately 201 YT IDs. These are often for testing edge cases

The example 1,525-character shell script below outputs download URLs for almost all the YT IDs found in yt-dlp. No Python needed

By comparison, the yt-dlp project is approximately 15,679,182 characters.

The curl binary is used in the example only because it's popular, not because I use it. I use simpler, more flexible software than curl

I have been using a tiny shell script to download YT videos for over 15 years. I have been downloading videos from googlevideo.com for even longer, since before Google acquired YouTube.^1 Surprisingly (or not), when YT changes something that requires updating the script (and this has happened to me maybe 5 times or fewer in 15 years), I have generally been able to fix the shell script faster than yt-dl(p) fixes its Python program (same for NewPipe/NewPipeSB).

I prefer non-commercial videos that are not promoted. The ones with relatively low view counts. For more popular videos, I listen to the audio file first before downloading the video file. After listening to the audio, I may decide to skip the video. Also I am not overly concerned about throttling

1. The original Google Video made a distinction between commercial and non-commercial (free) videos. The latter were always easy to download, and no sign-in/log-in was required. This might be a more plausible theory for why YT has always allowed downloads of non-commercial videos.

   # custom C filters to make scripts faster, easier to write
   # yy030 filters URLs from stdin
   # yy082 filters various strings from stdin, 
   # e.g., f == print format descriptions, v == print YT IDs
   # x is a YouTube ID
   # script accepts YT ID on stdin
   
    #!/bin/sh
   read x;
   y=https://www.youtube.com/youtubei/v1/player?prettyPrint=false 
   curl -K/dev/stdin $y <<eof|yy030|if test $# -gt 0;then egrep itag=$1;else yy082 f|uniq;fi;
   silent
   #verbose
   ipv4
   http1.0
   tlsv1.3
   tcp-nodelay
   resolve www.youtube.com:443:142.251.215.238 
   user-agent "com.google.ios.youtube/19.45.4 (iPhone16,2; U; CPU iOS 18_1_0 like Mac OS X;)"
   header "content-type: application/json"
   header "X-Youtube-Client-Name: 5"
   header "X-Youtube-Client-Version: 19.45.4"
   header "X-Goog-Visitor-Id: CgtpN1NtNlFnajBsRSjy1bjGBjIKCgJVUxIEGgAgIw=="
   cookie "PREF=hl=en&tz=UTC; SOCS=CAI; GPS=1; YSC=4sueFctSML0; __Secure-ROLLOUT_TOKEN=CJO64Zqggdaw7gEQiZW-9r3mjwMYiZW-9r3mjwM%=; VISITOR_INFO1_LIVE=i7Sm6Qgj0lE; VISITOR_PRIVACY_METADATA=CgJVUxIEGgAgIw=="
   data "{\"context\": {\"client\": {\"clientName\": \"IOS\", \"clientVersion\": \"19.45.4\", \"deviceMake\": \"Apple\", \"deviceModel\": \"iPhone16,2\", \"userAgent\": \"com.google.ios.youtube/19.45.4 (iPhone16,2; U; CPU iOS 18_1_0 like Mac OS X;)\", \"osName\": \"iPhone\", \"osVersion\": \"18.1.0.22B83\", \"hl\": \"en\", \"timeZone\": \"UTC\", \"utcOffsetMinutes\": 0}}, \"videoId\": \"$x\", \"playbackContext\": {\"contentPlaybackContext\": {\"html5Preference\": \"HTML5_PREF_WANTS\", \"signatureTimestamp\": 20347}}, \"contentCheckOk\": true, \"racyCheckOk\": true}"
   eof

But I think the article's point isn't that Google wants downloaders to work; it's that they tolerate downloaders with just enough friction to avoid officially endorsing anything, while keeping power users from revolting.

I think this “power user” ship has sailed already. See Google locking down Android, for example. They’re an established monopoly at this point, and if the last Chrome antitrust case shows us anything, it’s that they won’t be penalized for any of their anti-consumer actions.

Or they want to show the advertisers they do EVERYTHING... while in reality they don't. Not that their ad system isn't totally braindead.