Both have garbage content at this point. Coursera was great when it launched: top-quality material and university-level instruction. Now it's just bottom-of-the-barrel scraps.

YT has tons of quality instruction - hell nowadays I just ask an LLM to make me a course for whatever I wanna learn.

I tried that out in my field of expertise, to calibrate my expectations. ChatGPT invented multiple references to non-existent but plausibly-titled papers written by me.

I think of that when asking questions about areas I don’t know.

That was about 18mo ago, so maybe this kind of hallucination is under control these days.

LLMs are good for tasks where you can verify the result. And you must verify the result unless you're just using it for entertainment.

I would use an agent (Codex) for this task: use the Pro model in ChatGPT for deep research and to assemble the information and citations, then have Codex systematically work through the citations with a task list, web-searching to verify or correct each one. Codex can be used like a test suite.
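
To make the "test suite" idea concrete, here's a rough sketch of the verification loop. Everything here is made up for illustration; web_search() is a hypothetical stand-in for whatever search tool your agent actually has:

    # Sketch only: treat the citation list like a test suite.
    from dataclasses import dataclass

    @dataclass
    class Citation:
        title: str
        authors: str
        status: str = "unchecked"

    def web_search(query: str) -> list[str]:
        # Hypothetical stand-in for the agent's web-search tool.
        raise NotImplementedError("wire this up to your agent")

    def verify_all(citations: list[Citation]) -> None:
        for c in citations:
            hits = web_search(f'"{c.title}" {c.authors}')
            # Require an exact title match before trusting a citation.
            if any(c.title.lower() in h.lower() for h in hits):
                c.status = "verified"
            else:
                c.status = "suspect"  # flag for manual review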

Turns out Gell-Mann amnesia applies to LLMs too.

LLMann amnesia?

My biggest issue with Udemy courses is that it's not easy to vet the instructor. User ratings are unreliable since beginners aren't really in a position to evaluate a teacher's expertise.

If Udemy's pitch were “Learn X as Taught by Notable People in the Field,” I would have signed up in a heartbeat.

- 3D Graphics taught by Michael Abrash

- Card Manipulation taught by Jeff McBride

- Pianistic Ergonomics taught by Edna Golandsky

> If Udemy's pitch were “Learn X as Taught by Notable People in the Field,”

MasterClass is already like this, but the content doesn't go deep enough to really teach learners.

MasterClass is really a scam, in my opinion. Who needs them to teach generic stuff? What we actually want is to become as good as they are; that's why we pay for the course.

This. Intro to X courses are better left to the manifold resources available on YouTube. Experts are for expert topics. As Liszt reportedly said when asked why he eventually accepted only advanced pupils: “Wash your dirty linen at home.”

And you probably don't have the chops to write screenplays like Aaron Sorkin and almost certainly won't develop them from a video.

You're probably gonna need to chip a few blocks on your own before asking Michelangelo for pointers.

Pretty much. The idea that you can get something out of a MasterClass without ever having done the thing yourself is a bit silly. Inside the Actors Studio was great, but it was essentially entertainment. For something like that to give you real insight into what you're missing, you probably need some experience first.

Notable people tend to teach on their own sites, or at least more specialized sites rather than generic sites like Udemy. Udemy would need to pay them instead.

It's not hard to look at each profile; most instructors will proudly shout their top credentials as visibly and as often as possible. "1M Subscribers on YouTube!" vs. "I worked in this industry for 10 years" is a pretty easy call. How much of this process should be spoonfed? Active engagement is required at some point.

Udemy functions as an open market, with the associated pros and cons.

> nowadays I just ask an LLM to make me a course for whatever I wanna learn.

That is an excellent way to trick yourself into thinking that you learned, when really you got fed bad information. LLMs are nowhere near reliable enough to use for this topic and probably never will be.

Yeah. And a lot of the Coursera courses offered by universities are dumbed down. I much prefer going to YouTube and watching open courses there.

I guess it depends on what you ask an LLM to teach you. For certain subjects, I've found them to be a pain in the ass to get right.

For instance, I was hoping that I could use GPT to help me learn to fly a B737-800. This is actually less challenging than people think... if you just want to get in the air and skip all proper procedure and safety checks! If you want to fly a commercial plane like a real pilot, there is a ton of procedure and instruments to understand. There is actually quite a bit of material on this available online via flight crew operations manuals, as well as an old (but still relevant) manual straight from Boeing. So why rely on GPT? It's a bit hard to explain without rambling, but those manuals are designed for pilots with a lot of prior knowledge, not some goofball with X-Plane and a joystick. It would be nice to distill that information down for someone who just wants an idiot's guide to preflight procedure, setting the flight computer, taxiing, taking off, and performing an ILS landing.

Sadly, it turned out I really had to hold the LLM's hand along the way, even when I provided it two PDFs of everything it needed to know, because it would skip steps or get them out of order, or fail to correctly specify where a particular instrument or switch was located. It was almost a waste of time, and I actually still have more to do because it's that inefficient.

That said, I still think LLMs can be unreasonably good for learning about very specific subjects, so long as you don't blindly believe them. I kinda hate how I have to say that, but I see people all the time believing anything Grok says. :facepalm: GPT has been a big help in learning things about finance, chemistry, and electronics. Not sure I would assume it could create a full-blown course, but who knows. I bet it'd be pretty solid at coming up with exam questions.

Considering hallucinations, that seems risky. How do you double check what you were taught?

Don't ask them to teach you; ask them to make a self-study syllabus/roadmap with online references. They've likely ingested the work of others who did exactly this, so they shouldn't confabulate as easily.
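
For example, a prompt along these lines (wording is mine, adjust to taste):

    Put together a 12-week self-study roadmap for <topic>. For each
    week, list the subtopics and link one primary free resource
    (university OCW, official docs, or a well-known textbook). Don't
    write the material yourself; only point to existing sources I
    can check.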

The same way you double check with any other method you prefer? Duh.

LLMs are vastly superior at compiling and spreading knowledge compared to anything that preceded them.

Do you double-check every university lecture you've been a part of?

Did you just sit there in class and then never do anything with what you learned afterwards? That certainly isn't how I approached university.

Doing something with the knowledge given in a lecture is very distinct from fact-checking it.

I'd say it's a subset of fact checking it. You can check facts without doing anything else, but doing something with the knowledge is inherently checking it. If the lecture presents some programming technique, and I implement it, I'll find out pretty quickly if it's wrong.

That's what is called "studying" or "reading a textbook", isn't it?

Uhm no? Reading a textbook is obviously not the same as fact checking a textbook.

Parent was writing about a university LECTURE which is different from a TEXTBOOK (which is different from primary sources), so yeah, consulting other sources is checking the facts.

Oh I see what you're saying. It was slightly ambiguous.

But in any case, I didn't read a single textbook at uni; it was all lecture notes provided by the lecturers (fill-in-the-gaps, actually, which worked waaaay better than you'd think). So the answer is still no: I didn't fact-check them, and I didn't need to, because they didn't wildly hallucinate like AI does.

The real answer is:

You should have a mental model about how the world works and the fundamental rules of the context where you're operating. Even though you might not know something, you eventually develop an intuition of what makes sense and what doesn't. And yes, that applies even to "university lectures" since a lot of professors make mistakes/are wrong plenty of times.

Taking an LLM's output at face value would be dumb, yes. But it would be equally dumb to take what's written in a book at face value, or a YouTube video, or anyone you listen to. You have to dig in, you have to do the homework.

LLMs make it much easier for you to do this homework. Sure, they still make mistakes, but they get you 90% of the way in minutes(!) and almost for free.

I don't think it's (necessarily) equally dumb. Maybe if comparing LLM output to a book chosen at random. But I would feel much safer taking a passage from Knuth at face value than a comparable LLM passage on algorithms.

They are faster, but I don't see how they are vastly superior to a course designed and offered by a subject matter expert in the field.

You can't beat a Caltech-tier lecture, for sure. But do you know how many people have access to that? Thousands, and I'm being generous.

LLMs level the playing field for the other 8 billion people.

Reminds me of this article[1] that was featured yesterday, which I thought was great!

1: https://news.ycombinator.com/item?id=46254794

In addition to the content available on the platforms we're discussing here (Coursera and Udemy), you have things like:

https://ocw.mit.edu/

https://onlineeducation.caltech.edu/courses/certificate-gran...

They have been trained on material not just from a single subject matter expert but from all of them :)

They have not, because a large portion of the knowledge obtained by subject matter experts in any given field has never been published.

Also, hallucinations are still a thing, and there's a reason why LLMs do not outperform subject matter experts in nearly every field.

I was being facetious, but am now extremely curious about the claim that a large portion of the knowledge obtained by subject matter experts in any given field has never been published. This is strange to me even in individual cases, and you're claiming it's a large portion, so I'm wondering if you have any example(s) to share?

Ah, sorry I missed that.

Academics, whose entire careers are based on publishing knowledge, publish only a fraction of the total knowledge they obtain over their careers. Estimates are that only 10-20% of all knowledge is explicit.

Professionals who know their subject are still the best way to learn.

That's true, but they are also a) a lot more expensive and b) unlike LLMs, the vast majority of professionals have family and friends and need sleep and food, and as such are not available 24/7/365

Hallucinations have made huge progress over the last 3 years.

Yep, now they're way more sophisticated. Same amount, though.

Nah they have definitely reduced massively. I suspect that's just because as models get more powerful their answers are just more likely to be true rather than hallucinations.

I don't think anyone has found any new techniques to prevent them. But maybe we don't need that anyway if models just get so good that they naturally don't hallucinate much.

That's because they're harder to spot, not because there are fewer of them. In my field I still see the same amount. They're just not as egregious.

Not in my experience. For example, models often say "no, you can't do that" now, whereas they used to always say you could do things and just hallucinate something if it was impossible. Of course, sometimes they say you can't do things when you can (e.g. ChatGPT told me a few months ago that you can't execute commands at the top level of a Makefile), but it's definitely less common.
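
(For what it's worth, that one is wrong: GNU Make happily executes commands at parse time, outside any rule. A minimal example:)

    # Both of these run when the Makefile is parsed, before any target:
    HOST := $(shell hostname)
    $(info building on $(HOST))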

> They're just not as egregious.

Uhm yeah, that's what I'm saying. The hallucination situation has improved.

They haven't reduced one bit in my experience.

I don't know much about Coursera, but Udemy has been quite bad for as long as I can remember.

Most drawing/painting courses are taught by people who are juniors at best. The quality is laughable compared to what you can get for free from the Marco Bucci/Sinix/Proko channels. And honestly, even those high-quality videos won't teach you how to draw anyway.

That being said, I didn't realize how bad Udemy art courses were when I was getting started. I think that's a life lesson for me, especially in the era of LLMs.