Every use of AI has its own version of the "too many fingers" problem that AI image generation faces and can't seem to solve. For programmers, it's code that calls made-up libraries and invents language semantics. In prose, it's completely incoherent narratives that forget where they're going halfway through. For lawyers, it's made-up case law and citations. Same for scientists: made-up authorities, papers, and results.

AI art is getting better, but it's still very easy for me to quickly distinguish an AI result from everything else, because I can visually inspect the artifacts, and they're usually not very subtle.

I'm not a radiologist, but I would imagine AI is doing the same thing here: flagging cancer that isn't there and missing cancer that is, and it takes an expert to distinguish the false positives from the true ones. So we're back at square one, except the expertise has shifted from interpreting the image to interpreting the image and also interpreting the AI.

All of the examples you gave (which I agree with, btw!) are generative AI, whereas I assume radiology would benefit more from the classic machine learning (ML) type of AI: image in -> black-box model decides whether it matches a pattern -> verdict out.
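
To make that concrete, here is a minimal sketch of the "image in -> verdict out" pattern, written as a generic PyTorch image classifier. The backbone, the single-logit head, and the 0.5 threshold are purely illustrative assumptions, not any real radiology model:

    # Sketch of a discriminative classifier: image in, one probability out.
    # An untrained resnet18 stands in for whatever a real, validated model
    # would be; nothing here is clinically meaningful.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    model = models.resnet18(weights=None)                 # placeholder backbone
    model.fc = torch.nn.Linear(model.fc.in_features, 1)   # one logit: suspicious or not
    model.eval()

    def classify(path, threshold=0.5):
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            prob = torch.sigmoid(model(x)).item()
        return ("suspicious" if prob >= threshold else "clear", prob)

The point is that there is no text generation anywhere in the loop: the model only scores how well the input matches a pattern it was trained on, and everything interesting lives in the training data and the validation, not in a prompt.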

I suppose, first of all: is that generally agreed? People aren't expecting an LLM to give a radiology opinion, the same way you can feed a PDF or an image into ChatGPT and ask it something about it, are they?

I'm interested in whether most people here have a higher opinion of ML than of generative AI, in terms of giving reliably useful output. Or do a lot of you think that these also create so much checking work that it would be easier to just have a human do the original work?

I think it's probably worth excluding self-driving from my question above, since that is a particularly difficult area in which to agree on anything.

> AI art is getting better, but it's still very easy for me to quickly distinguish an AI result from everything else, because I can visually inspect the artifacts, and they're usually not very subtle.

I actually disagree, in that it's not easy for me at all to quickly distinguish AI images from everything else. But I think we might differ in what we mean by "quickly". I can quickly distinguish AI if I am looking. But if I'm mindlessly doomscrolling, I cannot always distinguish 'random art of an attractive busty woman in generic fantasy armor that a streamer I follow shared' as AI. I cannot always distinguish 'reply-guy profile picture that's a couple dozen pixels across' as AI. I also cannot always tell whether someone is using a filter if I'm looking for maybe five seconds tops while I scroll.

AI art is easy to pick out when no effort was made to deviate from the default style the models use, where the person puts in a basic prompt of the desired contents ("man freezing on a bed") and calls it a day. When some craftsmanship is applied to make it more original, it gets progressively harder to catch at first glance. Though I'd argue that's more transformative and thus warrants less criticism than the lazy usage.

As a related aside, I've started seeing businesses clearly using ChatGPT for their logos. You can tell from the style and how much random detail there is contrasted with the fact that it's a small boba tea shop with two employees. I am still trying to organize my thoughts on that one.

Edit:

Example: https://cloudfront-us-east-1.images.arcpublishing.com/brookf...