I would call it an illusion because, if you pay attention, you can clearly see that the color you perceive isn't actually present. You see white on an RGB computer screen because your eyes simply don't have the resolution to discern the subpixel colors. In a dithered image with only black and white, however, you perceive gray but can also tell what the reality is without much effort. Personally, I think that fits the definition of an illusion.

In the case of dithering, that’s only because the monitor has insufficient resolution. Put a 1:1 Floyd-Steinberg dithered image on your phone, hold it at arm’s length, and unless you have superhuman vision you’ll already start having a hard time seeing the structure.

If you look at analogue B&W film for instance (at least the ones I’m familiar with), each individual crystal is either black or white. But the resolution is so high you don’t perceive it unless you look under a microscope, and if you scan it, you need very high res (or high speed film) to see the grain structure.

Dithering is not an illusion because the shades are actually still there. With the right algorithms, you could upscale an image, dither it, downscale it, and get back the exact same tones. The data isn’t “faked”; it’s just represented in a different way.
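A quick way to convince yourself of that round trip (my own sketch, using a plain Bayer ordered dither rather than any particular algorithm mentioned in the thread): each grayscale tone becomes a 4x4 block of pure black and white whose on/off ratio encodes the shade, and averaging the block back down recovers the tone.

```python
import numpy as np

# 4x4 Bayer ordered-dither thresholds, normalized to [0, 1)
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def dither_block(tone):
    """Turn a single grayscale tone in [0, 1] into a 4x4 block of pure 0s and 1s."""
    return (tone > BAYER4).astype(float)

original = np.array([[0.25, 0.5],
                     [0.75, 1.0]])          # a tiny 2x2 "image"

# Upscale: each source pixel becomes a 4x4 dithered block
dithered = np.block([[dither_block(t) for t in row] for row in original])

# Downscale: average each 4x4 block back to a single value
recovered = dithered.reshape(2, 4, 2, 4).mean(axis=(1, 3))

print(recovered)   # [[0.25, 0.5], [0.75, 1.0]] -- exact here, since each tone is a multiple of 1/16
```

The tones survive the round trip exactly here because each is a multiple of 1/16; in general the recovery is as precise as the block size allows.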

If you’re calling it an illusion, you’d have to call pretty much every way we have of representing an image, from digital to analog, an illusion. Fair, but I’d rather reserve the term for when an image is actually misinterpreted.

I would define an illusion as something where your perception of a thing differs from the reality of the thing in a way that matters in the current context. If we were discussing how LCD screens work, I would call the color white an illusion, but if we were discussing whether to make a webpage background white or red, I would not call the color white an illusion.

That's verisimilitude. We were doing that with representational art way before computers, even using stipple and line drawing to get "tonal indications without tonal work". Halftone, mentioned elsewhere in the thread, is a process that does something similar. When you go deeper into art theory, verisimilitude comes up frequently as something that is both of practical use (measure carefully, use corrective devices and appropriate drafting and mark-making tools to make things resemble their observed appearance) and also something that usually isn't the sole communicative goal.

All the computer did was add digitally-equivalent formats that decouple the information from its representation: the image can be little dots or hex values. Sampling theory lets us perform further tricks by defining correspondences between time, frequency, and amplitude. When we resample pixel art using conventional methods of image resizing, it breaks down into a smeary mess, because pixel art relies on certain artifacts of the representational scheme that a photograph, which assumes a continuous light signal, does not.
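A tiny illustration of that breakdown (my own numbers, not anything from the thread): linear interpolation treats the row as a continuous signal and invents in-between shades across the hard edges, while nearest-neighbor just repeats the samples.

```python
import numpy as np

row = np.array([0.0, 0.0, 1.0, 1.0, 0.0])             # one row of "pixel art": hard edges only

x_new = np.linspace(0, len(row) - 1, 4 * len(row))     # upscale 4x

nearest = row[np.round(x_new).astype(int)]             # nearest-neighbor: still only 0s and 1s
linear  = np.interp(x_new, np.arange(len(row)), row)   # linear: fractional greys appear at the edges

print(nearest)
print(np.round(linear, 2))                             # the "smear": values like 0.26, 0.47, 0.68...
```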

Something I like doing when drawing digitally is to work at a high resolution with a non-antialiased pixel brush to make black-and-white linework, then shrink it down for coloring. This lets me control the resulting shape after it's resampled (which, of course, low-pass filters it and makes it a little blurrier) more precisely than if I work at the target resolution with an antialiased brush; with those, lines start to smudge up with repeated strokes.
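Roughly what that does to the pixels, as I understand it (a sketch with made-up numbers): averaging blocks of hard 1-bit linework is the low-pass filter, and it turns edge pixels into partial-coverage greys, i.e. controlled antialiasing.

```python
import numpy as np

hi_res = np.zeros((16, 16))
hi_res[:, 6:9] = 1.0          # a crisp 3-pixel-wide vertical stroke, pure 1-bit values

# Box-filter downscale: average each 4x4 block down to one target pixel
lo_res = hi_res.reshape(4, 4, 4, 4).mean(axis=(1, 3))

print(lo_res[0])   # [0.  0.5  0.25  0.] -- the stroke edges become soft grey ramps
```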

Do you consider the color yellow on your RGB monitor an illusion? (I do)

Same. A fun fact about this is that as you increase the bit depth, the percentage of faked outputs actually increases as well. With just 8 bits, you have more 9's than AWS this year!

You can also add a temporal dimension (-> temporal dithering, also known as FRC, frame rate control).

For example, if you alternate blue and red every frame at 60~120 FPS, the only thing you'll see is purple.
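For the bit-depth side of FRC, here's a rough sketch (the function name, panel depths, and frame count are my own assumptions, not any vendor's implementation) of how a 6-bit panel approximates an 8-bit level by alternating between its two nearest native levels so the time-average lands on the target:

```python
def frc_sequence(target_8bit, frames=60):
    """Per-frame 6-bit levels whose time-average approximates an 8-bit target."""
    exact = target_8bit * 63 / 255          # target expressed on the 6-bit scale
    lo, frac = int(exact), exact - int(exact)
    # Accumulate the fractional error and bump to the next level when it overflows
    # (error diffusion in time rather than space).
    seq, err = [], 0.0
    for _ in range(frames):
        err += frac
        if err >= 1.0:
            seq.append(lo + 1)
            err -= 1.0
        else:
            seq.append(lo)
    return seq

seq = frc_sequence(200)
perceived = sum(seq) / len(seq) * 255 / 63   # what the eye averages over time
print(perceived)                             # ~200, from a panel that only has 6-bit steps
```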

With red/blue artifacts visible when the viewer’s gaze passes rapidly across it.

I personally wouldn't, but it's close enough that I'm not going to disagree.