Very interesting point: while we've figured out how to digitize images, text, and sound, we haven't digitized touch. At best we can describe in words what a touch sensation was like. Smell is in a similar situation; we haven't digitized it at all.
Touch is a 2D field of 3D vectors. It's easily stored and transmitted as an image, and easily processed by neural nets. You could add temperature and pain/damage channels if you want, though they don't seem essential for most manipulation tasks. (Actually, I don't believe touch is as essential as he argues anyway. Of course someone who learned a task with touch will struggle without it, but they can still do it and would quickly change strategies and improve.)
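Concretely, here's a minimal sketch of that representation. The taxel grid size and channel layout are made up just to illustrate; real sensors vary:

    import numpy as np

    # One tactile "frame": a 32x32 grid of taxels (hypothetical sensor size),
    # each reporting a 3D contact vector: (normal force, shear_x, shear_y).
    H, W = 32, 32
    frame = np.zeros((H, W, 3), dtype=np.float32)

    # Simulate a fingertip pressing near the center with a slight sideways drag.
    ys, xs = np.mgrid[0:H, 0:W]
    blob = np.exp(-((ys - 16) ** 2 + (xs - 16) ** 2) / 20.0)
    frame[..., 0] = 5.0 * blob   # normal force (newtons)
    frame[..., 1] = 0.8 * blob   # shear along x
    frame[..., 2] = 0.1 * blob   # shear along y

    # Optional extra channels (temperature, damage) just stack on:
    temperature = np.full((H, W, 1), 22.0, dtype=np.float32)  # degrees C
    tactile_image = np.concatenate([frame, temperature], axis=-1)  # H x W x 4

    # From here it's an ordinary multi-channel image: store it as an array,
    # stream it at the sensor's frame rate, feed it to a CNN, etc.
    print(tactile_image.shape)  # (32, 32, 4)

Once it's in that shape, everything downstream (compression, transmission, training) is the same machinery we already use for video.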
The problem with touch is making sensors that are cheap and durable and light and thin and repairable and sensitive and shape-conforming. Representation is trivial in comparison.
This. It's a transduction problem (difficult to sense, and even more difficult to output), not a representation problem.
A person who can't feel anything would struggle to reach around an obstacle, find a bolt they can't actually see, and thread a nut onto it.
I've done that and similar things many times. Touch is important: it may not be essential for all tasks, but it is for some. Maybe even many.
There actually has been some recent work on digitizing smell, most notably Osmo, which was founded by some ex-Google ML researchers: https://www.salon.com/2025/01/05/digital-smell-has-arrived-a...
I'm not sure describing it in words is very helpful, and there's probably a good amount of such data available already.
I would think the way to do it is build the touch sensors first (and it seems they're getting pretty close), then just tele-operate some robots and collect a ton of data. Either that, or put instrumented gloves on humans that record everything. Pay people to live their normal lives, but with the gloves on.
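A sketch of what the recording side could look like. The glove driver, array sizes, and sample rate here are all hypothetical, just to show how tactile frames plus hand pose would accumulate into ordinary training data:

    import time
    import numpy as np

    def read_glove():
        """Hypothetical driver call: returns (tactile, pose).

        tactile: 5 fingertip arrays, each 16x16x3 (normal + 2 shear); made-up sizes.
        pose: 21 hand-joint positions as (x, y, z), e.g. from a wrist camera or IMUs.
        """
        tactile = np.random.rand(5, 16, 16, 3).astype(np.float32)  # stand-in data
        pose = np.random.rand(21, 3).astype(np.float32)            # stand-in data
        return tactile, pose

    def record(seconds=10, hz=60, path="session.npz"):
        tactile_log, pose_log, t_log = [], [], []
        t0 = time.time()
        while time.time() - t0 < seconds:
            tactile, pose = read_glove()
            tactile_log.append(tactile)
            pose_log.append(pose)
            t_log.append(time.time() - t0)
            time.sleep(1.0 / hz)  # naive pacing, fine for a sketch
        # Each session becomes a (time, finger, H, W, channel) tensor,
        # exactly the kind of dense array training pipelines already expect.
        np.savez_compressed(path, tactile=np.stack(tactile_log),
                            pose=np.stack(pose_log), t=np.array(t_log))

    record()

The point being: once the sensor exists, the collection pipeline is boring. The hard part really is the hardware.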