I don't think any of the suggested use cases really make sense, though. For a compute-intensive task like audio or video processing, or for scientific computing where you're likely to need to fetch a ton of data, the browser is the wrong place to do that work. Build a frontend and make an API that runs on a server somewhere.
As for cryptography, trusting that the WASM build of your preferred library hasn't introduced any problems demonstrates a level of risk tolerance that far exceeds what most people working in cryptography would accept. Besides, browsers have quite good cryptographic APIs built in. :)
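For the curious, the built-in API I mean is SubtleCrypto (part of the Web Crypto API). A minimal sketch of AES-GCM encryption using only the browser's own primitives, no third-party or WASM crypto involved (key handling and decryption are left out for brevity):

```js
// Sketch: encrypt a string with the browser's built-in Web Crypto API.
async function encryptMessage(plaintext) {
  const key = await crypto.subtle.generateKey(
    { name: 'AES-GCM', length: 256 },
    /* extractable */ false,
    ['encrypt', 'decrypt'],
  );
  const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit nonce
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  return { key, iv, ciphertext };
}
```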
> For a compute-intensive task
The browser often runs on an immensely powerful computer. It's a waste to use that power as nothing more than a dumb terminal. As a matter of fact, my laptop is six years old by now and still considerably faster than the VPS our backend runs on.
I let the browser do things such as data summarization/charting and image convolution (in JavaScript!). I'm also considering harnessing it for video pre-processing.
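For concreteness, here's a minimal sketch of the kind of in-browser convolution I mean, using the canvas 2D API; the function and kernel below are illustrative, not our actual production code:

```js
// Apply a 3x3 kernel to the RGB channels of a canvas
// (sketch only; border pixels are skipped for brevity).
function convolve3x3(ctx, width, height, kernel) {
  const src = ctx.getImageData(0, 0, width, height);
  const dst = ctx.createImageData(width, height);
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      for (let c = 0; c < 3; c++) { // R, G, B
        let acc = 0;
        for (let ky = -1; ky <= 1; ky++) {
          for (let kx = -1; kx <= 1; kx++) {
            const i = ((y + ky) * width + (x + kx)) * 4 + c;
            acc += src.data[i] * kernel[(ky + 1) * 3 + (kx + 1)];
          }
        }
        // ImageData is backed by a Uint8ClampedArray, so this clamps to 0..255.
        dst.data[(y * width + x) * 4 + c] = acc;
      }
      dst.data[(y * width + x) * 4 + 3] = 255; // opaque alpha
    }
  }
  ctx.putImageData(dst, 0, 0);
}

// Example: a sharpen kernel.
// convolve3x3(canvas.getContext('2d'), canvas.width, canvas.height,
//             [0, -1, 0, -1, 5, -1, 0, -1, 0]);
```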
You can take advantage of that power via WebGPU, or WebGL if the browser is not yet up to it.
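As a rough illustration, here's a minimal WebGPU compute pass (assuming a browser that ships WebGPU) that just squares an array of floats on the GPU; a real workload would swap in its own shader:

```js
// Sketch: run a trivial compute shader over a Float32Array.
async function squareOnGpu(input) {
  const adapter = await navigator.gpu?.requestAdapter();
  if (!adapter) throw new Error('WebGPU not available');
  const device = await adapter.requestDevice();

  const module = device.createShaderModule({
    code: `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;
      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        if (id.x < arrayLength(&data)) {
          data[id.x] = data[id.x] * data[id.x];
        }
      }`,
  });
  const pipeline = device.createComputePipeline({
    layout: 'auto',
    compute: { module, entryPoint: 'main' },
  });

  // Storage buffer the shader reads and writes.
  const storage = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    mappedAtCreation: true,
  });
  new Float32Array(storage.getMappedRange()).set(input);
  storage.unmap();

  // Staging buffer to read results back on the CPU.
  const staging = device.createBuffer({
    size: input.byteLength,
    usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
  });

  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: storage } }],
  });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(storage, 0, staging, 0, input.byteLength);
  device.queue.submit([encoder.finish()]);

  await staging.mapAsync(GPUMapMode.READ);
  const result = new Float32Array(staging.getMappedRange()).slice();
  staging.unmap();
  return result;
}
```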
> For a compute-intensive task like audio or video processing, or for scientific computing where you're likely to need to fetch a ton of data, the browser is the wrong place to do that work.
... I mean... elaborate?
Every time I've heard somebody say this, it's come from someone stuck in a '90s/'00s mindset, with the notion that browsers showing GIFs is the ceiling and that real work can only happen on the server.
Idk how common this is now, but a few years ago (~2017) people would show projects like Figma that drew a few hundred things on screen, and people would be amazed. Which is crazy, because things like WebGL, WASM, WebRTC, and WebAudio are insanely powerful APIs that give pretty low-level access. A somewhat related idea is people who keep clamoring for DOM access in WASM because, again, they have this idea that web = webpage/DOM, but that's a segway into a whole other thing.
great points, agreed
also "segway" is a scooter, "segue" is a narrative transition