Naive question, could agents help speed up building code for ROCm parity with CUDA? Outside of code, what are the bottlenecks for reaching parity?
To be honest, outside of full-stack and basic MCU work, these agents aren't very good. Whenever a sufficiently interesting new model comes out, I test it on a couple of problems in Android app development and OS porting for novel CPU targets, and we still haven't gotten there yet. I'd be happy to see the day it becomes possible, though.
I’ve found they’re quite good when you’re higher in the compiler stack, where it’s essentially a game of translating MLIR dialects.
It'd be nice if one of these environment labs built an environment for cross-architecture porting. It'd be really cool to see some old PPC Mac programs running natively, or compiled to WASM (yes, yes, I know the visual elements would need to be ported as well).
Lack of focus from AMD management. See the sibling comment: https://news.ycombinator.com/item?id=47745611
They just don't care enough to compete.
Agents work great for tasks that thousands of developers have done before. This isn't one of those tasks.
Unless you train them with RL on that specific task.
Maybe this is dumb, but at the moment through Windows (and WSL?) you get: ROCm, DirectML, Vulkan, OpenML?