> You are also explicitly saying that you want device memory by specifying DEVICE_LOCAL_BIT. There's no difference.

There is. One is a simple malloc call, the other takes numerous combinations of usage flags which all end up doing exactly the same thing, so why do they even exist?

> You _have_ to be able to allocate both on host and device.

cuMemAlloc and cuMemAllocHost, as mentioned before.

> Because there's such a thing as accessing GPU memory from the host

Never had the need for that; I just cuMemcpyHtoD and DtoH the data. Of course host-mapped device memory can continue to exist as a separate, more cumbersome API. The 256 MB limit is cute but apparently not relevant in CUDA, where I've been memcpying buffers gigabytes in size between host and device for years.
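For concreteness, here is roughly what that workflow looks like with the CUDA driver API. Error handling and the actual kernel launch are elided, and the 1 GiB size is just an illustration:

```cpp
#include <cuda.h>
#include <vector>

int main() {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    const size_t size = size_t(1) << 30;    // 1 GiB: no 256 MB ceiling here
    std::vector<char> host(size);

    CUdeviceptr dptr;
    cuMemAlloc(&dptr, size);                // "device memory", no flags needed

    cuMemcpyHtoD(dptr, host.data(), size);  // upload
    // ... launch kernels ...
    cuMemcpyDtoH(host.data(), dptr, size);  // download

    cuMemFree(dptr);
    cuCtxDestroy(ctx);
}
```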

> No, because if that's the only way to allocate memory, how are you going to allocate staging buffers for the CPU to write to?

With the mallocHost counterpart.

cuMemAllocHost, i.e. a hypothetical vkMallocHost, gives you pinned host memory where you can prep data before sending it to the device with cuMemcpyHtoD.
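As a sketch (the helper name is made up; a live CUDA context is assumed and error handling elided), that staging pattern is just:

```cpp
#include <cuda.h>
#include <cstring>

// Copy `size` bytes from pageable `src` to device memory `dst`,
// staging through a pinned host buffer. Assumes a current CUDA context.
void uploadViaStaging(CUdeviceptr dst, const void* src, size_t size) {
    void* staging = nullptr;
    cuMemAllocHost(&staging, size);    // pinned host memory, a plain pointer
    std::memcpy(staging, src, size);   // prep the data on the CPU
    cuMemcpyHtoD(dst, staging, size);  // pinned source -> DMA-friendly copy
    cuMemFreeHost(staging);
}
```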

> This is how you end up with a zillion flags.

Apparently only if you insist on mapped/host-visible memory. Neither this nor usage flags ever come up in CUDA, where you just write to the host buffer and memcpy when done.

> This is the reason I keep bringing UMA up but you keep brushing it off.

Yes, I think I now get why you keep bringing up UMA: you want to directly access buffers from either host or device via pointers. That's great, but I don't have the need for that, and I wouldn't trust the performance behaviour of that approach. I'll stick with memcpy, which is fast, simple, has fairly clear performance behaviour, and requires none of the nonsense you insist is necessary. But what I want isn't an either/or; I want the simple approach in addition to what exists now, so we can both have our cakes.

What exactly is the difference between these?

cuMemAlloc -> vmaCreateBuffer + VMA_MEMORY_USAGE_GPU_ONLY

cuMemAllocHost -> vmaCreateBuffer + VMA_MEMORY_USAGE_CPU_ONLY

It seems like the functionality is the same, just that the memory usage is implicit in cuMemAlloc instead of being spelled out? If it's that big of a deal, write a wrapper function and be done with it?
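Something like this sketch, say. The vkMemAlloc/vkMemAllocHost names are made up for the analogy; vmaCreateBuffer and the VMA_MEMORY_USAGE_* values are actual VMA API, and error handling is elided:

```cpp
#include <vk_mem_alloc.h>

struct GpuBuffer {
    VkBuffer      buffer;
    VmaAllocation allocation;
};

static GpuBuffer allocBuffer(VmaAllocator alloc, VkDeviceSize size,
                             VmaMemoryUsage where) {
    VkBufferCreateInfo bufInfo{VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO};
    bufInfo.size  = size;
    bufInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT |
                    VK_BUFFER_USAGE_TRANSFER_DST_BIT |
                    VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;  // "bag of bytes"

    VmaAllocationCreateInfo allocInfo{};
    allocInfo.usage = where;

    GpuBuffer out{};
    vmaCreateBuffer(alloc, &bufInfo, &allocInfo,
                    &out.buffer, &out.allocation, nullptr);
    return out;
}

// cuMemAlloc analogue: device-local buffer.
GpuBuffer vkMemAlloc(VmaAllocator a, VkDeviceSize n) {
    return allocBuffer(a, n, VMA_MEMORY_USAGE_GPU_ONLY);
}

// cuMemAllocHost analogue: host-visible staging buffer.
GpuBuffer vkMemAllocHost(VmaAllocator a, VkDeviceSize n) {
    return allocBuffer(a, n, VMA_MEMORY_USAGE_CPU_ONLY);
}
```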

Usage flags never come up in CUDA because everything is just a bag-of-bytes buffer. Vulkan also needs to deal with render targets and textures, which historically had to be placed in special memory regions and are still accessed through big blocks of fixed-function hardware that remain very much relevant. And each of the ~6 different GPU vendors, across 10+ years of generational iterations, does all of this differently and has different memory architectures and performance cliffs.
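That's why the raw API makes you ask each resource where it's allowed to live and pick a memory type yourself. A rough sketch of that core query (valid device and buffer assumed, error handling elided):

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Find a memory type index the buffer may live in that also has the
// requested properties (e.g. VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT).
uint32_t pickMemoryType(VkPhysicalDevice phys, VkDevice device,
                        VkBuffer buffer, VkMemoryPropertyFlags wanted) {
    VkMemoryRequirements req;
    vkGetBufferMemoryRequirements(device, buffer, &req);

    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(phys, &props);

    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        const bool allowed = req.memoryTypeBits & (1u << i);
        const bool matches =
            (props.memoryTypes[i].propertyFlags & wanted) == wanted;
        if (allowed && matches)
            return i;
    }
    return UINT32_MAX;  // no suitable type on this implementation
}
```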

It's cumbersome, but it can also be wrapped (e.g. VMA). Who cares whether the "easy mode" comes in vulkan.h or vma.h; someone's got to implement it anyway. At least if it's in vma.h I can fix issues myself, unlike if we trusted all the vendors to do it right (they won't).

> I want the simple approach in addition to what exists now, so we can both have our cakes.

The simple approach can be implemented on top of what Vulkan exposes currently.

In fact, it takes only a few lines to wrap that VMA snippet above and you never have to stare at those pesky structs again!

But Vulkan the API can't afford to be "like CUDA", because Vulkan is not a compute API for Nvidia GPUs. It has to balance a lot of things; that's the main reason it's so un-ergonomic. (That's not to say no bad decisions were made: render passes were always a bad idea.)

> In fact, it takes only a few lines to wrap that VMA snippet above and you never have to stare at those pesky structs again!

If it were just this issue, perhaps. But there are so many more unnecessary issues that I have no desire to deal with, so I just started software-rasterizing everything in CUDA instead, which is way easier because CUDA always provides the simple API and makes complexity opt-in.