Pretty sure they mean the GB10
Nope, I was referring to Strix Halo
Oh, well awesome. Glad to see you are getting so much out of the Strix line. I am eagerly awaiting the next gen. I think that will be a tipping point in AMD’s favor. I am a bit of an AMD nerd, even though they don’t seem to love their developers as much as Nvidia.
Before anyone gives me grief, my company has a strategic partnership with Nvidia; I do AMD under the cover of darkness. So I live in both worlds. I’m a bleeding heart for the underdog… if being a $360B market-cap company makes you the “underdog”.
Strix Halo
I don't believe you. It has very poor compute.
Are you basing that on your informed first hand experience, or based on your assumptions backed by no actual experience using the hardware?
I don't know what you want me to tell you, you're welcome to believe whatever you want but that doesn't change the reality I experience actually using the thing.
Benchmark numbers and first hand reviews are readily available if you bothered to look.
I am basing it on benchmark numbers. Its compute is just too poor to be useful for LLMs or image generation.
For example, with LLMs it's easy to do the math and see how long you'll be waiting on 50k input tokens.
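The back-of-envelope math is just tokens divided by prompt-processing speed. A minimal sketch (the prefill rate below is a made-up placeholder, not a measured Strix Halo benchmark — plug in real numbers from published reviews):

```python
# Rough prompt-processing (prefill) wait time for a long-context request.
# prefill_tps is a hypothetical illustrative figure, NOT a measured
# Strix Halo benchmark -- substitute real numbers from first-hand reviews.
input_tokens = 50_000
prefill_tps = 300  # hypothetical prompt-processing speed, tokens/sec

wait_seconds = input_tokens / prefill_tps
print(f"~{wait_seconds:.0f} s (~{wait_seconds / 60:.1f} min) before the first output token")
```

At a few hundred tokens per second of prefill, 50k tokens means minutes of waiting before generation even starts, which is the practical complaint being made here.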