I don't see how it's disappointing. 95% correct using the 35b model on a laptop, before the right quants even came out? And they still got tons of code written for them.
On a real GPU, running the 27b with the latest quants, the experience is better. It's still not the same as Opus running on a subsidized GPU farm, but at least it's better for privacy.
I find it interesting how two people can read the same thing and come to very different conclusions.