I have never seen any AI system that could correctly explain the following Go code:
package main

func alwaysFalse() bool {
	return false
}

func main() {
	switch alwaysFalse() // don't format the code
	{
	case true:
		println("true")
	case false:
		println("false")
	}
}
> Go community was trained for the longest time not to make backward-incompatible API updates so that helps quite a bit in consistency of dependencies across time

Not true for Go 1.22 toolchains. When you use a Go 1.21-or-earlier, a Go 1.22, and a Go 1.23-or-later toolchain to build the following Go code, the outputs are not consistent:
//go:build go1.21

package main

import "fmt"

func main() {
	for counter, n := 0, 2; n >= 0; n-- {
		defer func(v int) {
			fmt.Print("#", counter, ": ", v, "\n")
			counter++
		}(n)
	}
}
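The difference comes down to which loop-variable semantics the toolchain applies to this file, given the //go:build go1.21 line (which toolchain picks which semantics is exactly the inconsistency in question). With the shared loop variables of the pre-1.22 semantics, counter is a single variable that each deferred closure increments in turn, so the program prints

#0: 0
#1: 1
#2: 2

With the per-iteration loop variables of the Go 1.22 semantics, each deferred closure captures its own copy of counter, which is still 0 when the defers run, so the program prints

#0: 0
#0: 1
#0: 2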
You're bringing up exceptions rather than the rule. Sure, you can find things they mess up. The whole premise of a lot of the "AI" stuff is approximately solving hard problems rather than precisely solving easy ones.
The opposite is true: they sometimes guess correctly. Even a broken watch is right twice a day.
I believe future AI systems will be able to give correct answers. The rule is clearly specified in the Go specification.
BTW, I haven't found an AI system that can get the correct output for the following Go code:
What do you base that prediction on? Without a fundamental shift in the underlying technology, they will still just be guessing.
Because I am indeed seeing AI systems do better and better.
It can easily explain it with a little nudge.
Not sure why you feel smug about knowing such a small piece of trivia; ‘gofmt’ would rewrite it with a semicolon anyway.
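Because the switch header's line ends with the token ‘)’, Go's automatic semicolon insertion adds a semicolon at the end of that line, so alwaysFalse() is parsed as the init statement and the switch expression is left empty, which the spec treats as true. Assuming the rest of the file is unchanged, gofmt would render main as:

func main() {
	// gofmt makes the inserted semicolon visible: alwaysFalse()
	// is only an init statement, and the missing switch expression
	// defaults to true, so the program prints "true".
	switch alwaysFalse(); {
	case true:
		println("true")
	case false:
		println("false")
	}
}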
I write code in Notepad++ and never format my code. :D