I don't see the problem.

    for _, filename := range files {
        queue <- func() {
            f, _ := os.Open(filename)
            defer f.Close()
        }
    }
or more realistically,

    var group errgroup.Group
    group.SetLimit(10)
    for _, filename := range files {
        group.Go(func() error {
            f, err := os.Open(filename)
            if err != nil {
                return fmt.Errorf("failed to open file %s: %w", filename, err)
            }
            defer f.Close()
            // ...
            return nil
        })
    }
    if err := group.Wait(); err != nil {
        return fmt.Errorf("failed to process files: %w", err)
    }
Perhaps you can elaborate?

I did read your code, but it is not clear where the worker queue is. It looks like it ranges over (presumably) a channel of filenames, which is not meaningfully different from ranging over a slice of filenames. That is the original, non-concurrent solution, more or less.

I think they imagine a solution like this:

    // Spawn workers
    workQueue := make(chan string)
    for range 10 {
        go func() {
            for path := range workQueue {
                fp, err := os.Open(path)
                if err != nil {
                    continue // error handling elided
                }
                defer fp.Close()
                // do work
            }
        }()
    }

    // Iterate files and give work to workers
    for _, path := range paths {
        workQueue <- path
    }
    close(workQueue)

Maybe, but why would one introduce coupling between the worker queue and the work being done? That is a poor design.

Now we know why it was painful. What is interesting here is that the pain wasn't noticed as a signal that the design was off. I wonder why?

We should dive into that topic. I suspect at the heart of it lies the reason there is so much general dislike for Go as a language: it is far less forgiving of poor choices than many other popular languages.

I think your issue is that you're an architecture astronaut. This is not a compliment. It's okay for things to just do the thing they're meant to do and not be super duper generic and extensible.

It is perfectly okay inside of a package. Once you introduce exports, as seen in another thread, there is good reason to think more carefully about how users are going to use it. Pulling the rug out from under them later, when you discover your original API was ill-conceived, is not good citizenry.

But one does still have to be mindful if they want to write software productively. Using a "super duper generic and extensible" solution means that things like error propagation are already solved for you. Your code, on the other hand, will quickly become a mess once you start adding all that extra machinery. It didn't go unnoticed that you conveniently left that out.

Maybe that no longer matters with LLMs, when you don't even have to look at the code and producing it is effectively free, but LLMs these days also understand how defer works, so the whole thing becomes moot.