What I'm missing in this whole thread is why this is happening. Presumably to be transparent about whether code has been co-written by AI?

What's in it for Microsoft?

If we accept that AI can't hold copyright or own IP rights in something, then why? I have a sneaking suspicion that there's some lobbying in the works to overturn that ruling going forward. In the past, it was OK to build models from copyrighted data one might have found on the wayside. But in the future, no such thing for you. Everything generated by the AIs will then belong (at least partly) to the megacorps (maybe THEY can co-own the copyright if the AI cannot). Nice pulling-up-the-ladder if true.

This could also be a move against other countries' IP position.

I've seen the explanation from dmitriv [1], but I am not convinced. These markings achieve very little, as people can clearly work around them by copy-pasting code from another place, or by using other companies' tools, like Claude Code or Antigravity (or by not even using the GUI).

I suppose the answer might just be "don't attribute to malice...", even if Microsoft has proven us wrong on that before; they generally know exactly what they are doing strategically.

I guess in a few years we will know.

[1] https://news.ycombinator.com/threads?id=dmitriv#47991835

The change was about helping teams ensure AI-generated code is attributed in commits - nothing to do with copyrights and the like. You don't have to take my word for it: query the VS Code repo for the changes and issues that went into implementing this and you will see.
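For readers unfamiliar with the mechanism being discussed: Git supports "trailer" lines at the end of a commit message, and GitHub's `Co-authored-by:` trailer is the standard way to credit an additional author on a commit. A sketch of what such attribution looks like (the exact name/email Copilot uses in the trailer is an assumption here, not taken from this thread):

```shell
set -e
# Set up a throwaway repo so the example is self-contained.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "hello" > file.txt
git add file.txt

# A second -m adds a paragraph; the trailer goes in the final one.
git commit -q -m "Add greeting" \
  -m "Co-authored-by: Copilot <copilot@example.com>"

# GitHub (and tools like git-interpret-trailers) parse this line
# to show Copilot as a co-author of the commit.
git log -1 --format=%B
```

The point of contention in the thread is not the trailer format itself but when tooling inserts it automatically.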

Thanks for jumping into the conversation. Logically it does make sense to attribute authors correctly. However, in this context it might be helpful if you can provide any details about the users complaining that their PRs are being marked as co-authored even when they have not used Copilot. Is that intentional, or a missed check in the implementation?

Also, for lay readers like me who might not be actively involved, it would have been helpful to link, on the PR itself, the issue or conversation explaining why this change was made.

The fact that non-AI changes are attributed to Copilot is a bug. The intent was to allow customers to add attribution for AI-generated code. As with any bug, it was not intentional.

You intentionally ignored internal reports of it not working.

Most odd things can be explained by imagining who might be able to buy a new boat as a result.

Yeah, that's what I'm saying, too. I'm just not sure how to connect the dots here.