I don't think this is the best option; there could be very hard-to-diagnose bugs or performance cliffs. I think I'd rather have an explicit opt-in than have the abstraction change underneath me. Have my IDE scream at me and make me consider whether I really need the opt-in, or whether I should restructure.
That said, I do agree with the sentiment of choosing a construct and letting it optimize if it can. It reminds me of a Rich Hickey talk about sets being unordered and lists being ordered: if you want to express a bag of non-duplicate, unordered items, you should always use a set to convey that meaning.
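To make that concrete, here is a toy Rust sketch (the function names and data are made up): returning the set instead of the list puts the "unordered, no duplicates" contract right in the signature.

    use std::collections::HashSet;

    // A Vec says "ordered, duplicates allowed"; a HashSet says "unordered,
    // no duplicates". Returning the set states that contract in the signature.
    fn visited_hosts_in_order(log: &[&str]) -> Vec<String> {
        log.iter().map(|h| h.to_string()).collect()
    }

    fn distinct_visited_hosts(log: &[&str]) -> HashSet<String> {
        log.iter().map(|h| h.to_string()).collect()
    }

    fn main() {
        let log = ["a.example", "b.example", "a.example"];
        println!("{:?}", visited_hosts_in_order(&log)); // keeps order and the duplicate
        println!("{:?}", distinct_visited_hosts(&log)); // no duplicate, order unspecified
    }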
It's interesting that small hash sets are often slower than small arrays (with only a handful of elements, the hashing overhead costs more than a straight linear scan), so it would be cool if the compiler could notice the size or access patterns and optimize in those scenarios.
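Roughly the kind of thing I mean, but done by hand as the explicit opt-in the parent prefers (made-up SmallSet type, Rust just for concreteness, threshold picked arbitrarily): linear-scan a plain Vec while the set is tiny, spill into a real HashSet once it grows.

    use std::collections::HashSet;

    // Hypothetical small-size-optimized set: scan a Vec while the set is
    // tiny (where scanning beats hashing), spill into a HashSet once it
    // crosses an arbitrary threshold.
    enum SmallSet {
        Inline(Vec<u64>),
        Spilled(HashSet<u64>),
    }

    const SPILL_THRESHOLD: usize = 8; // tuning knob, purely illustrative

    impl SmallSet {
        fn new() -> Self {
            SmallSet::Inline(Vec::new())
        }

        fn contains(&self, x: u64) -> bool {
            match self {
                SmallSet::Inline(items) => items.contains(&x), // O(n), but n is tiny
                SmallSet::Spilled(set) => set.contains(&x),    // O(1) expected
            }
        }

        fn insert(&mut self, x: u64) {
            match self {
                SmallSet::Inline(items) => {
                    if !items.contains(&x) {
                        items.push(x);
                        if items.len() > SPILL_THRESHOLD {
                            // Linear scans stop paying off; move everything into a HashSet.
                            let spilled: HashSet<u64> =
                                std::mem::take(items).into_iter().collect();
                            *self = SmallSet::Spilled(spilled);
                        }
                    }
                }
                SmallSet::Spilled(set) => {
                    set.insert(x);
                }
            }
        }
    }

    fn main() {
        let mut s = SmallSet::new();
        for x in [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9] {
            s.insert(x);
        }
        assert!(s.contains(7));
        assert!(!s.contains(100));
    }

The interesting part is the spill decision: that's exactly the size/access-pattern judgment call you'd want a compiler or library to make for you, and exactly where a wrong guess turns into a performance cliff.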
Right, SQL query optimizers are a good example: in theory the optimizer should "just know" the optimal way of doing things, but because those decisions are made at runtime based on query analysis, small changes to the logic can cause huge changes in performance.