Kind of! This script assumes that you're dealing with a byte slice, which means you've already encoded your Unicode data.
If you just encoded your string to bytes naïvely, it will mostly still work, but it will get some combining characters wrong if they're represented differently in the two sources you're comparing (e.g., a precomposed e-with-acute character vs. a plain e followed by a combining accent character).
If you want to be more correct, you'll normalize your Unicode string[1], but note that there are four defined normalization forms, so you'll need to choose the one that is the best tradeoff for your particular application and data sources.
[1]: https://en.wikipedia.org/wiki/Unicode_equivalence#Normalizat...
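To make that concrete, here's a small sketch using Python's standard-library `unicodedata` module (the strings are just illustrative):

```python
import unicodedata

# The same visible text, encoded two different ways:
precomposed = "caf\u00e9"    # 'é' as a single precomposed code point
decomposed = "cafe\u0301"    # 'e' followed by U+0301 COMBINING ACUTE ACCENT

print(precomposed == decomposed)    # False: different code point sequences
print(precomposed.encode("utf-8"))  # b'caf\xc3\xa9'
print(decomposed.encode("utf-8"))   # b'cafe\xcc\x81'

# Pick one of the four normalization forms (NFC, NFD, NFKC, NFKD) and apply
# it to both sides before comparing or searching:
print(unicodedata.normalize("NFC", precomposed) ==
      unicodedata.normalize("NFC", decomposed))  # True
```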
> If you just encoded your string to bytes naïvely
By "naïvely" I assume you mean you would just plug in UTF-8 bytestrings for haystack & needle, without adjusting the implementation?
Wouldn't the code still need to take into account where characters (code points) begin and end, though, in order to prevent incorrect matches?
IDK what "encoded your string to bytes naively" means personally. There is only one way to correctly UTF-8 encode a sequence of Unicode scalar values.
In any case, no, this works because UTF-8 is self-synchronizing. As long as both your needle and your haystack are valid UTF-8, the byte offsets returned by the search will always fall on a valid codepoint boundary.
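Here's a quick sketch of that in Python; the byte-level `find` below just stands in for whatever byte-oriented substring algorithm you're actually using:

```python
haystack = "naïve café test".encode("utf-8")
needle = "café".encode("utf-8")

# Plain byte-level search, standing in for any byte-oriented substring algorithm.
offset = haystack.find(needle)
print(offset)  # 7, a byte offset (not a character offset)

# Because UTF-8 is self-synchronizing, a match of one valid UTF-8 string inside
# another can only begin on a codepoint boundary, so slicing there is safe:
print(haystack[offset:offset + len(needle)].decode("utf-8"))  # café
```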
In terms of getting "combining characters wrong," this is a reference to different Unicode normalization forms.
To be more precise: consider a needle and a haystack, each a sequence of Unicode scalar values (typically stored as unsigned 32-bit integers). Now encode them to UTF-8 (a sequence of unsigned 8-bit integers) and run a byte-level search as shown by the OP here. That will behave as if you'd executed the search on the original sequences of Unicode scalar values.
So semantically, a "substring search" is a "sequence of Unicode scalar values search." At the semantic level, this may or may not be what you want. For example, if you always want `office` to find substrings like `oﬃce` (with the `ﬃ` ligature, U+FB03) in your haystack, then this byte-level search will not do what you want.
The standard approach for performing a substring search that accounts for normalization forms is to convert both the needle and haystack to the same normal form and then execute a byte-level search.
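A sketch of that approach in Python, again with the standard-library `unicodedata` module. NFKC is chosen here only for illustration (it folds compatibility characters like the ligature; plain NFC would leave it alone), so pick the form that fits your data:

```python
import unicodedata

haystack = "the o\ufb03ce printer"   # contains 'oﬃce' with the U+FB03 ligature
needle = "office"

# A raw byte-level search misses the match because the byte sequences differ:
print(haystack.encode("utf-8").find(needle.encode("utf-8")))  # -1

def normalized_find(hay: str, ndl: str) -> int:
    # Convert both sides to the same normal form, then search the bytes.
    h = unicodedata.normalize("NFKC", hay).encode("utf-8")
    n = unicodedata.normalize("NFKC", ndl).encode("utf-8")
    return h.find(n)  # note: a byte offset into the *normalized* haystack

print(normalized_find(haystack, needle))  # 4
```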
(One small caveat is when the needle is an empty string. If you want to enforce correct UTF-8 boundaries, you'll need to handle that specially.)
By naively, I meant without normalization.
You know much more about this than I do, though.
edit: this is what I mean, for example: `tést` != `tést` in rg, because the first uses \u00e9 (precomposed e with acute accent) and the second uses e\u0301 (e followed by a combining acute accent)
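A quick Python illustration of that (the two strings render identically but compare unequal until normalized):

```python
import unicodedata

nfc_form = "t\u00e9st"    # 'é' as one precomposed code point (U+00E9)
nfd_form = "te\u0301st"   # 'e' followed by U+0301 combining acute accent

print(nfc_form, nfd_form)    # both render as 'tést'
print(nfc_form == nfd_form)  # False: different code point sequences
print(unicodedata.normalize("NFC", nfd_form) == nfc_form)  # True after normalizing
```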
edit 2: if we normalize the UTF-8, the two strings will match. Which you know, and indicate! Just working an example of it that maybe will help people understand, I dunno.

Thanks for this detailed answer!