After reading Hail Mary, I wondered how reasonable it would be for someone to truly understand a language based on tones/chords alone. Maybe 60 words per minute would be enough to communicate, but it sure would be frustrating.

You might be interested to read about whistled languages, which are pretty close to that idea:

https://en.wikipedia.org/wiki/Whistled_language

I think you could get faster with a language actually meant to be 'sung', instead of this rough translation of English characters into audio.

My first thought was: “oh, that’s an interesting concept, I wonder how hard it would be to learn?”

Then I saw the frequency/time graph, and realised that didn’t seem to have been a consideration at all. This was obviously designed by a sighted person who cared more about what the pictures looked like!

Blind person: “But how do I know which letter is which?” Designer: “Oh, that’s easy! Just look at the picture!”

I love the idea of a sung language, though!

Take a look at when this was invented; it's a critical detail in evaluating all this: it was 1913! They were working with the very limited technology of the day. They couldn't detect the letters and map them to a particular new tone or chord that might be easier to understand; that tech just wasn't possible [0]. They had to directly translate the image of the letters, falling on simple photoreceptors, into a corresponding frequency value.
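For intuition, the direct mapping described above can be sketched in a few lines: a vertical column of photocells scans across a printed letter, and each dark cell in the column switches on a tone at a fixed frequency. This is a minimal model of the idea, not the actual 1913 circuit, and the frequencies and bitmap here are illustrative assumptions.

```python
import math

# One assumed tone per photocell row (illustrative frequencies, not the
# device's real pitches).
ROW_FREQS = [256.0, 320.0, 384.0, 426.7, 512.0]

def column_to_samples(column, sample_rate=8000, duration=0.05):
    """Mix one sine wave per 'dark' cell in a single scanned column."""
    active = [f for cell, f in zip(column, ROW_FREQS) if cell]
    n = int(sample_rate * duration)
    if not active:
        return [0.0] * n  # blank column -> silence
    return [
        sum(math.sin(2 * math.pi * f * t / sample_rate) for f in active) / len(active)
        for t in range(n)
    ]

# A crude 5x3 bitmap of the letter "I"; scanning left to right yields
# three columns, each producing a short chord.
letter_I = [
    [1, 1, 1],
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
    [1, 1, 1],
]
columns = [[letter_I[r][c] for r in range(5)] for c in range(3)]
audio = [s for col in columns for s in column_to_samples(col)]
```

Note how no recognition happens anywhere: the sound is just the pixel pattern, which is exactly why the letters' shapes, rather than their distinguishability by ear, dominate the result.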

[0] As I was writing this I did have the wild thought that, in theory, if you already had the weights you could implement a very basic character-recognition neural net with analog circuitry using vacuum tubes, one that could recognize letters for direct mapping to sound. But it's entirely impractical to create from scratch in a reasonable time frame. Maybe over the span of decades you could manually tune one?

IIRC from reading the paper years ago, they chose the tones for each row in the column so they would provide distinct combinations of concordant and discordant sounds.
