> frequency of discussion (especially within an obsessive subgroup) does not represent effective implementation
I asked the chat tool to count how many different "Show HN" post titles mention each programming language. If the tool is accurate, the results diverge somewhat from what you are implying.
| Language | Post count |
|------------|-----------:|
| Python | 3117 |
| JavaScript | 2545 |
| Go | 2178 |
| Rust | 1251 |
| TypeScript | 607 |
| Java | 605 |
| Ruby | 531 |
| PHP | 514 |
| Swift | 433 |
| Clojure | 229 |
| Elixir | 173 |
| Haskell | 142 |
| Kotlin | 128 |
| Scala | 122 |
| Lua | 110 |
| C++ | 101 |
| Erlang | 61 |
| Dart | 45 |
| Perl | 35 |
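For what it's worth, here's a rough sketch of the kind of count I imagine it's running, assuming it already has the Show HN titles loaded somewhere. The `titles` list and the language list are my guesses, not what the tool actually does:

```python
import re
from collections import Counter

# The fixed list of languages the tool appears to have searched for
# (reconstructed from the table above -- an assumption on my part).
LANGUAGES = [
    "Python", "JavaScript", "Go", "Rust", "TypeScript", "Java", "Ruby",
    "PHP", "Swift", "Clojure", "Elixir", "Haskell", "Kotlin", "Scala",
    "Lua", "C++", "Erlang", "Dart", "Perl",
]

def count_language_mentions(titles: list[str]) -> Counter:
    """Count how many titles mention each language at least once."""
    counts = Counter()
    for lang in LANGUAGES:
        if lang == "C++":
            # "+" is not a word character, so a trailing \b would misbehave.
            pattern = re.compile(r"C\+\+", re.IGNORECASE)
        else:
            # Word boundaries keep "Java" from also matching "JavaScript",
            # but short names like "Go" still match the ordinary English word.
            pattern = re.compile(rf"\b{re.escape(lang)}\b", re.IGNORECASE)
        counts[lang] = sum(1 for title in titles if pattern.search(title))
    return counts

# Hypothetical usage; in reality the titles would come from an HN dump or API.
titles = [
    "Show HN: A Rust crate for parsing PDFs",
    "Show HN: My first Go project",
    "Show HN: A Lisp interpreter in 500 lines",  # never counted by this list
]
print(count_language_mentions(titles).most_common())
```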
No Lisp? On HN!? There has to be something wrong here.
I think this is a result of the strategy the AI chose for picking languages. While it was planning what to do, it said it was going to run a regex against the post titles, and it probably only included the specific languages listed above in that regex, leaving other languages out. The numbers for the languages it did look for should still be accurate, but it may be missing several other languages that are mentioned more or less widely. Something like the sketch below.
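This is just my guess at the shape of its pattern, but if it was a fixed alternation over that list, anything outside the list is simply invisible:

```python
import re

# Guessed shape of the pattern: a fixed alternation, so any language
# not on the list (Lisp, Scheme, OCaml, ...) can never show up in the counts.
pattern = re.compile(
    r"\b(Python|JavaScript|Go|Rust|TypeScript|Java|Ruby|PHP|Swift|Clojure"
    r"|Elixir|Haskell|Kotlin|Scala|Lua|Erlang|Dart|Perl)\b",
    re.IGNORECASE,
)

print(pattern.findall("Show HN: A toy Scheme compiler written in Common Lisp"))
# [] -- the post exists, but it contributes to nobody's count
```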
If I ask it specifically to count how many Show HN posts mention Lisp or Scheme in the title, it says there are 370 in total that mention one or the other.
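If you wanted to sanity-check that 370 yourself, the pattern is simple, though a naive match is really an upper bound, since "scheme" is also an ordinary English word. A rough sketch, with `titles` standing in for whatever set of Show HN titles you have:

```python
import re

# Case-insensitive, whole-word match for either name. "Scheme" will also hit
# non-language titles ("color scheme", "Ponzi scheme"), hence the upper bound.
LISP_OR_SCHEME = re.compile(r"\b(Lisp|Scheme)\b", re.IGNORECASE)

def count_lisp_scheme(titles):
    return sum(1 for title in titles if LISP_OR_SCHEME.search(title))
```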
If we were to do a careful analysis to control for the bias of one site, we would consider more sources, for example:
https://www.tiobe.com/tiobe-index/
https://survey.stackoverflow.co/2024/technology
But this tool only analyzes HN, so why would it need to consider other sites? Of course the results can be different.