> Wouldn’t this method be good if applied on humans in job interviews?
Uhm, no? I mean, some firms do abuse job interviews to pump candidates for usable information, and some have gotten a notably bad reputation for that, which impacts their funnel of candidates. But from the article: "Generating comprehensive datasets requires thousands of model calls per topic". You aren't going to get a candidate to hang around for that...
There are some fun early theoretical ML papers on this topic.
They prove that it is, in principle, possible to fully clone a brain this way.
I think one could theoretically estimate how many queries you would need to make to do it. The worst case is proportional to the number of parameters of the model, i.e. at least 10^15 (the usual synapse-count estimate) for a human. At one minute per spoken sample, that comes out to about 2 billion years to clone one human.
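A quick back-of-the-envelope check of that number, as a Python sketch (the 10^15 parameter count and the one-minute-per-sample rate are the assumptions from above):

```python
# Worst case: one one-minute spoken sample per parameter.
params = 1e15                  # rough synapse-count estimate for a human
minutes_per_sample = 1
total_minutes = params * minutes_per_sample
years = total_minutes / (60 * 24 * 365)
print(f"{years:.2e} years")    # ~1.90e+09, i.e. about 2 billion years
```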
I suspect it is not practical without advances in neural interfaces (Neuralink-style) to increase the bandwidth by billions of times.
I personally like playing around with empirical methods like the one in this blog post to understand the practical efficiency of our learning algorithms, like backprop on transformers.
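If anyone wants to poke at this themselves, here is a minimal query-based distillation sketch in PyTorch, in the spirit of the blog post; the toy teacher/student networks, the random queries, and the KL loss are all my own illustrative assumptions, not anything from the article:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# "Teacher": the black-box model we can only query for outputs.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
teacher.eval()

# "Student": the clone trained purely on the teacher's answers.
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.randn(128, 16)                          # synthetic queries
    with torch.no_grad():
        soft = F.softmax(teacher(x), dim=-1)          # teacher's answers
    loss = F.kl_div(F.log_softmax(student(x), dim=-1),
                    soft, reduction="batchmean")      # match the teacher
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final KL to teacher: {loss.item():.4f}")
```

Varying the query budget here gives a direct feel for how many samples backprop actually needs relative to the parameter count, which is exactly the quantity the worst-case argument above is about.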
I also try not to invest too much effort into this topic given the ethical issues.
That would be evil. No, I was thinking more about selective knowledge exploration, to see if a candidate is a fit for the position. No need to dump everything.
How long would it take to do a complete memory dump of your brain by voice stream? Days? Months? Years?
This is more like writing one's autobiography.