> As you increase the size of the input data, the accuracy gradually decreases.
Interesting.
Regarding your section "Limitations and Areas for Further Study", what I'd be curious about for future work would be:
- changing the order of the data in each table type
- changing the order of the questions
I'm curious to know whether the failures stay the same, whether they change depending on location, whether it's a bias. Is it always a specific question? Is it always a specific value? Is it always question #x (or around question #x)? Does it tend towards x or y on certain types of questions? (Something like the sketch below could tally that.)
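For example, here's a rough Python sketch of what I have in mind; `ask_model` is just a placeholder for whatever harness the benchmark uses, and the data layout (table rows, questions, expected answers, and which row each answer comes from) is my own assumption:

```python
# Minimal sketch: shuffle row and question order each trial and tally
# where failures land. `ask_model` is a placeholder, not the real harness.
import random
from collections import Counter

def run_trials(rows, questions, answers, answer_row, ask_model,
               n_trials=20, seed=0):
    """rows       -- table rows as strings
    questions  -- question strings
    answers    -- expected answer strings, aligned with `questions`
    answer_row -- index of the row each answer comes from
    ask_model  -- callable(table_text, question) -> model reply string
    """
    fail_by_q_pos, fail_by_row_pos = Counter(), Counter()
    rng = random.Random(seed)

    for _ in range(n_trials):
        row_order = rng.sample(range(len(rows)), len(rows))
        q_order = rng.sample(range(len(questions)), len(questions))
        table_text = "\n".join(rows[i] for i in row_order)

        for q_pos, qi in enumerate(q_order):
            reply = ask_model(table_text, questions[qi])
            if answers[qi] not in reply:
                # was it always question #x, or a row near the start/end?
                fail_by_q_pos[q_pos] += 1
                fail_by_row_pos[row_order.index(answer_row[qi])] += 1

    return fail_by_q_pos, fail_by_row_pos
```

If the failure counts cluster on particular question positions or on rows that land near the middle of the shuffled table, that would point to a positional bias rather than to specific questions or values.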
Good idea
LLMs have documented position biases, with a skew towards the first and last positions. This is strongest across messages, due to system-prompt + current-question training data, but it's present in list data in general.
Exactly. But in the papers I've seen, the tests are usually based on multiple-choice answers.
In this case, the questions asked have a single correct answer, so the bias would be in the order of the input data rather than in a set of answer choices. It's different enough that it triggered my curiosity.

https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00638...