Just copy-paste me! Assessing the risks of epistemic dependence on Large Language Models
LLMs are increasingly being employed in epistemic contexts, namely those contexts in which a subject uses LLM outputs to fulfill their epistemic goals, such as acquiring justification or increasing their knowledge or understanding. Relying on an LLM system to achieve epistemic goals one is unable or unwilling to achieve by other means (experience, testimony, etc.) comes with epistemic risks. In our contribution, we illustrate what we call the spectrum of epistemological risks, an incremental model of the epistemic harms linked with the use of LLMs, ranging from casual use through reliance and over-reliance to dependence and addiction. Dependence on LLMs appears especially potent, since LLM outputs are often relied upon uncritically, potentially generating a process of epistemic deskilling. We suggest that increasing users' knowledge and understanding of how LLMs work, and of how they sit within the context of users' epistemic goals, is one way to mitigate the vulnerability of epistemically dependent agents. A thorough analysis of epistemic dependence on LLMs is essential for better understanding the epistemic relation between users and language models, and it also offers significant insight into whether or not we need even larger LLMs.