Just copy-paste me! Assessing the risks of epistemic dependence on Large Language Models

chapter
posted on 2024-11-25, 05:03 authored by Alessio Tacca, Frederic Gilbert

LLMs are increasingly being employed in epistemic contexts, namely contexts in which a subject uses LLM outputs to fulfil their epistemic goals, such as acquiring justification or increasing their knowledge or understanding. Relying on an LLM system to achieve epistemic goals one is unable or unwilling to achieve by other means (experience, testimony, etc.) comes with epistemic risks. In our contribution, we illustrate the gradual progression of what we call the spectrum of epistemological risks: an incremental model of the epistemic harms linked with the use of LLMs, ranging from casual use to reliance, over-reliance, dependence, and addiction. Dependence on LLMs seems notably potent, since LLM outputs are often relied upon uncritically, potentially generating a process of epistemic deskilling. We suggest that increasing knowledge and understanding of how LLMs work, and of how they sit within the context of users’ epistemic goals, is one way to mitigate epistemically dependent agents’ vulnerability. A thorough analysis of epistemic dependence on LLMs is essential to better understand the epistemic relation between users and language models, and it also offers significant insight into whether or not we need even larger LLMs.

History

Publication title

Anna's AI Anthology. How to Live with Smart Machines?

Editors

A Strasser

ISBN

3-942106-90-6

Department/School

Philosophy and Gender Studies

Publisher

Xenomoi Verlag

Place of publication

Berlin

Rights statement

Copyright 2024 Xenomoi Verlag, Berlin
