PhilSci Archive

Holding Large Language Models to Account

Miller, Ryan (2023) Holding Large Language Models to Account. Proceedings of the AISB Convention 2023. pp. 7-14.

Text: llmaccountability-paper.pdf (284kB)

Abstract

If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.



Item Type: Published Article or Volume
Creators: Miller, Ryan (ryan.miller@unige.ch, ORCID: 0000-0003-0268-2570)
Keywords: Large Language Models, authorship, responsibility, reference, hallucinations
Subjects: Specific Sciences > Artificial Intelligence > AI and Ethics
Specific Sciences > Artificial Intelligence
Specific Sciences > Cognitive Science > Concepts and Representations
Specific Sciences > Artificial Intelligence > Machine Learning
General Issues > Science and Policy
Depositing User: Ryan Miller
Date Deposited: 15 May 2023 13:00
Last Modified: 15 May 2023 13:00
Item ID: 22103
Journal or Publication Title: Proceedings of the AISB Convention 2023
Publisher: Society for the Study of Artificial Intelligence and the Simulation of Behaviour
Official URL: https://aisb.org.uk/wp-content/uploads/2023/05/ais...
Date: 2023
Page Range: 7-14
URI: https://philsci-archive-dev.library.pitt.edu/id/eprint/22103
