Do AIs Know What the Most Important Issue Is? Using Language Models to Code Open-Text Social Survey Responses At Scale
171 Pages · Posted: 27 Dec 2022 · Last revised: 29 Aug 2023
Date Written: August 27, 2023
Abstract
Can artificial intelligence accurately label open-text survey responses? We compare the accuracy of six Large Language Models (LLMs) using a few-shot approach, three supervised learning algorithms (SVM, DistilRoBERTa, and a neural network trained on BERT embeddings), and a second human coder on the task of categorizing “most important issue” responses from the British Election Study Internet Panel into 50 categories. In the scenario where a researcher lacks existing training data, the accuracy of the highest-performing LLM (Claude-1.3: 93.9%) neared human performance (94.7%) and exceeded the highest-performing supervised approach trained on 1,000 randomly sampled cases (neural network: 93.4%). In a scenario where previous data has been labeled but a researcher wants to label novel text, the few-shot performance of the best LLM (Claude-1.3: 80.9%) is only slightly behind the human coder (88.6%) and exceeds the best supervised model trained on 576,000 cases (DistilRoBERTa: 77.6%). PaLM-2, Llama-2, and the SVM all performed substantially worse than the best LLMs and supervised models across all metrics and scenarios. Our results suggest that LLMs may allow for greater use of open-ended survey questions in the future.
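The few-shot approach the abstract describes amounts to placing a small number of labeled example responses in the prompt and asking the model to assign one of the fixed categories to a new response. A minimal sketch of such prompt construction is below; the category names, example responses, and `build_prompt` function are all illustrative assumptions, not the paper's actual prompt or codebook.

```python
# Illustrative sketch of few-shot prompt construction for coding
# "most important issue" responses. All category labels and examples
# below are hypothetical, not taken from the paper's 50-category scheme.

CATEGORIES = ["economy-general", "immigration", "health", "environment"]

FEW_SHOT_EXAMPLES = [
    ("The cost of living keeps going up", "economy-general"),
    ("Too many people coming into the country", "immigration"),
    ("NHS waiting lists are far too long", "health"),
]

def build_prompt(response: str) -> str:
    """Assemble a few-shot classification prompt for an LLM."""
    parts = [
        "Classify each survey response into exactly one category from: "
        + ", ".join(CATEGORIES) + "."
    ]
    # Each labeled example demonstrates the expected response/label format.
    for text, label in FEW_SHOT_EXAMPLES:
        parts.append(f'Response: "{text}"\nCategory: {label}')
    # The new response is appended with an open "Category:" slot
    # for the model to complete.
    parts.append(f'Response: "{response}"\nCategory:')
    return "\n\n".join(parts)

prompt = build_prompt("Climate change is destroying the planet")
print(prompt)
```

The returned string would then be sent to the LLM's completion or chat API, with the model's output mapped back onto the category list.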
Keywords: ChatGPT, GPT-4, GPT-3.5, large language models, LLMs, most important issue, MII, most important problem, MIP, open text, public opinion, text coding, text as data, Claude, Anthropic, Llama 2, PaLM, Replicate
Suggested Citation