ChatGPT Intimates a Tantalizing Future; Its core LLM is Organized on Multiple Levels; and it has Broken the Idea of Thinking, Version 3

25 Pages · Posted: 25 Jan 2023 · Last revised: 13 Apr 2023

Date Written: February 6, 2023

Abstract

I make three arguments. A philosophical argument: (1) The behavior of ChatGPT is so sophisticated that the ordinary concept of thinking is no longer useful for distinguishing between human behavior and ChatGPT’s behavior; we have no explicit understanding of what either humans or ChatGPT are doing. Two operational arguments: (2) Having examined its output in a systematic way, short stories in particular, I conclude that inference is organized on at least two levels: a) a ‘lower’ level of sentence-level syntax, and b) a ‘higher’ level where specific kinds of texts, such as stories, are implemented over and operate on sentences. This is roughly analogous to the way high-level programming languages are implemented in assembly code. (3) Consequently, aspects of full symbolic computation are latent in LLMs. An appendix contains descriptive tables showing how four stories are organized on multiple levels.
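To make the compilation analogy concrete, here is a minimal sketch, purely illustrative and not the paper’s method: a hypothetical ‘higher’ story-level function is implemented over, and operates on, a hypothetical ‘lower’ sentence-level function, much as a high-level language construct is realized in assembly code. All names are invented for illustration.

```python
# Illustrative sketch of the two-level claim (all names hypothetical).

def sentence(subject: str, verb: str, obj: str) -> str:
    """Lower level: sentence-scale syntax."""
    return f"{subject} {verb} {obj}."

def story(hero: str, trouble: str, remedy: str) -> str:
    """Higher level: a story schema realized as a sequence of sentences,
    analogous to a high-level construct compiled down to assembly."""
    return " ".join([
        sentence(hero, "encountered", trouble),     # complication
        sentence(hero, "struggled with", trouble),  # development
        sentence(hero, "overcame it through", remedy),  # resolution
    ])

print(story("the princess", "a dragon", "friendship"))
```

The point of the sketch is only that the story level never manipulates words directly; it composes sentence-level operations, just as the paper argues that story structure in an LLM operates over, rather than alongside, sentence-level syntax.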

Keywords: ChatGPT, GPT, deep learning, large language models, artificial intelligence, language, story grammar, narratology, chatbot

Suggested Citation

Benzon, William L., ChatGPT Intimates a Tantalizing Future; Its core LLM is Organized on Multiple Levels; and it has Broken the Idea of Thinking, Version 3 (February 6, 2023). Available at SSRN: https://ssrn.com/abstract=4336442 or http://dx.doi.org/10.2139/ssrn.4336442

