What can AI Language Models tell us about how textual information influences understanding of environmental issues?

Project reference: LE71

Application deadline: 10 April 2023


To address and mitigate the potentially catastrophic impact of climate change, humans will need to make drastic changes to their behaviour. Such changes will require widespread understanding of how individual and collective human activities affect the environment, as well as practical action based on this understanding. Achieving such understanding and action will depend not only on the dissemination of relevant information but also on its assimilation and, critically, on the motivational response that it evokes. Thus, investigating how to present information, and analysing the influence that different forms of information may have, is vital to promoting effective action in response to climate change and environmental degradation.

The neural-network-based Language Models developed by Artificial Intelligence researchers over the last decade (for example ELMo, BERT and, more recently, GPT-3 and ChatGPT) are able to learn and replicate typical patterns of natural language text to the extent that, given some natural language text as a starting point (a prompt), a language model can generate a continuation of the input that seems highly plausible and similar to the kind of elaboration or response that a human might make to that prompt. This functionality can be used to generate text for applications such as chatbots, web page creation and user interfaces in carbon-tracking smartphone apps. Such models can also be used to quantify and rank the likelihood of different potential continuations. This can support functionality that derives interpretations or inferences from text. For example, one can rank a set of possible responses in order to find the one most strongly “implied” by a given prompt. Conversely, if we would like to elicit some particular response, we could search for prompting text that would impart a high likelihood to the desired continuation.
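The continuation-ranking idea can be sketched with a deliberately simple statistical model. The sketch below uses a smoothed bigram model over a tiny invented corpus rather than a neural network, and all names and example sentences are illustrative; real systems such as GPT compute analogous log-likelihoods over far richer learned distributions.

```python
import math
from collections import Counter

# Toy corpus standing in for the large text collections that neural
# language models are trained on (illustration only).
corpus = (
    "cutting emissions reduces warming . "
    "reducing car use cuts emissions . "
    "cutting emissions reduces warming . "
    "planting trees absorbs carbon ."
).split()

# Estimate bigram probabilities with add-one smoothing.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = len(unigrams)

def log_likelihood(tokens):
    """Sum of smoothed bigram log-probabilities for a token sequence."""
    total = 0.0
    for prev, word in zip(tokens, tokens[1:]):
        total += math.log((bigrams[(prev, word)] + 1) /
                          (unigrams[prev] + vocab))
    return total

def rank_continuations(prompt, candidates):
    """Rank candidate continuations by their likelihood given the prompt."""
    scored = [(log_likelihood((prompt + " " + c).split()), c)
              for c in candidates]
    return sorted(scored, reverse=True)

prompt = "cutting emissions"
candidates = ["reduces warming", "absorbs carbon"]
for score, continuation in rank_continuations(prompt, candidates):
    print(f"{score:.2f}  {continuation}")
```

On this toy corpus, "reduces warming" is ranked above "absorbs carbon" as a continuation of the prompt, mirroring how a language model would rank the response most strongly "implied" by given information.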

Numerous deployed systems show that this kind of functionality can support sophisticated and useful applications. However, the strengths and weaknesses of language models are still poorly understood, and the numerical ranking of alternative continuations of an information prompt does not have a clear semantics. Hence, the focus of this PhD project will be to investigate the robustness and reliability of AI language models, with the aim of finding effective ways of informing and motivating humans in relation to climate change issues.

In particular, the PhD research could focus on one or more of the following questions:

• To what extent do rankings of text prompt continuations given by language models correspond to typicality of human responses to given information?

• Are responses given to information prompts (either by language models or humans) indicative of actual influence of that information on beliefs and/or actions?

• Are there particular aspects of the form and content of information that affect:
* the way the information influences humans;
* the reliability with which AI language models can replicate human responses to the information?

• Can textual information impart awareness of technical and quantitative aspects sufficient to make reasoned practical choices relating to environmental issues?

Since the project will investigate limitations of AI Language Model approaches, it may also consider other approaches to explaining how humans interpret and respond to information. In particular, the following approaches are likely to be relevant:

• Semantics, knowledge representation and logical inference. Approaches to AI based on symbolic representation can capture both general principles and factual information and use these to infer conclusions or construct plans to achieve given goals.

• Psychology. Psychological experiments have revealed tendencies in the ways that humans interpret, and often misinterpret, information, such as the phenomena of confirmation bias and motivated reasoning, where people select or distort information to fit their own beliefs or self-interest.

• Sociology. Social research shows that humans are influenced by interactions with others in their social surroundings and by values and norms upheld in their social networks when processing information.

• Game theory. Game theory provides a framework for interpreting and predicting intelligent behaviour and explaining both individual and cooperative behaviour based on choices and incentives.
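The symbolic approach mentioned above can be illustrated with a minimal forward-chaining sketch (purely illustrative: the facts, rule names and predicates are invented for this example, and this is not the KARaML toolkit). General principles are encoded as rules, factual information as assertions, and conclusions are derived by repeatedly applying rules until nothing new follows.

```python
# Known facts (factual information) and rules (general principles).
# Each rule is a (premises, conclusion) pair.
facts = {"drives_daily", "car_burns_petrol"}
rules = [
    ({"drives_daily", "car_burns_petrol"}, "emits_co2"),
    ({"emits_co2"}, "contributes_to_warming"),
]

def forward_chain(facts, rules):
    """Derive every conclusion that follows from the facts via the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are already established.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
```

Dedicated reasoners such as Clasp, Prover9 and Vampire implement far more expressive logics than this propositional sketch, but the principle of combining general rules with specific facts to infer conclusions is the same.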

The project may consider how one or more of these alternative perspectives may complement the analysis in terms of AI Language Models, and could potentially explore ways of combining different approaches.

As part of the PhD work, it is envisaged that the student will use and contribute to the development of a collection of software tools assembled by Dr Bennett within a framework known by the acronym KARaML (Knowledge Assimilation using Reasoning and Machine Learning). This tool set provides interfaces both to Language Models (including BERT and GPT) and to tools supporting semantic analysis (K-Parser) and logical reasoning (Clasp, Prover9, Vampire). Using this software will require competency in programming, in particular in Python.