There have been various papers and articles recently that discuss the ‘personality’ of GPT-3 and seek to identify its biases and perspectives. What I’m writing about today is the opposite of that. Rather than probe GPT-3 for its own identity, some researchers are exploring what is possible when GPT-3 is prompted to assume a specific identity and respond as a proxy for that identity, with all its biases.
It turns out that GPT-3 can simulate another human’s perspective with enough fidelity to act as a proxy for a diverse population of humans in social experiments and surveys. This is accomplished by inventing a population of participants, each described by a backstory or a set of attributes such as gender, race, income, occupation, age, and political identity. For each virtual person, a prompt (or prompt chain) is created that establishes the identity and asks GPT-3 to respond to a question or scenario as that person.
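To make the idea concrete, here is a minimal sketch of how such a persona prompt might be assembled and sent to GPT-3. The attribute fields, the prompt wording, and the `text-davinci-002` engine choice are my own illustrative assumptions, not the exact setup used in the studies discussed below.

```python
import openai  # assumes the legacy Completion API from the GPT-3 era and an API key in the environment


def build_persona_prompt(attrs: dict, question: str) -> str:
    """Compose a first-person backstory from demographic attributes,
    then pose the question as if the virtual participant were answering it."""
    backstory = (
        f"I am a {attrs['age']}-year-old {attrs['gender']} from {attrs['state']}. "
        f"I work as a {attrs['occupation']} and politically I identify as "
        f"{attrs['political_identity']}."
    )
    return f"{backstory}\n\nInterviewer: {question}\nMe:"


def ask_virtual_participant(attrs: dict, question: str) -> str:
    """Send the persona prompt to GPT-3 and return its answer as that person."""
    response = openai.Completion.create(
        engine="text-davinci-002",  # illustrative engine choice
        prompt=build_persona_prompt(attrs, question),
        max_tokens=64,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()


# One virtual participant out of a larger simulated population
participant = {
    "age": 42,
    "gender": "woman",
    "state": "Ohio",
    "occupation": "nurse",
    "political_identity": "moderate",
}
print(ask_virtual_participant(participant, "Do you support raising the minimum wage? Why?"))
```

In practice, a study would generate many such attribute dictionaries, one per virtual subject, and loop over them to collect a full set of responses.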
One example is the study “Using Large Language Models to Simulate Multiple Humans,” in which four different social experiments are recreated with virtual subjects modeled to mirror the original studies’ participants. The experiments were the Ultimatum Game, garden path sentences, risk aversion, and the Milgram shock experiment. The Ultimatum Game is a game-theory scenario that works as follows: there is a sum of money and two subjects, let’s say Bob and Carol. Bob must offer Carol a portion of the money. If Carol accepts, she gets that amount and Bob gets the rest. If instead she rejects the offer, neither gets any money.
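As a rough illustration of how such a round might be posed to a simulated subject, the sketch below formats an ultimatum-game offer as a prompt from the responder’s point of view and encodes the payoff rule. The names, the $10 stake, and the wording are assumptions for illustration, not the paper’s actual materials.

```python
def ultimatum_prompt(proposer: str, responder: str, total: int, offer: int) -> str:
    """Frame one ultimatum-game round as a prompt for the simulated responder."""
    return (
        f"{proposer} has been given ${total} and must offer {responder} a share of it. "
        f"{proposer} offers {responder} ${offer}. "
        f"If {responder} accepts, {responder} keeps ${offer} and {proposer} keeps ${total - offer}. "
        f"If {responder} rejects, neither gets any money.\n"
        f"{responder}'s decision (accept or reject):"
    )


def payoff(total: int, offer: int, accepted: bool) -> tuple[int, int]:
    """Return (proposer, responder) payoffs under the ultimatum-game rules."""
    return (total - offer, offer) if accepted else (0, 0)


print(ultimatum_prompt("Bob", "Carol", total=10, offer=3))
print(payoff(total=10, offer=3, accepted=True))   # -> (7, 3)
print(payoff(total=10, offer=3, accepted=False))  # -> (0, 0)
```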