World Creation by Analogy

Latitude Team
12 min read · Dec 30, 2020

The magic behind how we use GPT-3 to create worlds.

AI Dungeon includes rich worlds, each with its own kingdoms and towns, factions, classes, and characters. The worlds are original and varied, filled with infinite detail. Anything novel, whether made by people, like a feature film, or made by machine, such as the image in a kaleidoscope, is a recombination of previously existing features in a new way. For an author, this can be a difficult process of extrapolating the consequences of world design, often requiring multiple drafts and returning to the same ideas from different angles until something works. GPT-3 is different: whatever potential “ideas” it is “considering” can only affect the very next word produced. (This can occasionally lead to contradictions between one sentence and the next, because there is no editing process to catch them.) So it’s natural to wonder: just what kind of creative process is GPT-3 using to come up with original material?

You may have heard that GPT-3 isn’t great at reasoning. That’s pretty much true for multi-step deductive reasoning, at least with the methods we’ve come up with so far. At analogical reasoning, however, it is phenomenal. It can invent entire extended metaphors. Here is one that GPT-3 created. Our prompt consisted of the first analogy in full, plus the heading and opening word or two of each of the next two; the rest is generated (later generations use everything above, including the generated parts, as the new prompt):

Here is an analogy between the solar system and the atom:
* the sun corresponds to the nucleus
* the planets correspond to the electrons
* the orbits correspond to the electron shells
* the gravity of the sun corresponds to the electrostatic attraction between the nucleus and the electrons
* the gravitational force of the sun corresponds to the Coulomb force between the nucleus and the electrons
Here is an analogy between water flow and heat transfer:
* water corresponds to heat
* the pipes correspond to the conductors
* the faucet corresponds to the heat source
* the drain corresponds to the heat sink
* the water pressure corresponds to the temperature difference
* a water tower corresponds to a heat reservoir
* filling a pipe corresponds to heating a conductor
* emptying a pipe corresponds to cooling a conductor
* hydrodynamics corresponds to thermodynamics
Here is an analogy between a river and an orcish horde:
* The current in the river corresponds to the momentum of the horde
* The bed of the river corresponds to the horde's territory
* The sand and rocks in the river correspond to the orcs in the horde
* The water in the river corresponds to the orcs' weapons
* The fish in the river correspond to the other creatures in the horde
* The rage of the river corresponds to the anger that drives the horde

Most discussions of GPT-3 seem to fall into two types. One type just treats the system as a whole, a black box that mysteriously produces words. The other type looks at all of the individual pieces, explaining how each step of the program works on a technical level.

You can find an excellent explanation of the second type here (explaining GPT-2) and here (explaining what is different in GPT-3). A key point to take away is the role of the attention heads, which pick out from the prompt the most important influences on the generation of the next token.

Reading explanations of this type can be frustrating, though: clearly, whatever they’re doing with the neural networks is working. But why is it working? What kinds of structures are being formed in the weights of the network that allow the whole thing to succeed as well as it does? How does changing the context change the probabilities for the next word in just the right way?

Well, no one really knows yet, in detail. However, there is one way of looking at it that we have found pretty useful: think of the network weights for individual tokens as if they were word embeddings. We can only see the weights themselves for GPT-2, because the GPT-3 API doesn’t expose them, but the main difference is that GPT-3’s vectors are about eight times as long and so can hold more information.

year = 0.03 0.13 0.00 0.07 -0.02 -0.06 0.10 -0.08 0.11 0.09 0.03…
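If you want to look at real token vectors rather than the illustrative numbers above, one way is to read GPT-2’s token embedding matrix through the Hugging Face transformers library. This is just a sketch; the choice of token and the slice printed are arbitrary, and the numbers won’t match the example above.

from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

token_id = tokenizer.encode(" year")[0]   # " year" happens to be a single GPT-2 token
vector = model.wte.weight[token_id]       # one row of the token embedding matrix (768 numbers for small GPT-2)
print(vector[:10])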

A word embedding (or semantic vector) is a long list of numbers that represents the meaning of a word. Each individual number in the list doesn’t have a meaning that you can really put into words, but that turns out not to matter so much. There are a lot of tutorials out there that explain how to make and use word embeddings (keyword: word2vec) but for our purposes, there are just two facts you need to know about word embeddings:

  1. Words with similar meanings have similar vectors
  2. If you take the average of two word embeddings, the answer will be near the two original words and their neighbors, but not near anything unrelated.

If you’re picturing these words as points on a 2-D grid, the second fact might be counterintuitive, but in the high-dimensional space these vectors live in, it’s essentially always true.

[Figure: word embeddings mapped from 300 dimensions down to 4 (including hue and size) using t-SNE]

A lot of really cool properties follow from these two facts. Suppose the two words you choose to average are animal and protection. The closest two words to this average are animal and protection, as you’d expect from fact 2. It will also be close to other forms of these words: animals, Protect, and so forth. Once you get past these, though, things get interesting. One of the closest words is preserve. As a synonym of protect, you’d expect it to be close to protection. But another sense of the word is “animal preserve.” So close to the average, you find words that are similar in meaning to both of the words you averaged. If you add pet and fish, you’re going to get something close to goldfish and guppy.

The simple average is a blunt instrument, and it sometimes contains a little too much of one of the words and not enough of the meaning of the other. You can do a little better by taking a weighted average: .2 x pet + .8 x fish, or something like that.
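You can try these averages yourself with any off-the-shelf set of word vectors. Here is a sketch using the gensim library; the particular pretrained vectors are our choice for illustration, and the exact neighbors you get back will vary from model to model.

import gensim.downloader as api

model = api.load("glove-wiki-gigaword-300")   # pretrained 300-dimensional word vectors

# fact 2: the average of two embeddings lands near both words and their shared senses
average = (model["animal"] + model["protection"]) / 2
print(model.similar_by_vector(average, topn=10))

# a weighted average dials in more of one word's meaning than the other's
weighted = 0.2 * model["pet"] + 0.8 * model["fish"]
print(model.similar_by_vector(weighted, topn=10))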

What happens if you subtract one word from another? These are the neighbors of cap:

caps, hat, helmet, jersey, capped, uniform, shirt, bubble, limit, regulation

Most of these have in common the idea of a baseball cap. If we subtract away the vector for hat, though, the neighbors look very different:

cap - hat = caps, capped, limiting, limit, levy, regulation, restricted, tax, exemption

Now everything that has to do with hat is far away, and what remains are the meanings of cap that don’t have to do with hats, like a spending cap or a tax cap.
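The subtraction is just as easy to try with the same library (again, the specific neighbors depend on which pretrained vectors you load):

import gensim.downloader as api

model = api.load("glove-wiki-gigaword-300")   # same illustrative vectors as above

# subtracting hat strips away the headwear sense of cap
difference = model["cap"] - model["hat"]
print(model.similar_by_vector(difference, topn=10))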

If you combine addition and subtraction in one equation, something even cooler happens. Let’s look at what words might be near these three and then try adding and subtracting them from each other.

bear, hiker, shark

Near bear you’d expect to find words having to do with bear: predator, woods, that sort of thing.

Near hiker you’d find woods as well, and maybe tourist or explorer.

Near shark you’re also going to find predator, and words like ocean.

Suppose we add hiker and shark. This gives a vector that is near to predator, ocean, tourist, and woods:

hiker + shark = (related to predators, oceans, tourists, and woods)

Now look at what happens when we subtract away bear. Everything having to do with woods and predator is now far away. What we’re left with is related to ocean and tourist. What has to do with both ocean and tourist? Well, something like snorkeler or scuba diver. So we have this equation:

hiker + shark - bear ≈ snorkeler

That is what is known as a four-term analogy (see note 3 below):

bear : hiker :: shark : snorkeler

Just by doing addition and subtraction, we can form analogies! This works not just for bears and snorkelers, but for all kinds of things. For example, it works great for countries and capitals:

Italy : Rome :: Germany : Berlin

or the most frequently used example:

man : king :: woman : queen
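In gensim, this addition-and-subtraction analogy is built into most_similar, which takes lists of positive and negative words. As before, the pretrained vectors are just an illustration, so the exact answers may differ.

import gensim.downloader as api

model = api.load("glove-wiki-gigaword-300")   # same illustrative vectors as above

# bear : hiker :: shark : ?   computed as   hiker + shark - bear
print(model.most_similar(positive=["hiker", "shark"], negative=["bear"], topn=5))

# man : king :: woman : ?   computed as   king - man + woman
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=5))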

That’s nice if you want to build a tool to pass the old SAT tests, but it implies something much more than that. The arrangement of these word vectors in hyperspace has somehow captured something about the relations between words. So the arrangement of words not only captures which words are related, but how they are related. Also, since a word is near all the other ways of saying the same thing, these are relations not only between particular words, but between concepts.

So back to what’s going on under the hood in GPT-3. The neural network in GPT-3 learns three things:

  1. A vector of network weights for each possible token (sometimes called the token’s hidden state).
  2. How heavily to weight each of the previous 2048 tokens, creating a weighted sum of the hidden states from (1). This can be considered a representation of the context formed by the previous tokens.
  3. The mapping from the sum in (2) to the hidden state and associated probabilities of the next token.

For a particular input, the mapping in (3) is a vector that goes from the weighted sum in (2) to the encoding of the next token:

next token in training corpus − weighted sum of training context

In other words, the system seems to be learning to make analogies of the form:

weighted sums of training context : next tokens in training corpus :: weighted sum of new context : next token to be generated

By solving this equation:

weighted sum of new context + (next token in training corpus − weighted sum of training context) = next token to be generated

So the system is essentially able to form an analogy between the sentences it has seen in their contexts, and the sentence it is currently composing in its context. The next token, then, would be whatever is analogous to the next token from the sentences it has seen in the past in that position.
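To make the shape of this hypothesis concrete, here is a toy numpy sketch of the three ingredients and the analogy equation. Everything about it is illustrative (the tiny dimension, the random vectors, the single attention-style weighting step); it is not how GPT-3 is actually implemented.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = 8   # toy vector size; real models use thousands of dimensions

# (1) a hidden-state vector for each token in the new context
new_context_states = rng.normal(size=(5, d))

# (2) weights over those tokens give a single weighted sum representing the new context
weights = softmax(rng.normal(size=5))
new_context = weights @ new_context_states

# stand-ins for a context seen during training and the token that actually followed it
training_context = rng.normal(size=d)
training_next_token = rng.normal(size=d)

# (3) the analogy equation from the text:
# new context + (next token in training corpus - training context) ≈ next token to be generated
predicted_next_token = new_context + (training_next_token - training_context)
print(predicted_next_token)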

No one has yet established that GPT-3 is making use of these analogical geometries to decide which token to generate next. It is just a hypothesis at this point. All of the pieces are there — the tokens are arranged in a similar way to the word embeddings, and it is forming weighted sums of them — but it will take more research to establish whether this is quite what is going on.

Whatever is happening, though, GPT-3 can pick up on these simple analogies pretty easily. In the OpenAI paper, they got 65% accuracy on their analogy test set. We don’t know exactly how they formed their prompts, but we did experiment with some basic analogies ourselves. For instance, with a single example, GPT-3 gets the shark : snorkeler analogy wrong:

In [1]: prompt = """Italy : Rome :: Germany : Berlin
bear : hiker :: shark :"""

In [2]: openai.Completion.create(prompt=prompt, engine="davinci", temperature=0, max_tokens=50, stop="\n")['choices'][0]['text']
Out[2]: ' fish'

On the other hand, we found that giving GPT-3 verbal instructions describing the task, instead of merely providing an example, helped it produce the expected result:

In [1]: prompt = """A four term analogy is a particular relationship between words. Here's a few examples:
Italy : Rome :: Germany : Berlin
bear : hiker :: shark :"""

In [2]: openai.Completion.create(prompt=prompt, engine="davinci", temperature=0, max_tokens=50, stop="\n")['choices'][0]['text']
Out[2]: ' swimmer'

GPT-3 being GPT-3, we’re not restricted to these basic analogies. Even with this simple setup, we can start to get extrapolations over entities with multiple features, such as character traits. Because the prompt provides multiple examples, this is called “few-shot transfer”: the model picks up the task from a few examples rather than the whole model being retrained:

In [1]: prompt = """A four term analogy is a particular relationship between words. Here's a few examples:
Italy : Rome :: Germany : Berlin
Man : King :: Woman : Queen
Elves (Tall, Good) : Gnomes (Short, Evil) :: Humans (Tall, Neutral) :"""

In [2]: openai.Completion.create(prompt=prompt, engine="davinci", temperature=0, max_tokens=150, stop="\n")['choices'][0]['text']
Out[2]: ' Dwarves (Short, Neutral)'

Analogy is a powerful tool. If you can form analogies between what happened yesterday and what is going to happen tomorrow, you can plan what steps you need to take to prepare. Douglas Hofstadter, author of Gödel, Escher, Bach, is fond of saying that “analogy is the core of cognition.” In other words, most of thinking is just being able to form analogies well.

Here’s an example of how GPT-3’s analogical reasoning has been used in AI Dungeon. When generating worlds, the following prompt was used:

Your task is to generate a rich and detailed world that a player would be excited to play in.
Genre: Floating World Fantasy
Name: Anarop
World: Anarop is one world among many, a land of green fields and small villages with glittering cities floating above in the night sky. When a child reaches the age of 16 they must make a choice: to remain with the people of the idyllic land who grow food and live simple but pleasant lives or the people of the sky who protect Anarop from the unknown dangers beyond, living dangerous, short, and exciting lives in the clouds.
Genre: Dark Fantasy
Name: Caramea
World: Caramea is a dualistic world of human and demon. Caramea is home to towering, impenetrable mountains and the lush valleys between them where the few human kingdoms have built safe cities with walls against the demon hoards. Human kingdoms struggle to survive and grow, always living under the fear of the hellish demons and their twisted spawn. Against such a nightmarish upwelling the likes of which hasn’t been seen for generations, the peoples of Caramea are finally starting to band together.
Genre: Fantasy
Name:
World:

This is a few-shot prompt like the ones before: we give a few examples, and the model generates a new example along the same lines. Here is what was generated on one particular run (every run is different, because the temperature parameter is set above zero, so token selection is randomized according to the predicted probabilities, which yields variability):

Name: Bitterwood
World: Bitterwood is a frozen wasteland of a world, its people and creatures locked in a constant battle for survival against the bitter cold, ferocious monsters, and the harsh elements. The people of Bitterwood are hardy folk who have fought against the cold for centuries, and even the most civilized of cities are small and remote. The people of Bitterwood are united in their fight for survival, and the numerous city-states and townships have joined together to form the Alliance. The Alliance is a peace treaty of commerce and defense. The Alliance’s goal is to unite and secure Bitterwood, and it’s a goal that is long overdue.
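We haven’t published the exact sampling settings used in the game, but a world-generation call looks roughly like the sketch below. The temperature, max_tokens, and stop values are illustrative, using the same 2020-era openai client as the earlier snippets, with world_prompt holding the few-shot prompt text shown above.

import openai

response = openai.Completion.create(
    engine="davinci",
    prompt=world_prompt,   # the few-shot world prompt shown above
    temperature=0.8,       # above zero, so each run samples a different world
    max_tokens=300,
    stop="\nGenre:",       # illustrative: stop before the model starts another example
)
print(response["choices"][0]["text"])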

The new example is analogous to the previous examples. Each of them started the world description with the name of the world, followed by a description of the setting and ended with a description of the factions. GPT-3 picked up on those patterns as it generated the description of Bitterwood. The genre “Fantasy” biases the description to mention ferocious monsters and city-states, rather than the flying cities of the Floating World Fantasy. It is a far more sophisticated analogy than the simple four-term analogies, but the principle involved is the same:

Floating World Fantasy : Anarop :: Fantasy : Bitterwood

This isn’t the only thing happening when GPT-3 generates text, of course. Somehow, these analogical concepts are transformed into grammatical sentences that follow one on the next in a sensible way, at least for the space of a paragraph or two. But analogy seems to be a key source of the rich variety we see in what it generates, and this vein of creative potential is what we are mining to build AI Dungeon, and our other games in development, into something that can be endlessly fascinating.

Notes:

  1. How to improve its deductive reasoning process by generating intermediate results will be the topic of a future blog post.
  2. If you want to read a scientific paper that explores this semantic space approach for understanding BERT, another Transformer model for natural language processing, you can find one here.
  3. You can read analogies like this as saying that “the relation between a bear and a hiker is the same as the relation between a shark and a snorkeler”, or, rearranging, that “the relation between a bear and a shark is the same as the relation between a hiker and a snorkeler.”
