It's not clear to many how GPT-3 is primed to get the right outputs. Let me try to explain it in just a few tweets.
The input to GPT-3 is a single text field, plus multiple knobs that are scalar or categorical. Examples of these knobs are temperature and a toxicity flag. Think of these knobs as degrees of freedom.
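For concreteness, here's a minimal sketch of how the text field and a couple of scalar knobs get passed, assuming the original (pre-1.0) OpenAI Python client. The model name, key, and prompt are placeholders, and the toxicity flag is omitted:

```python
# A minimal sketch, assuming the original (pre-1.0) OpenAI Python client.
# The API key and prompt are placeholders; the toxicity flag mentioned
# above is omitted here.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",           # GPT-3
    prompt="Once upon a time",  # the single text field
    temperature=0.7,            # scalar knob: sampling randomness
    max_tokens=50,              # scalar knob: completion length
)
print(response["choices"][0]["text"])
```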
Then there's the priming part. We know that human cognition can be primed: if you hear the word "mouse" and are asked to name another animal, you will most likely say "cat". https://en.wikipedia.org/wiki/Priming_(psychology)
GPT-3 learns to predict a new sequence of words from the sequences it has seen. As with human priming, there is an order of events. So when you prime your inputs differently, you can expect different outputs.
The way to prime is to provide several examples of the form (c, x, r), where c = context, x = example, r = result. To generate the result you want, you write something like: c x r c x r c x r c x. GPT-3 will fill in the last r.
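Here's a minimal sketch of that c x r layout in Python; the translation triples are just illustrative placeholders:

```python
# A minimal sketch of priming with (c, x, r) triples.
# The translation triples are illustrative placeholders.
triples = [
    ("Translate to French", "Hello", "Bonjour"),
    ("Translate to French", "Good night", "Bonne nuit"),
    ("Translate to French", "Thank you", "Merci"),
]

def build_prompt(triples, final_c, final_x):
    """Concatenate c x r ... c x, leaving the last r for GPT-3 to fill in."""
    parts = [f"{c} {x} {r}" for c, x, r in triples]
    parts.append(f"{final_c} {final_x}")
    return "\n".join(parts)

print(build_prompt(triples, "Translate to French", "See you tomorrow"))
# GPT-3 would complete the missing r, e.g. "À demain".
```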
To reduce ambiguity further, you can add delimiters between the elements of each triple, like: c from: x to: r end. So you write: c from: x to: r end c from: x to: r end c from: x to: r end c from: x to: . GPT-3 will fill in the r, and sometimes the end.
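The same sketch with the delimiters added (reusing the triples from above):

```python
# The same sketch with explicit delimiters, so the model can tell
# where each field of the triple starts and stops.
def build_delimited_prompt(triples, final_c, final_x):
    """Format as: c from: x to: r end ... c from: x to: (model fills in r)."""
    parts = [f"{c} from: {x} to: {r} end" for c, x, r in triples]
    parts.append(f"{final_c} from: {final_x} to:")
    return "\n".join(parts)
```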
This isn't difficult to do. The art, however, is in the examples. What combination of examples gives the best results? If I were trying to have GPT-3 paraphrase my text in the style of Carl Sagan, which of Sagan's words should I use as the r's, and which x's should I design to pair with them?
Each of these x -> r mappings gives GPT-3 subtle hints about what to do. So you can't just blindly offer up examples; they must be good examples of what you want to achieve. That is where the human art comes in!
In addition, each c can be different. So there's a 3-dimensional space of (c, x, r) examples (plus the scalar knobs) that needs to be explored to get the outputs you want.
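A hypothetical way to explore that space, with send_to_gpt3 standing in for a real API call and made-up contexts and examples:

```python
from itertools import product

# A hypothetical sweep over the (c, x, temperature) space.
# send_to_gpt3 is a stand-in for a real API call.
def send_to_gpt3(prompt: str, temperature: float) -> str:
    return f"<completion of {prompt!r} at temperature={temperature}>"

contexts = ["Paraphrase plainly", "Paraphrase like Carl Sagan"]
examples = ["We are made of star stuff.", "The cosmos is vast."]
temperatures = [0.2, 0.7]

for c, x, t in product(contexts, examples, temperatures):
    prompt = f"{c} from: {x} to:"
    print(send_to_gpt3(prompt, temperature=t))
```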
The most useful applications built from this will inhabit a niche of this massive space. They will combine well-designed UIs with strong validation procedures.