People sometimes think language models work like Alexa: if you ask them a question, they'll try to answer it and you can judge whether that answer was right. But when you ask a model a question, it doesn't automatically know that what you're looking for is a factual answer. 1/3
Imagine you were asked to write something starting with the sentence "Can humans live in plant pots?". The next thing you write could be "No". But it's an odd question, and a more natural continuation might be a fictional story in which a young child has just asked it. 2/3
When we interact with people, what we want from them is made clear either by context (asking in a class) or explicitly (via instructions/examples). If we want to know what language models are capable of, I think they need to be able to reasonably infer what we want from them. 3/3