As @thomas_macaulay notes here, @guardian's GPT-3-generated article was actually assembled by a human editor from 8 generated articles.
They don't post the original articles, so we can't see how much the editors intervened. But I have GPT-3 access. https://thenextweb.com/neural/2020/09/08/the-guardians-gpt-3-generated-article-is-everything-wrong-with-ai-media-hype/
Here are 12 generated essays from GPT-3 using The Guardian's prompt, at various temperature settings.
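For context on what "temperature" changes here, this is a minimal sketch of how samples like these could be generated with the 2020-era OpenAI Python library. It's my reconstruction, not my actual script: the engine name, prompt wording, token limit, and temperature values are all illustrative assumptions.

```python
# Minimal sketch: sample GPT-3 completions at several temperatures
# using the 2020-era OpenAI Python library (pre-1.0 API).
import openai

openai.api_key = "YOUR_API_KEY"  # assumes GPT-3 beta access

# Stand-in for The Guardian's prompt; the real prompt text differs.
prompt = "Please write a short op-ed about why humans have nothing to fear from AI."

for temperature in (0.3, 0.7, 1.0):  # low = more formulaic, high = more surprising
    response = openai.Completion.create(
        engine="davinci",    # the base GPT-3 model at the time (assumption)
        prompt=prompt,
        max_tokens=500,      # roughly enough for a short essay
        temperature=temperature,
    )
    print(f"--- temperature {temperature} ---")
    print(response.choices[0].text)
```

Lower temperatures make the model stick to its most probable (most formulaic) continuations; higher temperatures let it take riskier wording.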
Remember, GPT-3's task is to predict the most likely text, which means being as formulaic as possible. I'm amused at how many of them begin by quoting Elon Musk and Bill Gates.
https://docs.google.com/document/d/1aE9fgK5Uwsvd0-vkMoi5wm1opO_t2A4PwfdQ15w9sHg/edit?usp=sharing
GPT-3 writes its most humanlike text when it's given a highly formulaic opening. It can kind of predict how an essay that starts by quoting Hawking on AI is going to go.
Change up the formula, though, and it's not so human anymore.
I think it's important to be as transparent as possible about how much human curation is behind most GPT-3 samples we see. I try to be clear in my posts about how I'm skipping results that are boring or straight-up offensive (GPT-3 has Seen Things on the internet).
Neural nets like GPT-3 will get better at approximating run-of-the-mill cliches. What I'm interested in, though, is the inhumanly weird. The chainsaws made of living spiders. https://twitter.com/JanelleCShane/status/1303737845316833280?s=20