Explanation: the article was produced with GPT-3, OpenAI's language generator. GPT-3 is fed text (it takes in a prompt) and extends that text one word at a time. The system is a very good predictor of the next word. It's essentially a super-advanced autocomplete system.
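To make the "advanced autocomplete" point concrete, here is a deliberately tiny sketch of next-word prediction. It is a toy bigram model over a handful of words, not GPT-3: the real system is a transformer trained on hundreds of billions of words with far longer contexts, but the underlying objective is the same, predicting the most likely continuation. The corpus and function names here are my own invention for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus; GPT-3 learns from a vastly larger body of text.
corpus = ("the robot wrote an essay . the robot wrote a poem . "
          "the human edited the essay").split()

# Count which word follows each word. This is a bigram model; GPT-3
# uses a neural network over long contexts, but the training objective
# is likewise "predict the next token".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("robot"))  # "wrote" follows "robot" in every example
```

The point of the sketch: there is no understanding or reflection in this loop, only statistics over observed text. Scaling those statistics up enormously is what makes GPT-3's output fluent, but it does not change the nature of the mechanism.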
What bothers me most about the article is that the newspaper seems to deliberately ignore that many people are sloppy, hurried, lazy readers. They read the headline, perhaps the article itself, but not the italicized note.
The article wrongly raises expectations about artificially intelligent software, and conjures images of AI with 'reflective capabilities'. (See the Twitter comments.)
The explanation of the role human journalists played in creating this article is subtly hidden in the last lines of an italicized text at the bottom of the page. Of course, almost nobody reads that. Judging from the reactions on Twitter, I think I'm right.
The newspaper should have made much clearer that the article was also constructed by humans. The short, decent summary in the editor's note should have been placed at the top of the article.
What bothers me. First: the title is misleading. "A robot wrote this entire article." That is not correct. Fifty (coherent) words were written by a journalist. And it's no surprise that these 50 words are now frequently quoted on Twitter.
Second: GPT-3 generated eight (!) different essays. The Guardian journalists picked the best parts of each essay (!). After this manual selection they edited the parts into a coherent article. That is not the same as "this artificially intelligent system wrote this entire article."
Third, though this is personal taste: the newspaper could have chosen a subject other than 'the dominance of artificial intelligence over humans'.
As it stands, the subject of the article feeds the Hollywood-style fear that this kind of software can reflect on the relationship between people and AI. It cannot.
This article is harmful to the field of artificial intelligence because it raises expectations the technology cannot yet meet. That ultimately leads to disappointment, because the software cannot deliver what is expected. (It also evokes anxiety.)
Don't get me wrong: I am very enthusiastic about the possibilities of synthetic media and generative software. In 2019 I wrote a report titled "Machines with Imagination" (about generative software and GANs) and a report about deepfake technology. (Both in Dutch, sorry.)
I also get excited about what's possible. But let's work together to keep an accurate picture of the capabilities, the limitations, the opportunities and the threats of AI and generative software.
You can follow @JarnoDuursma.