#coling2020
Should #GPT3 have the right to Free Speech?

Panel with Robert Dale, Emily M. Bender (@emilymbender), Pascale Fung (@pascalefung), Christopher Potts (@ChrisGPotts)

Best attempt at summary, corrections welcome.
/1
Intro remarks:

Bender:
- the central question of the panel is misguided, and the discussion about robot rights distracts from real issues [REF I DIDN'T GET].
- is it too late to do anything? No. In the real world things don't stay "solved".
/2
Bender: The main risk of GPT3 is its ability to generate seemingly coherent text that looks as if it were written by someone who takes responsibility for its trustworthiness. We are already struggling to find reliable information, and this will make things worse.
/3
Fung:
- we need regulation, but it's more productive to target applications rather than development;
- individual researchers are free to choose whether to work on these things, and if they choose to, they should be free to voice concerns and try to improve things.
/4
Potts:
- GPT3 is just the first of many future technologies that will have much the same issues, so the current discussion is important in that it starts to test how current law and social norms are and will be interpreted and applied.
/5
Q: What regulations should exist?
Fung: we don't really have any AI regulations yet. There are some around military use that could serve as a starting point.
Bender: we need regulation for transparency wrt training data, and for preventing machine-generated output from being presented as organic.
/6
Fung: regulations shouldn't "overfit": they should neither stifle progress nor be too narrowly applicable.
Potts: social norms might *evolve* in the direction of regulation, so it's really important to get them right.
/7
Q: There will always be bad actors. How can we educate the public about AI issues?
Potts, Fung: need more outreach, proactive advocacy
Bender: it's really a civic duty for all of us; we need to be prepared to do it even though it costs us some research time.
/8
Q: Is GPT3 a seismic shift in ethical AI, or will it pale next year in comparison with something else?
Bender: Ethical AI has been happening for a few years and was always centered on big data. GPT3 just landed in the middle of it with lots of PR.
/9
Q: How to temper the hype?
Fung: we need to educate the media: the journalists and science writers. Journalists seek drama, stories that sell. We need to learn how to talk to them. Do NOT delegate that to the university PR team; there are precedents of hype arising from their misrepresentation.
/10
Q: Who should train the researchers to talk to the media?
Bender: ACL could do at least something. This year there was an initiative to connect authors with journalists; this should be continued. Also, we could use a tutorial.
/11
Final words:
Bender: What is called "AI" is really pattern-matching.
Potts: We're having real impact => this is no longer an academic exercise; we need more self-reflection.
Fung: We don't need "human-level" AI, we need it to be better: more transparent, diverse, inclusive.
/12