Here is the corrected link. They seem to have moved it:

https://twitter.com/_jack_poulson/status/1353714228826402818
Schmidt is giving intro.
Paraphrasing: 'There are three big themes [for the next few days]: First we need to organize...for the competition between...institutions...including serious investments for America's digital infrastructure.'
'We *must* field AI technologies for defense purposes...For the first time the commercial sector is better equipped than the government.'
'AI does not have an ideology, but it *does* matter who deploys it...We must stand against authoritarian models of AI surveillance...The United States can deploy AI in a manner consistent with individual values...The 'American Way' if you will'
Schmidt just handed things over to the co-chair, former Deputy Secretary of Defense Bob Work, now a principal at WestExec Advisors (and a member of several corporate boards).
Work is largely framing the next two days as an overview of the report whose early release EPIC won.

See https://twitter.com/johndavisson/status/1352267884614082562
Seth Center is now giving an overview of the Key Judgements in the Introduction.

https://www.csis.org/news/seth-center-joins-csis-senior-fellow-and-director-project-history-and-strategy
China has already been mentioned numerous times -- I wonder if anyone will mention ethics issues relating to deployment of US military AI to the Kingdom of Saudi Arabia, the UAE, or even "the Middle East" writ large.

(I strongly doubt they will.)
Seth Center just handed things over to Mike Garris of NIST (still discussing the intro key judgements).

https://www.nist.gov/blogs/taking-measure/authors/michael-garris
Eric Schmidt just emphasized that the commission should say 'The United States is ahead [of China on military AI], but China is close behind.'
Andy Jassy -- head of AWS -- is now speaking about the 'barriers to entry [being] so much lower' for AI...'not just for China and Russia...but other countries as well'.
Eric Horvitz of Microsoft agrees with Jassy and says that we are only seeing 'the tip of the iceberg' in the usage of AI for cyber operations.
Katrina McFarland -- board member of SAIC -- describes the current AI competition as 'invisible'. Former FCC commissioner Mignon Clyburn agrees with that characterization.
William Mark of SRI International emphasizes how quickly AI is currently developing.

https://www.nscai.gov/about/commissioners/mark
Schmidt just emphasized that AI is fundamentally different because of how easily it 'diffuses' to malcontents, and that the current draft doesn't say this clearly enough.
Chris Darby -- CEO of In-Q-Tel -- is now giving an overview of the 'Emerging Threats' portion of the report.

(For some reason they aren't showing his video feed.)

https://www.iqt.org/about-iqt/#modal-bio-b075e03a-2ba6-49a8-9294-01ba25ca388a
Darby just described the 'broad circulation of personal information' as both 'driving commercial innovation' and posing national security risks.
Darby is now getting into adversarial attacks on US AI systems and calls for government-wide "red teams" to "harden" our own AI.
"Biotech can also have a dark side...the government should increase the profile of...biosecurity...and update the national biodefense strategy to include a wider vision of biothreats...I'm sure Eric Lander will echo those sentiments"
Schmidt asks Darby to talk about the biothreats in an "unclassified setting".
Horvitz adds onto the biothreats discussion by saying that AI is now a 'big part' of driving CRISPR precision.
Katrina McFarland takes a 'positive' spin by emphasizing that biotech can also help protect us from threats.
Gilman Louie -- former CEO of In-Q-Tel -- just made the second reference to the SolarWinds hack and suggests AI and ML could help with the defense.

https://www.nscai.gov/about/commissioners/louie
Clyburn emphasizes that the US *will* live its values but that we should be prepared to uphold our values in the face of nations who do not.

I can't help but wonder if the commission believes the Kingdom of Saudi Arabia and the UAE "uphold US values".
I think it was Darby who -- off video -- dropped the buzzword, asking how much AI can accelerate the time from "bug to drug".
Steve Chien of JPL argues that "deterrence" is much more effective than defense in cybersecurity due to the imbalance.

What constitutes deterrence -- e.g., hacking forward -- seems like a can of worms given that the US believes only it has the right to do so.
Oracle CEO Safra Catz is now giving an overview of the "Foundations of Future Defense" and is emphasizing that the US maintaining its lead over China requires accelerating the adoption of commercial technologies and further funding AI R&D.
Safra is handing things back to Clyburn, who is advocating for a "Goldwater-Nichols Act 2.0".

Clyburn then handed back to Safra, who is handing over to Andrew Moore.

https://en.wikipedia.org/wiki/Goldwater%E2%80%93Nichols_Act
Andrew Moore -- the director of Google Cloud AI -- emphasizes the importance of "Digital Native" companies who incorporate AI into all of their activities.

Presumably Moore is including his own company in this compliment.

https://www.nscai.gov/about/commissioners/moore
McFarland is now emphasizing the necessity for change in the adoption of AI technologies.

She argues that resistance to change -- including from the people involved -- is a current barrier to the adoption of new technology.

https://investors.saic.com/corporate-governance/board-of-directors/person-details/default.aspx?ItemId=fda2a78c-5732-4408-a437-d99a232602cb
McFarland describes "malicious obedience" as a process of resisting change by precisely following requests.

Presumably this is the same as "work to rule"? Kind of rad to hear this mentioned by the commission.

https://en.wikipedia.org/wiki/Malicious_compliance
Gilman Louie says we don't have time to wait for integration and "must disrupt ourselves".

I don't think that phrase could sound any more Silicon Valley meets government.
Catz (Oracle) replies to Moore (Google Cloud) and argues DoD may not have the technical foundation to start incorporating AI.

She then argues that "many adversaries" don't have the "public/private silo" that the US has.

(Interesting framing given the immense degree to which the US government is privatized.)
Yll Bajraktari (NSCAI's executive director) answered a wonky question about funding from a YouTube comment from someone named Andrew, then handed things over to Work (of WestExec Advisors) to discuss Chapter 3: AI and Warfare.
Work says that China refers to AI warfare as 'intelligentized warfare'. (Not sure why that matters?)
McFarland emphasizes that "creating a standard [on AI technology]" before "others reach the table" is critical.

Louie underlines Work's mention of the need for AI to accelerate the OODA (observe, orient, decide, act) loop so the US military can operate "at machine speed". Presumably this is hinting at JADC2 (Joint All-Domain Command and Control).
McFarland argues that industry (presumably including SAIC, which she is on the board of) views the world as global and so they should "come forward" to help lead the conversation on how the US can work with allied nations on AI.
Andrew Moore -- head of Google Cloud AI -- seconds McFarland's (SAIC) recommendation that industry should help lead.

He argues that industry will be particularly useful at accelerating the OODA loop.
Bob Work (WestExec Advisors) is now giving an overview of Chapter 4: Autonomous Weapons Systems.
(CC: @BanKillerRobots)
Work mentions that nations "large and small" are pursuing AI-enabled weapons systems.

Given the central role of drones in the recent Azerbaijan/Armenia conflict, this seems like a valid point.

As is the for-profit export of such weapons systems by the US and its allies.
Work is "very confident in the *extensive* procedures put in place by the United States...but can find little evidence that US adversaries" will have responsibly designed and lawfully used AI weapons.
"Such efforts should and must be led by the United States"
Work argues that the US should publicly clarify that only humans can deploy its nuclear weapons systems and seek for Russia and others to make similar public commitments.
I can't help but notice the contradiction of a committee largely composed of US technology executives setting policy grounded in the assertion that the US is uniquely ethical.

The actions of these very execs represent a relentless drive for profit at the expense of ethics.
Horvitz (Microsoft) is adding one of the final comments, on his concern about "unintended escalations" resulting from AI-enabled systems "in times of stress".

He argues that the US has been pushing for dialogue with China on these issues, but China has not engaged.
Schmidt (former Google CEO) adds onto Louie's (former In-Q-Tel CEO) call for constant testing and validation of AI systems by referring to "emerging behavior" that requires continual testing of integrated systems.
Louie is now answering a question about the precedent set by landmines for autonomous systems. He argues that the US seeks to eliminate indiscriminate autonomous systems through the use of AI and ML.
Andrew Moore -- head of Google Cloud AI -- is saying there will not be "big black box" reinforcement learning systems because, if the system shoots something down, then there needs to be an explanation for why.

(This seems to be referring to DARPA's third-wave AI push.)
Work (WestExec) is circling back to the landmines question and echoing Louie's verbiage that the problem with them is their 'indiscriminate' nature due to a lack of 'target identification'.

Presumably they want to add AI (facial rec?) to landmines to "reduce collateral damage"
Work argues that it is a 'moral imperative' to investigate the integration of AI into such weapons systems.
Horvitz (Microsoft) argues that all AI weapons systems will be "human accountable".

Which raises the question: accountable to whom?
Jason Matheny of CSET is now giving an overview of Chapter 5.

https://www.nscai.gov/about/commissioners/matheny
Matheny emphasizes that the barrier for highly paid AI engineers at major tech companies transitioning into developing military AI for the government is not so much pay as it is the year it takes to get the security clearance.
Matheny circles back to arguing for an "aggressive" pursuit of Top Secret and above security-clearance reform.
Schmidt (former Google CEO) argues for immediately moving as much national security AI work to the cloud as possible.

Presumably that includes his own company's cloud.
Louie (former In-Q-Tel CEO) argues that "adversaries are filling our coffers with disinformation" and that AI systems for detecting it are critical.
Andy Jassy (head of AWS) argues that US forces could develop much more effectively by using a combined system.

Presumably he means through AWS.
Louie (former In-Q-Tel CEO) argues that senior leadership in the IC need to rethink tradecraft for how AI & ML will be a core component.

Until then "we will not move the ball forward".

McFarland (SAIC board) strongly agrees. "It is an adoption and adaption problem"
Matheny (CSET) is answering a question about automation bias by saying that national security staff sometimes under-trust AI, and that this is part of the motivation for explainable AI.

Matheny echoes Horvitz's (MS) suggestion of 'scorecards' for AI models to give a sense of where to trust them
"Getting 17 agencies to act in unison is challenging..."
Work is circling back to the OODA loop discussion yet again.

"Much of the discussion on AI-enabled decisions goes towards decision-making tools...Boyd saw...orientation as the most important step." And the goal was to make the adversary's perceived reality diverge from actuality
Work argues that AI systems are going to make mistakes, but that they will be more accurate than humans.
Bob Work (WestExec Advisors):
"The IC community is the place where AI can probably help the most right now. And doing it fast is important".
Chapter 6 is shifted to tomorrow (starting 3pm ET), and Horvitz (Microsoft) is now giving the overview of Chapter 7 on establishing confidence in AI.
One can't help but notice that the term "whistleblower protections" is not mentioned in the "Accountability & Governance" portion.

The DoD has repeatedly shown that it punishes whistleblowers and there is serious reason to distrust these formal mechanisms for "raising concerns".
Matheny (CSET) just interjected to give Eric Schmidt (former Google CEO) a compliment for his work on the report.
Horvitz (Microsoft) just described the DoD AI Principles as "high level" "aspirations" and emphasized how closely he has worked with the JAIC and DIB.

He hopes that Chapters 7 and 8 actually "get into the details".
As a refresher, here are Chapters 7 and 8's titles, and here is a link to the report obtained a week early by @EPICprivacy:

https://drive.google.com/file/d/1XT1-vygq8TNwP3I-ljMkP9_MqYh-ycAk/view
Horvitz (Microsoft) is now giving an overview of the second of the "get into the details" chapters on the AI Principles, which focuses on civil liberties.
I asked a question in the chat -- 200 characters wasn't nearly enough.
Andrew Moore (head of Google Cloud AI) makes an important point: principles must be written so that games can't be played over whether or not a system 'counts as AI' in order to dodge the ethics principles entirely.

Which is interesting, bc Google has done exactly that!
(More specifically, Google leadership -- namely Jeff Dean -- directly argued to me that the company's AI Principles did not apply to Search because it is a product and not AI research. Nevermind that Search heavily uses AI.)
Yll is now answering the submitted questions (including mine).
Or, rather, Yll picked the questions and Horvitz is answering them.
Horvitz says that the committee had a discussion "just last week" relating to my question on X-Mode et al. He says they looked very closely at the Carpenter Supreme Court decision.
Horvitz says it is not clear how courts will apply Carpenter for private commercial data that reveals the locations of US persons. "It is a discussion area."

"This will be an ongoing discussion in our society...I expect this to get up to the Supreme Court."
So, in summary, Horvitz dodged the actual question on the application of the AI Principles commitment to auditable data and fell back to legality.

So, pray tell, what is the point of the AI Principles?
The discussion has now moved on to Chapter 9: A Strategy for Cooperation and Competition.

I had to step out for a second but believe -- based on the voice and lack of video -- that this is In-Q-Tel CEO Chris Darby.
A lot of issues are being punted to tomorrow's meeting. Matheny just quickly responded to this question.
We are three hours in and I have to stop the live-tweeting marathon to take a phone call.

I'm still thrown off that a direct question on the application of the AI Principles was instead answered with 'the Supreme Court will decide whether it is okay,' with no mention of their use in the meantime.
The only conclusion can be that the AI Principles exist entirely as a means of asserting that the DoD is ethical, to help convince tech workers not to dissent, and not to actually allow for litigation of clear violations.
Everything just wrapped up anyway and picks up tomorrow at 3pm ET.