Houjun Liu

PWR1 Rhetorical Analysis Essay Planning

General Information

Due Date: Sunday, Jan 21
Topic: AI Alignment
Important Documents: Yejin Choi Talk

Claim Synthesis

Quotes Bin

Double entendre which frames the study of AI safety as “intellectual” (in contrast to War)

“And then there are these additional intellectual questions. Can AI, without robust common sense, be truly safe for humanity?”

double entendre: “intellectual” as in interesting, but also “intellectual” as in worth asking

Language of Extremes: AI safety is a “brute force” problem which requires “extreme scale”

“And is brute-force scale really the only way and even the correct way to teach AI? So I’m often asked these days whether it’s even feasible to do any meaningful research without extreme-scale compute.”

Framing AI safety as a war: an us-them dynamic between you and AI

“Perhaps we can … seek inspiration from an old-time classic, “The Art of War,” which tells us, in my interpretation, know your enemy, choose your battles, and innovate your weapons.”

Importing the language of war, raising urgency and stakes. Animating: makes the conflict more strongly dynamic.

Adding scale optimists to the antagonized group

“Some scale optimists might say, “Don’t worry about this. All of these can be easily fixed by adding similar examples as yet more training data for AI.” But the real question is this. Why should we even do that?”

contrast: “some” … “we”, reinforcing the us-them dynamic

We can’t dissect AI models because of big tech

“we are now at the mercy of those few tech companies because researchers in the larger community do not have the means to truly inspect and dissect these models.”

Language of extremes in both directions, taking agency out of the AI too

“AI today is unbelievably intelligent and then shockingly stupid.” (pdf) phrasing: contrast

parallel phrasing: adverb + adjective, “and then” adverb + adjective

Raises stakes: uses the language of extremes

Passive voice takes agency out of “us” and places it in the hands of the enemy

“We’ve been told that it’s a research topic of ’70s and ’80s; shouldn’t work on it because it will never work; in fact, don’t even say the word to be taken seriously.” (pdf) dropped subject; “we” raises urgency by highlighting contrast

Calls to action; parallel structure associates “we” for the first time, surprisingly, with Choi’s own organizations

“We don’t know what’s in this, but it should be open and publicly available so that we can inspect and ensure [it supports] diverse norms and values. … So for this reason, my teams at UW and AI2 have been working on commonsense knowledge graphs” (pdf) repeated use of “we”: building a tribe

Zoomorphize AI as a new “species”, in effect animating it

“We’re now entering a new era in which AI is almost like a new intellectual species with unique strengths and weaknesses compared to humans. In order to make this powerful AI sustainable and humanistic, we need to teach AI common sense, norms and values.” (pdf) anthropomorphizing

Giving AI agency; the second-person “singular” (“you”) makes the killing feel more personal

“And that AI decided to kill humans to utilize them as additional resources, to turn you into paper clips.” (pdf) “humans” … “you”: increases urgency

Sub-Claim Development

Victimize AI: AI is a new species with blunt force that’s teachable

AI has extreme raw power, but is framed as a somewhat anthropomorphized new “species”, which “we” can “teach”. This breaks the audience’s traditional framing of AI as a blind tool and empowers the audience to imagine what “teaching it” looks like. However, Choi uses the language of extremes in both directions, framing AI as “stupid” to victimize it. [She spent much of the talk giving copious examples.] This places the audience in a position of power, wanting to help the subjugated AI.

Victimize the audience wanting to help AI: pitting the audience against the AI developers

After victimizing AI and inspiring the audience to teach it, Choi then victimizes the audience that wants to help. Choi introduces the claim that big tech, at whose “mercy” we sit, disenfranchises the audience’s goal of teaching AI. This establishes an us-them dynamic that Choi continues to develop: passive voice takes agency out of “us” and places it in the hands of the enemy, objectifying the audience, further establishing and antagonizing the two groups, and highlighting how the audience, and by proxy Choi, is disenfranchised.

We are at war, and Choi is the savior

This two-sided dynamic comes to a head with the language of war. AI can kill “you”: the second-person “singular” makes the killing feel more personal, further justifying the “war”. With parallel structure, Choi frames herself and her research as the leader of the audience’s side, wading into the unknown in this war. She frames this war as an intellectual one, which she has the credibility (ethos) to lead.

The Claim

Choi uses the language of antagonism to victimize AI language models and, by proxy, the audience wanting to help the victimized AI, placing both groups in a “war” of disenfranchisement against big-tech model developers. This dynamic allows Choi to justify her research as that of the intellectual savior, empowering humanity’s fight in this “war”.