Yes
@OpenAI
new text-to-video is impressive but here are 5 questions that journos & the public should be asking--relentlessly.
1. When will you release the datasets used for training this system so we know whose data was captured w/o consent & in potential copyright violation? +
Finally shrinking the pile to the point where I can thoroughly read this interesting essay on "illusions of understanding in sci research" due to AI.
Folks like
@emollick
probably should read it (perhaps he already has). /1
Shocking that NATURE published this bunk.
Writing + illustration aren't outputs that can be isolated from the human contexts in which they become meaningful. Ppl aren't "text generators."
Here's what happens when you underfund HUMANITIES & reduce "ethics" to narrow metrics
Oh great, that preprint is now a Nature report🫠
Y'all, this makes no sense.
You simply can't compare the carbon emissions of people and objects.
An individual’s total carbon footprint estimate can't be attributed to their profession.
See my rant here:
None of us has read this yet but it seems intuitively right on onto-epistemic grounds. The world is too varied for reliable curve-fitting no matter how large the dataset.
As Shumailov et al. note, "probable events poison reality."
More classically, "all models are wrong."
Folks, I'm concerned abt growing number of sources implying that chatbots & bot/search kluges like Copilot are appropriate for student research.
THIS IS FALSE & DANGEROUS /a
@OpenAI
3. What is the complete carbon footprint for the training and running of this model? How much energy and water, and with impact on which parts of the world? How were the resources for chip-building obtained? +
@OpenAI
2. What were the conditions of labor for the human annotators who worked on this system? What were they paid? What toxic content were they exposed to at industrial scale and factory pace? +
@OpenAI
4. What is the plan for preventing this tool from being used for politically and personally harmful deep fakes--including non-consensual pornography?
What is OAI's plan to preclude harm from such fake content and to compensate victims for damages? +
@BBCTech
Newsflash: these folks are just hyping their products whether as doomers or boosters. Major distraction.
Climate change poses existential risk AND real harms every day of our lives. Cloud services now emit more carbon than the airline industry. Chatbots will make this worse.
@OpenAI
5. OAI's image models are already larded w/ bias: stereotypes about ppl of color, women, LGBTQ ppl, cultures/languages that are not well-represented in scraped internet data.
What's the plan for addressing that skewed vision of the world w/o further exploitation of human labor?
Wow, didn't expect this from Hinton who quite recently was still boosting. "Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work."
Note the details below. Google is paying a 5-figure bounty to news orgs that produce a fixed amount of AI-generated content. That's how desperate the industry is to promote "AI."
The nightmare begins — Google is incentivizing the production of AI-generated slop.
If you are a news outlet that has accepted this meager deal, and especially if you are publishing AI-generated articles without disclaimers, you should be deeply ashamed.
Must reading for those who recognize that for-profit AI is neither technically nor politically a sound recipe for student learning or democratic thriving.
Surprise, surprise, studies show that LLMs don't reduce doctor "burnout" and pose serious harms to patients.
Hardly a shocker, but always good to see the rubber of serious research hit the road of grandiose hype.
Once again
@nytimes
undermines its integrity as a newspaper of record by providing misleading reports to the general public, complete w/ images of automated mannequins.
Educators, writers, artists, citizens, friends. Can we all agree that "AI" is a BS marketing term and that (at least as far as text-generating LLMs go) we should stick to "bot" to describe the commercial product du jour?
Another new low for academic publishing, courtesy of systems sometimes talked up as if they are major boons for authors who want help with their prose!
We have no desire to antagonize
@nytimes
's reporters but they repeatedly make rookie errors on AI. Below
@CadeMetz
writes: AI "may also generate false or misleading information, much like people do."
A bit like saying that Teslas sometimes bump into objects much like people do +
If anyone feels a little concerned about what comes out of this arrogant fella's piehole--esp where it concerns how "AI" will lift the poorest ("or whatever you want to call it")--please join us in a collective and interdisciplinary project of "debate and reconfiguration"
Sam Altman says increasing AI capabilities will require some change to the social contract, "the whole structure of society itself will be up for some degree of debate and reconfiguration"
MIT Tech Review _almost_ gets it right. "Amazing" is hype for a model we know nothing about (remember that Gemini clip that was heavily edited?).
Suggestion for revision:
OPENAI IS PROMOTING WHAT LOOKS TO BE AN IMPRESSIVE NEW GEN VIDEO MODEL. BUT LOTS OF UNANSWERED QUESTIONS
Permission granted at long last to share this
#CriticalAIliteracy
document - Advice for the New Semester: briefly defines AI/LLMs, critAI literacy, ac integrity statements (suited to RU code of conduct) /1
So, some students use ChatGPT to write their term papers. AI is used to try to determine whether these papers were written by students or AI. Teachers then use ChatGPT to grade all the papers. For those students who did write the papers themselves, their
I'm unpleasantly surprised at how many students and faculty still seem to think ChatGPT searches and indexes all the internet's information in real time to produce its outputs, or works like an interactive and more sophisticated search engine to retrieve factual information.
Note how Witherspoon’s language mirrors exactly the sort that’s been posted by blue check AI boosters for months.
It's become a mantra that's exceptionally useful to AI companies and influencers--pay for AI services, read the newsletters, or lose your job! AI-generated FOMO
No
@emollick
. YOU need to decide to remind yrself--every day & as often as possible--that yr experiences teaching at Wharton do not make you an authority on the world's educators and learners.
Your lack of epistemic humility in speaking for "us" is breathtaking.
These folks want to take the current structures of education and overlay digital tutors. This would be a disaster. We should be taking the opportunity to rethink this from values on up.
@doctorow
@jathansadowski
@emilymbender
@chirag_shah
@TheAtlantic
"Google laid off 12,000 staffers to please a private-equity “activist investor” — in the same year, it declared a $70b stock buyback, extracting enough capital to pay those 12,000 Googlers’ salaries for the next 27 years. Google is a financial company with a sideline in adtech"💯
@Dorianlynskey
Have you listened to the interview? He's only saying that Corbyn won the biggest Labour swing in generations. I don't think Orwell would have been impressed by the false implication that Chomsky is a conspiracy theorist--or by the yelling at a calm 94-year-old.
#CriticalAI
invites you to contribute to
#AIHypeWallofShame
, a partnership w/ the
@DAIRInstitute
.
Our goal is to document AI-related media that falls into the hype trap, misleading the public.
Write to us at to share ideas!
There's plenty of published research to this effect. 18% of GPT-4 citations are fabricated and 24% contain errors. And that DOESN'T TAKE INTO ACCOUNT THE FACT THAT THESE MAY NOT EVEN BE THE CITATIONS FROM WHICH THE INFO IS DRAWN. /d
This isn't a test of the ability to deliver babies, perform surgeries, or treat mental illness.
Board scores concern general medical knowledge--which doctors look up in books and databases when they need it.
Please highlight the limitations of benchmarks when circulating
@emollick
Sorry
@chrmanning
, the below are dubious claims. LLMs DON'T provide "evidence" that the brain is computational. OTC, the fact that LLMs simulate human-like language w/o human-like understanding suggests that if the brain is computational it must also be something else! /a
Re-upping a piece from last year by
@hamandcheese
on LLMs and language meaning:
“I see the success of LLMs as vindicating the use theory of meaning, especially when contrasted with the failure of symbolic approaches to natural language processing.”
I don't think the issue is that AI feedback is formulaic, but rather that it's a simulation of feedback. Because it's just a guess about what might be useful, it borders on gaslighting.
I'm a professional writer w/ yrs of experience; I've gotten some pretty strange AI feedback.
Sometimes people assume that AI writing feedback can only push students to conform to formulas (i.e. tell them a thesis is missing). But we can prompt AI to ask clarifying questions, offer encouragement, or point to ideas that could be further developed.
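For instance, here is a minimal sketch of how one such feedback prompt might be sent via OpenAI's Python SDK (the prompt wording, model name, and draft text are illustrative assumptions, not a recommended setup):

```python
# Minimal sketch: prompting a model for clarifying questions and
# encouragement rather than formulaic "your thesis is missing" feedback.
# The system prompt, model name, and draft are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft_text = "My essay argues that social media reshapes how students read."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Do not grade or rewrite the draft. Ask three clarifying "
                "questions a curious reader would ask, offer one piece of "
                "encouragement, and point to one idea worth developing further."
            ),
        },
        {"role": "user", "content": draft_text},
    ],
)
print(response.choices[0].message.content)
```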
Maybe so; but
@OpenAI
's signature products aren't "creating" intelligence or longevity; they're consuming HUGE amts of energy and water; and creating "abundance" only for
@sama
and a tiny elite of investors.
using technology to create abundance--intelligence, energy, longevity, whatever--will not solve all problems and will not magically make everyone happy.
but it is an unequivocally great thing to do, and expands our option space.
to me, it feels like a moral imperative.
#CriticalAI
is partnering w/
@DAIRInstitute
for a series on
#AIHype
(our new "wall of shame" is being reno'd). Would anyone like to take on the latest piece from the Atlantic (by Lowrey on everyone's favorite chatbot) for us? +
Please tell me that Google didn't actually use this image for introducing Gemini... And since it's apparently real, whose really hackneyed, condescending, and sad idea was that?
We need more positions like this one. We have no evidence that "AI" in the classroom does what its enthusiasts claim for students; we know it surveils them, makes errors, contains biases, normalizes etc. Thanks Ben Williamson and
@Bali_Maha
"What if, instead of being generative of educational transformations, AI in education proves to be degenerative—deteriorating rather than improving classroom practices, educational relations and wider systems of schooling?" 1/2
I teach
@Abebab
's work in my courses at both undergrad/graduate levels including her crucial research into misogyny, algorithmic colonization, and relational ethics. I've yet to teach anything by Ng, who simply has no footprint whatever in these social dimensions. /1
I hope ppl can understand how much the below rhetoric+media is a punch in the stomach to tech minorities. Reclaiming minorities' content as their own while poorly presenting the nuances. Glad for their Entirely New Idea of *articulating concrete scenarios*. Am furious.
Yet again,
@nytimes
shows need for an editor who understands AI!
Granted
@DouthatNYT
slants conservative.
But there's no sign, when he attributes Gemini's image problems to "ideological correctness" in Silicon Valley, that he has any idea of earlier problems like these: /1
#CriticalAI
pleased to share a new article in
@PublicBooks
by Lauren Goodlad & Samuel Baker. We'll be tweeting out a few choice moments later but here it is for those eager to know why NOW THE HUMANITIES CAN DISRUPT "AI"
The "godfathers" are trying to silence dissent, but many are getting the memo anyway. As several folks I admire have been saying for years, the real "AI" problem is concentration of power, not runaway robots. This good editorial in NYT makes the case.
@EdLatimore
@David_N_Frank
You can believe it b/c it gratifies yr core intuition: men are only interested in women for sex + the desire for sex is men's primary "ambition."
It's a cliche & gross simplification that could prevent non-erotic, non-transactional relationships w/ half the human species.
Could not agree more with
@RosenzweigJane
and
@BenPatrickWill
that the "inevitability" narrative is technodeterminism at its worst. Anyone who has read Watters's TEACHING MACHINES tour de force knows that educ hype has been rampant and unsuccessful since the 1920s! /1
Ever since I read this thoughtful piece earlier today, I've been thinking about how when ChatGPT was released, so many conversations about AI in education began with the premises that integrating AI in the classroom was inevitable, that students need to learn with it to be /1
Finally reading
@mer__edith
,
@davidthewid
, and
@sarahbmyers
's recent article on the meaning of "open" in so-called "open" AI, including "OpenAI"; link here.
Pausing to commend this comment on the term "AI"
In the effort to curb misunderstanding and
#AIHype
on the topic of language models (LMs), we're circulating a tweet thread to offer a baseline understanding of how systems such as OpenAI's GPT-3 work to deliver sequences of human-like text in response to prompts. /1
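To make "how systems such as GPT-3 deliver sequences of human-like text" concrete, here is a toy sketch (our own illustration with a made-up probability table, not OpenAI's code): the model repeatedly predicts a probability distribution over the next token and samples from it.

```python
# Toy sketch of the core LLM generation loop: predict next-token
# probabilities, sample one token, append it, repeat.
# A real LM computes these probabilities with learned transformer
# weights; this hard-coded table is a stand-in for illustration.
import random

next_token_probs = {
    "the":   {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "cat":   {"sat": 0.6, "ran": 0.4},
    "dog":   {"sat": 0.5, "ran": 0.5},
    "model": {"sat": 0.1, "ran": 0.9},
    "sat":   {".": 1.0},
    "ran":   {".": 1.0},
}

def generate(prompt_token, max_new_tokens=5):
    tokens = [prompt_token]
    for _ in range(max_new_tokens):
        dist = next_token_probs.get(tokens[-1])
        if dist is None:  # no known continuation for this token
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat ."
```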
Let's take this even further. "AI" = an invitation to fall asleep at the wheel. As we said in our recent thread abt publications in which authors included synthetic content: "This is your brain on auto-pilot."
(What the editors of those journals were doing is another matter...)
I am curious why it's important to have a statistical abstraction of training data on such questions as opposed to simply thinking this through for yourself. To me it seems like a recipe for average thought, as in "There, 'AI' has now done it for me."
A favorite AI feedback prompt excerpt: "Help the writer see what ideas they might need to develop, explain, or further support... [D]escribe what questions a reader will likely have and what kind of further detail they might be interested in."
Reich's commentary on the shady billionaires now contemplating the potential purchase of TikTok.
His point: "The real issue here isn’t whether China or some American billionaire should own these platforms. It’s how to make them publicly responsible, regardless of who owns them."
This BLOOMBERG article got some attention last June but is worth reviewing for teaching
#CRITICALAILITERACIES
: it demonstrates how simple it is to probe text-to-image models for their outrageous bias. /a
#CRITICALAI
excited to share a sneak preview of the TOC for our DATA WORLDS special issue. Ed. by
@KatherineBode
& Lauren Goodlad.
Review eds.
@dan_sinykin
& Praseeda Gopinath
They rightly center the problem of "epistemic risk" not only b/c LLMs are sometimes wrong but because those who use them are prone to OVERESTIMATING THEIR UNDERSTANDING OF THE WORLD: producing more while understanding less. /4
GREAT to see so many educators taking an active stance on
#chatbots
and chucking the
#technodeterministic
passivity. 🧠
WOOT.
#CAI
's upcoming special issue should add to this emerging conversation.
But today I look at
@lfurze
's most recent blogpost. /1
Completely agree - and now _everything_ has become AI. If you have a digital sensor in your car that tells you when you're running out of gas, that's "AI"
The
#1
tech policy question right now is: Why are so many people who are generally brilliant on tech policy captured by the industry PR hype wave and talking about "AI" instead of talking specifically about LLMs, image generation, or whatever they specifically want to discuss?
Strong paper that urges us to get past the self-interested rhetoric on "openness" and demand actual transparency, reusability, extensibility. By
@davidthewid
@sarahbmyers
@mer__edith
Nice column from
@JuliaAngwin
following on the GPT-nothingburger that OAI launched on Monday.
Could not agree more that we should be investing our resources--in education & much else--in MUCH better ideas and technologies.
A new way to confuse gullible people into believing that a pre-trained model that guzzles energy/water is a good way to get information about their world...
People: you are still better off searching the internet.
ChatGPT can now see, hear, and speak. Rolling out over next two weeks, Plus users will be able to have voice conversations with ChatGPT (iOS & Android) and to include images in conversations (all platforms).
It is shockingly bad. Any librarian should quail, should renounce librarianship, before recommending this deceptive pseudo-research infrastructure to students or instructors doing research. Any instructor who recommends this tool is courting pedagogical malpractice. /g
That's why expecting a student to gather sources of information from bots is absurd. It implicates students in the system's own non-attribution (a/k/a plagiarism) of actual sources in the training data, and in their echoing phony (& often made-up/incorrect) citations. /c
Does anyone else find that
@Microsoft
's OneDrive is insanely invasive in Windows 11?
One constantly needs to FIGHT AGAINST this creature in order to maintain an up-to-date hard drive.
Huge waste of my time and energy.
We are delighted to announce the first event in CriticalAI's inaugural series: a keynote talk from
@mer__edith
on Friday, 2/12! The event is free and open to the public, and pre-registration is open!
None of us has read this study yet; nor have I seen any discussion of it, but it looks to me like an important finding. Thoughts from linguists and translation experts?
Bots CAN'T adduce the sources of info w/in their training data; so the "blurry" modeling of these (unknown) sources is what produces the outputs on which post-hoc searches are conducted /b
And another one!
Folks, Critical AI has an editor who reads everything, a Managing editor who reads everything, and an excellent editor at
@DukePress
who reads everything.
We are not enshittified!
It's always good to see MIT
@techreview
reporting in a way that contributes to
#criticalAIliteracy
, in this instance by explaining what LLMs do and don't do. /1
@Dorianlynskey
Yes, but the "enormous victory" refers clearly to a "swing"/"gain"--taken out of context, including by you, & that's not cool no matter what one thinks of his position on other matters.
He may be sad & delusional on 1k other points & still NOT be on this particular claim. +
If a software or hardware name contains the term "neural"--neural processing unit (NPU), deep neural network (DNN), etc.--that doesn't mean it is the same as its biological counterpart.
The misconception is common due to hype like this, courtesy of Samsung Electronics:
As promised, we're following up the pre-print of "Humanities in the Loop" w/ "DATA WORLDS: An introduction" co-authored by
@KatherineBode
& Lauren Goodlad. Both essays are from the forthcoming October 2023 issue of CRITICAL AI, a new
@DukePress
journal. /1
Must reading from
@_KarenHao
on the question of generative AI's environmental impact.
This computation-heavy technology is significantly more energy- and water-intensive than the tools it aims to replace.
A big question looms over generative AI: what really is its impact on the environment? I spent months investigating a single campus of Microsoft data centers in the Arizona desert - designated in part for OpenAI - in an attempt to find out. Thread.
I grade too many AI submissions every day. It makes me more sad than angry. I'm working on ways to address it.
By using AI to think for us, we stagnate; there are no real new ideas and insights, just regurgitated, predictable reflections of our previous experiences and bias.
I've begun my own probing experiments on articles I know well, published after 2022. Copilot's info is shallow, pervasively wrong, or misleading.
It cites content it can't access as if it can.
It pretends it can help me to write a paper on something it cannot access /e
/1 As of June 30, 2023, 1 in 10 doctors use ChatGPT in daily work, and many patients self-diagnose using LLMs.
But a recent
@StanfordHAI
study shows that
#genAI
poorly substantiates medical claims even as
#FDA
struggles to regulate these errors.
Thanks to
@biblioracle
I'm looking at
@sama
's latest efforts to normalize his favorite product in its favored domain: education. While businesses understandably think 2x before shelling out for unreliable tech that could land them in court, what has higher ed got to lose? /1
As
@emilymbender
points out, "writing" is here abstracted from its primary function as a means of human communication & regarded instead from some Olympian remove as an activity that simply gets performed: whether by "AIs" or humans makes no difference. /1
I am contacting you because I am interested in discussing this subject with someone who is willing to listen. I have contacted Turnitin's support...and they refuse to provide me with any help. I initially took this up with my professor, however, he is unwilling to listen to me.
This is a disturbing article, worth circulating. But
@BBC
's definition of AI, which appears w/ the article, is one of the worst I've seen.
@alexhanna
,
@emilymbender
I'm guessing would call this
#AIHell
!
BBC - please get a scientific definition of "AI"!
Just thinking about how
@random_walker
's thoughtful meditation on the hasty sharing of a flawed paper on crowdworkers' use of LLMs to perform their "human" labor now precedes this week's hyping of a deeply flawed, ill-conceived paper about LLMs passing MIT math tests at 100% /1
"If there isn’t enough pushback & soon...we’ll continue heading down a “hype-filled road" where "power is entrenched & naturalized under the guise of intelligence & we are surveilled to the point [of having]...little agency over our individual & collective lives.”
@mer__edith