New paper! We evaluated 14 models trained on 400M & 2B samples on classification, using CFD as a probe.
As training data increased, so did the probability of misclassifying human images as offensive classes such as criminal
Paper:
Code:
ChatGPT can now create Mind Maps.
No more wasting hundreds of hours making visuals for studying or simplifying complex ideas.
Here’s how to do it for free in a few seconds:
Reverse racism isn't a thing. If you think me pointing out racism in academia is racist, it's because white privilege has taught you not to recognize structural racism and discrimination. It's because your privilege rests on not understanding it.
message from my mum who lives in Ethiopia: URGENT, CALL ME!
me: what is it, mum?
mum: I saw the floods on the news in Thailand. did you survive?
me: i live in Ireland
m:
me:
m:
me:
m: you know how similar they sound to normal people. don't you dare be disrespectful
me: sorry mum
"trustworthy", "smart" & "privilege" are not things that can be read off faces. this is nothing but a form of machine-aided physiognomy that will be used for insidious purposes and will end up harming those who don't fit social and historical stereotypes
longtermism might be one of the most influential ideologies that few people outside of elite universities & Silicon Valley have ever heard about. as a former longtermist, I have come to see this worldview as the most dangerous secular belief system in the world today
I was in the process of being hired (part-time) at a Scandinavian university towards the end of last year & experienced one of the most distressing & draining cases of racism
(will not name the university, department or individuals to protect those that fought for me)
thread
1/
Today I asked a new cohort of about 30 ML PhD students to assume they were given these tasks by their employers and how they would respond. These questions raised heated debates 🔥🔥🔥 and the class went on way longer than anticipated
over a year ago, i commented on how big tech might take African language datasets, collected and curated by underfunded and overworked African researchers and i was dismissed as 'alarmist'
1/
How to make a racist AI without really trying
"... the sentiment is generally more positive for stereotypically-white names, & more negative for stereotypically-black names."
I urge you to read this if you do sentiment analysis of any sort.
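The mechanism the quoted post describes can be sketched in a few lines. This is a toy illustration only: the scores and "embedding" below are invented stand-ins for what a real model learns from web text, where names absorb the associations of the words they co-occur with.

```python
# Toy sketch of how sentiment models leak bias onto names.
# The 1-D "embedding" scores below are invented for illustration;
# a real system learns them from web-scale co-occurrence statistics.
embedding = {
    "great": 0.9, "terrible": -0.9,
    # names pick up scores from the text they appear near
    "emily": 0.3, "shaniqua": -0.2,
}

def sentiment(sentence):
    """Average the learned scores of known words -- names included."""
    words = [w for w in sentence.lower().split() if w in embedding]
    return sum(embedding[w] for w in words) / len(words)

# identical sentences, different names, different scores:
print(sentiment("My name is Emily"))     # 0.3
print(sentiment("My name is Shaniqua"))  # -0.2
```

Nothing in the scoring function mentions race; the bias rides in entirely on the learned word scores, which is why it is so easy to build a biased system "without really trying".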
3 weeks ago, the LAION-400M dataset (now a billion+ samples), the first image–alt-text pair dataset of this scale, was released.
@vinayprabhu
,
@MannyKayy
& I dug into it
Long thread 1/
Warning: paper contains NSFW content that may be disturbing, distressing &/or offensive
📢New paper out in Artificial Life journal 📢
This paper is central to my PhD thesis: it brings together embodied & enactive cog sci & complex systems to argue that ML predictive systems are scientifically problematic and ethically dubious.
we do. we wrote about 2 major undersea cables in Africa owned by Google & Meta explaining how they 1) physically follow the transatlantic slave trade route & 2) ideologically constitute a new form of digital colonialism. our paper was rejected cuz it doesn't reference Western lit
My paper "Algorithmic Injustice: towards a relational ethics" just won the
@black_in_ai
Best Paper Award at
@NeurIPSConf
and I'm shook! Absolutely speechless!!!
#NeurIPS2019
me & my collaborators have done the most extensive research on the LAION datasets (3 academic papers & the first to investigate the dataset in 2021, showing misogyny, pornography, & malignant stereotypes)
yet, the Stanford study has not cited us once. this is academic misconduct
big breaking news: LAION just removed its datasets, following a study from Stanford that found thousands of instances of suspected child sexual abuse material
Nothing scares me as much as seeing naive engineers with no knowledge of structural injustice, pervasive power asymmetries, or conservative and racist history of the field of AI, being endowed with the power to make tech that infiltrates the social sphere.
"Computational and cognitive sciences are built on a foundation of racism, sexism, colonialism, Anglo and Euro-centrism, white supremacy, and all intersections thereof"
New preprint from
@o_guest
and yours truly.
QUESTION: If you had the power to ban just one thing from cities in order to make them MUCH better, OTHER THAN CARS (too easy), what would that one thing be?
Every tech-evangelist:
#GPT3
provides deep nuanced viewpoint
Me: GPT-3, generate a philosophical text about Ethiopia
GPT-3 *spits out factually wrong and grossly racist text that portrays a tired and cliched Western perception of Ethiopia*
(ht
@vinayprabhu
)
The tech space is full of men who have no idea of the history of their field, leading to stupid comments like this... and every now and again, you have men like this who show the true colour of the field by saying things out loud
Some exciting personal news: it's now official that I will be joining the DeepMind ethics team in London as Ethics Research Intern in January for a number of months.
I'm thrilled to be selected as one of UN's inaugural AI Advisory Body to support the international community’s efforts to govern artificial intelligence.
Today,
@UnitedNations
Secretary-General
@antonioguterres
launched his AI Advisory Body.
The Body will focus on:
🔹Risks and challenges of AI
🔹 Enablers & opportunities for the
#SDGs
🔹 International governance of AI
Find out more :
Artificial Intelligence is one of the most powerful tools of our time, but to seize its opportunities, we must first mitigate its risks.
Today, I dropped by a meeting with AI leaders to touch on the importance of innovating responsibly and protecting people's rights and safety.
gentle reminder: all large language models are good at is predicting the next word in a sequence based on previous words they've seen. that's all. there's no understanding of meaning whatsoever
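The "predict the next word" claim can be made concrete with a toy model. This is a deliberately minimal sketch using bigram counts on an invented corpus, not a neural network; real LLMs condition on long contexts with learned weights, but the task is the same: score candidate next words, emit a likely one.

```python
# Minimal sketch of next-word prediction: a bigram model built from
# raw counts. Corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word`."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- 2 of the 4 follow-ups to "the"
```

At no point does the model represent what a cat or a mat *is*; it only reproduces frequency statistics of its training text, which is the point of the post.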
do you know what's more dangerous than bad AI? the current culture that promotes blind faith in AI. people just lose their sense of reasoning and critical thinking when presented with claims of AI being able to do this or that, even the seemingly sensible ones
I'm begging tech bros to take basic lessons on human brains, behaviour and cognition.
When someone starts with "the brain is just...", you know it is a gross simplification but this one is catastrophically wrong too
"The vast majority of computer vision research leads to technology that surveils human beings, a new preprint study that analyzed more than 20,000 CV papers and 11,000 patents spanning three decades has found."
thanks
@404mediaco
for covering our paper
@bodamianrapsody
@evan_greer
"this is no different from someone seeing you from their front window" he says watching a video of a kid being broadcast to the world
Dear men of twitter,
This platform would be a much more pleasant space if you could refrain from making irrelevant, condescending, and pointless responses to every tweet women post.
Thank you!
Preprint for our paper ‘Robot Rights? Let’s Talk about Human Welfare Instead’ is up on arXiv.
@theblub
& I argue not just to deny robots ‘rights’, but to deny that robots are the kinds of things that could be granted rights in the first place. 1/
If senior professors in positions of power can outright attack me like this and alter my career path, just imagine the trauma, obstacles and frustration other (less outspoken/those who prefer to remain quiet) Black scholars go through as they navigate academia.
End/
Google has agreed to settle a $5bn class action lawsuit for invading the privacy of users by tracking them even when they were browsing in "private mode".
You might think it's just one person, but he is a symptom of a much larger rotten ecology that is academia, esp historically white-dominated depts
Given the powerful position he holds, it pains me to think that things are unlikely to change & the environment will remain toxic
10/
it's astounding how much of current "algorithmic (un)fairness" literature assumes there's an uncontested "fair" dataset/representation/ground truth out there
let's ditch the common narrative that "AI is a tool that promotes and enhances human prosperity" (whatever that means) & start with the assumption that AI is a tool that exacerbates inequality & injustice & harms the most marginalized unless people actively make sure it doesn't
Dear ML/AI community,
DO NOT research autism without autistic people
DO NOT research disability without disabled people
DO NOT research race without people from historically marginalized races
DO NOT research gender without intersectional feminist theories
Thank you!
just processing the news that our paper "The Values Encoded in Machine Learning" has been awarded the best paper at
#FAccT22
and absolutely speechless
grateful for the team 💜
there is no such thing as brain correlates of homosexuality. this is unscientific, with disastrous implications in countries where homosexuality is illegal.
just let people be or let people identify their own sexuality
"aligning AI with human values" often amounts to "aligning AI with the values of the status quo" (certainly not with the values of the most impacted by AI) so long as we fail to scrutinize which humans we are talking about
the AI industry is destroying the environment (massive energy consumption, vast amounts of water for data centres) while systematically concealing such information. I'm grateful for journalists like
@_KarenHao
doing the crucial investigative work. excellent thread
A big question looms over generative AI: what really is its impact on the environment? I spent months investigating a single campus of Microsoft data centers in the Arizona desert - designated in part for OpenAI - in an attempt to find out. Thread.
Petition to replace "bias" with "harm", "injustice", "oppression" or other appropriate terms that reflect the depth of these problems in algorithmic systems.
@ylecun
Pretty much exactly what happened was you overhyped and released a model. People tested it (you should be thankful for us as it's your job to do this before release) and demonstrated that not only does it fail to stand up to your hype, but it is also dangerous.
I repeat: letting the tech industry establish guardrails and self-regulate is equivalent to expecting the tobacco industry to do cancer research and self-regulate
WATCH: Former Google CEO
@ericschmidt
tells
#MTP
Reports the companies developing AI should be the ones to establish industry guardrails — not policy makers.
“There’s no way a non-industry person can understand what’s possible.”
Addressing the legacy of eugenics in statistics will require asking many such difficult questions. Pretending to answer them under a veil of objectivity serves to dehumanize like the rhetoric of eugenics that facilitated practices like forced sterilization
a robot does not *wake up*. a robot does not take a *deep breath*. it is a machine that depends on human labour through and through. stop spreading misinformation
the push for 'open sourcing' without appropriate regulatory and structural safeguards will only benefit those who are already powerful and in possession of resources
end/
Just like that, Google lost the little legitimacy it had.
@timnitGebru
has been nothing but the ideal role model for Black women, an aspiration, and a towering figure who keeps pushing the whole field of AI ethics to a respectable standard.
The prof's actions constitute racial discrimination, & the department opened an investigation, which has now come to a conclusion.
While I suffered emotional distress & my career was derailed due to his actions, the only consequence he faces is being sent to take ‘trainings’.
8/
Academia: academic writing is dry and full of jargon. This needs to change
*writes in a direct and relatively jargon-free manner*
Academia: sorry, your writing doesn't follow proper format and reads more like an essay than a paper. Go back and rewrite.
"a model called Voice Engine, which uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker" the ONLY thing this enables is fraud at a mass scale
We're sharing our learnings from a small-scale preview of Voice Engine, a model which uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker.
this is a great example of how ML really is nothing more than pattern finding in data
AI doesn't (and can't) make sense of the social world the way humans do, and any patterns it picks up could be silly details like these rather than anything fundamental about the behaviour we are modelling
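The "pattern finding" point can be shown with a deliberately tiny learner. Everything below is invented for illustration: a trivial model that keeps whichever single-feature rule best fits the training data, and ends up latching onto the photo background rather than anything about the animal itself.

```python
# Toy "pattern finder": keep the single (feature, value -> label) rule
# with the highest training accuracy. All data invented for illustration.
train = [
    ({"fur": "thick", "background": "snow"},   "husky"),
    ({"fur": "thick", "background": "snow"},   "husky"),
    ({"fur": "thick", "background": "forest"}, "wolf"),
    ({"fur": "thick", "background": "forest"}, "wolf"),
]
labels = sorted({y for _, y in train})  # ["husky", "wolf"]

def best_rule(data):
    """Return the (feature, value, label) rule with highest training accuracy."""
    best, best_acc = None, -1.0
    for feat in data[0][0]:
        for val in sorted({x[feat] for x, _ in data}):
            for label in labels:
                acc = sum((x[feat] == val) == (y == label)
                          for x, y in data) / len(data)
                if acc > best_acc:
                    best, best_acc = (feat, val, label), acc
    return best

def predict(rule, x):
    feat, val, label = rule
    other = next(l for l in labels if l != label)
    return label if x[feat] == val else other

rule = best_rule(train)
print(rule[0])  # "background" -- fur is useless, so the background wins
# a wolf photographed on grass (fur identical) is labelled a husky:
print(predict(rule, {"fur": "thick", "background": "grass"}))  # "husky"
```

Because background perfectly separates the training set while fur is constant, the learner's "knowledge" of wolves is really knowledge of forests: the spurious-correlate failure the post describes.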
New paper!📢
On Hate Scaling Laws for Data-Swamps with
@vinayprabhu
, Sang Han &
@VishnuBoddeti
Paper:
Code:
WARNING: Contains examples of hateful text & NSFW images that might be disturbing, distressing, &/or offensive
Long 🧵
1/