"[A]ny factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition."
Who here realises the ENORMOUS privilege that the use of English as an international science & publishing language confers on native English speakers in academia?
"What's your degree in?"
"Ecology."
"Oh, so you, like, hack through the rain forest with a machete and look at rare animals and plants?"
"No... I sit on my ass and stare at a computer screen all day, writing code and crunching numbers while wearing a pith helmet."
My 11-yr-old son asked to post this poll as a science experiment. He is very interested in this question & hopes to have reliable data. Can you please RT? He would be very grateful.
Do you think a human brain can think, in principle, a finite or an infinite number of thoughts?
Several male scientists have asked recently what they can do to be better allies for women in science. I’m making this thread to collect possible answers & examples. If you have tips, advice, requests, examples etc. please feel free to add to this thread (or @ me & I’ll add it).
If you are using LLMs for summarizing long docs, you really should read this paper
Over 50% of book summaries (incl. by Claude Opus and GPT-4) were identified as containing factual errors and errors of omission
Lesson: don't blindly assume AI summarization tools work. Test them.
If you don't agree that AGI is coming soon, you need to explain why your views are more informed than expert AI researchers. The experts might be wrong -- but it's irrational for you to assert with confidence that you know better than them.
To senior academics who only follow & interact w/ people they consider ‘academic peers’, I’d like to say:
I’m not impressed. You could try to be less arrogant & perhaps learn new perspectives if you’d acknowledge the valuable contributions here from junior & marginalized people.
When do we stop falling for AI hype and wasting resources? We know these systems fundamentally cannot scale functionally without consuming astronomical resources. Can we just stop and start thinking carefully for ourselves again?
Sam Altman wants up to $7 trillion for AI chips. The natural resources required would be 'mind boggling,'
@SashaMTL
told me. "Even if the energy is renewable (which it isn't guaranteed to be), the water and rare earth minerals required is astronomical."
@flownepp
@IrisVanRooij
Take a stand-up comedian, who gets all their material from observing everyday life. None of it is credited, and that’s fine. That’s how AI works.
Omg, the quote tweets of this 😅
Yes, people, I am a computational cognitive scientist and may have some expertise in “understanding”, I do not know; maybe because we study it and know it has no tractable computational characterization yet 😎
🚨 Excited to release a "living" version of the OPEN and interactive textbook "Theoretical Modeling for Cognitive Science and Psychology", by
@MarkBlokpoel
and myself. Check it out! And share widely. 1/n 🧵👇
There is no such thing as “the” bias in face processing. The authors are deciding to automate *someone’s* bias & we can all see whose bias that is. Surprised by enthusiastic responses to this problematic work. I assume the rest of us may be too appalled to engage (I was, too)
I cannot believe I am reading this? Here is a professor defending harassment of butch lesbians in public toilets because that is a normal price to pay for her and others' transphobia.
Announcement: I've accepted a new role as Senior Editor for the journal Cognitive Science. Delighted to be working with the new Executive Editor Rick Dale & colleague Senior & Associate Editors & Board. Be sure to submit your best multidisciplinary work in CogSci to the journal!
“I think it’s stunning that someone would say that harms [from AI] happening now—which are felt most acutely by [the] historically minoritized: Black people, women, disabled people, precarious workers, etc—that those harms aren’t existential.”
@mer__edith
Seen the latest physiognomy paper in PNAS?
Some people felt the criticism was superficial or the outrage unjustified. Some claimed the paper's goal was not to do bad, but to do good.
Well, I read the paper (and the patent application).
Let me tell you what I found.
THREAD
New research shows training LLMs on exponentially more data will yield only linear gains. So as Silicon Valley seeks ever more data, compute, energy and human workers for AI systems, the improvements will be marginal at best. Something tells me this new info isn't going to stop it.
To academics, who feel entitled to “intellectual debates” when colleagues express pain about their marginalisation or sexual harassment, I’d like to say:
You aren’t as smart as you think you are & you‘re part of the problem.
Large language models cannot read, and because of this, we should not have them respond to student writing. If we are impressed with LLM feedback, we should rethink what kind of feedback is being given.
I guess, a bit naive to think there'll be no spill over from a profile update on LinkedIn to Twitter 🙂 but then I might as well update here too: Happy to share the news that I've been promoted to Professor of Computational Cognitive Science
@Radboud_Uni
@DondersInst
@AI_Radboud
Seen the Nature Comms paper that caused outrage? Did you also think: “this is not physiognomy, just an algorithm mimicking human biases; surely there is no intent to use that algorithm for other purposes"?
This came out yesterday (h/t
@zeyneparsel
) 1/n
I share this fear. I am deeply worried. As I and others have written here, I believe that current industry-driven ML/AI-as-engineering is infiltrating our science and deteriorating our understanding of cognition and of ourselves as human beings.
I have fear that corporate greed is corrupting our academic discipline (ML/AI). I don't know which actions to take that are not just performative. I imagine many others have thought about this much more than I have, so if you have good ideas, please share. Here are a few: 🧵
Yes, using ChatGPT to generate ideas and texts for essays, articles, books, etc. is a form of (automated) plagiarism.
No, one cannot credit ChatGPT or OpenAI; they stole the ideas and texts from (uncredited) authors.
@provisionalidea
@APA
@interacciones
"Does using AI-generated text constitute plagiarism? Should authors who use ChatGPT credit ChatGPT or OpenAI in their byline? What are the copyright implications?"
>>
It bears repeating that treating women and men equally poorly is not non-sexist in a world where women and men cannot respond to your maltreatment in the same way without differential consequences.
Mentally preparing a class on "Cognitive Science and AI" for 1st year AI students and looking for examples of how knowledge of human cognition can help curb over-hyped AI claims of "sentience" or "human-level AI". A 🧵with some ideas I have so far. What else would you suggest?
Really, cogsci colleagues, help people resist this nonsense & hype by inserting knowledge & critical perspectives. This whole "AGI is coming soon" is just to distract from the real-world harm of the tech and to keep pouring in money where it should not go
📢 Are you a psychologist? Would you like to develop theory? Wonder where to start? 👇
Check out new preprint by
@giosuebaggio
& myself: "Theory before the test: How to build high-verisimilitude explanatory theories in psychological science" 1/n
Well done. Happy to see the updated response. The earlier statement was indeed insensitive. It is heartening to see that people can learn and change perspective on these issues.
Others also annoyed by this platitude?
"ChatGPT/LLMs/etc are here to stay!"
and then:
"so, we must embrace them!"
As if things that are here to stay are by definition "good".
I'm afraid crime, illness, climate change are all here to stay (sadly). Embrace them?
Updated (final) version of "How hard is cognitive science?", by Patricia Rich, Ronald de Haan, Todd Wareham, and myself, is now available. Let me make a thread with highlights for the occasion 🧵👇 1/n
It’s amazing to me that people think using AI can only mean ceding control and giving up your own thinking. Bouncing ideas off of anything highly responsive can help a person clarify and extend their own thinking.
Now compare the simplicity of this system to the complexity of a human being, and then think about the consequences of making predictability of behaviour the cornerstone of one's theories ...
Very unwise to use LLMs for this. Use instead a carefully designed algorithm that is developed for this specific purpose, with a transparent specification that is legally validated and provably correct.
📢 New preprint: "How hard is cognitive science?" - by Patricia Rich, Ronald de Haan, Todd Wareham & myself.
This version is accepted for
#cogsci2021
, but we're revising based on the reviews (until May 11). Any comments, questions, or feedback welcome!
The concept of ‘abduction’ came up in twitter discussion and people asked what it means. This may be a good occasion for a short thread on a recent paper about the challenges in characterising ‘abduction proper’. 1/n
I used to work at a technical university & my postdoc mentor there used to say:
"Be careful, here they pray to the God of technology & believe all society's problems have technological solutions".
I think I never fully grasped the weight of his warning until this AI hype cycle.
"Here I collect a selected set of critical lenses on so-called ‘AI’, including the recently hyped
#ChatGPT
. I hope these resources are useful for others as well, and help make insightful why we need to remain vigilant and resist the AI hype."
Dear colleague scientists,
Trying to prove a point here.
Can you tell us which philosophers and/or subfields of philosophy have been vital for your science?
Please RT for wider reach.
#ScienceTwitter
#Philosophy
Someone just stated: “intelligence as search” is “a new paradigm for cognition”.
This idea is not new at all & was already proposed in the 1950s.
It is, among other things, this type of historical ignorance & amnesia that makes present-day AI so susceptible to hype.
I am not taking community feedback? Haha
LLM-bros are not my community
I care about communities harmed by AI hype, including marginalised communities and the cognitive science community
@IrisVanRooij
These researchers seem to be 8-12 months behind on this and don’t take any community feedback.
With multistep self-structuring LLMs consistently make EXCELLENT summaries and can simulate reasoning ability extremely well albeit quite inefficiently.
@Jabaluck
The majority of my peers also know better and do not take these (fake) "estimates" seriously 🙂 Anyone who deeply understands what it takes to create AGI, knows better.
@j_r_wheatley
@skidadesert
I’m a scientist. I think we’re good 😉 It’s a kid’s science project. It’s about learning, curiosity and fun. Of course we will also discuss the limitations of the method. All part of the learning experience.
📢 Now out in Perspectives on Psychological Science, paper w/
@giosuebaggio
: "Theory before the test".
A THREAD with background info & highlights 🧵👇 1/n
I feel there must be a word for (often) men asking for "empirical proof" in an attempt to undermine (often) women's expertise, while in fact they lack knowledge and expertise themselves
This is the type of mansplaining women come up against every day. You MUST watch this immediately 👏🏻
“As far as I’m aware she does not have any degree in economics.”
“I was and I remain a professor of economics.”
If a woman who is your friend makes an effort to genuinely communicate with you that something you did is or contributes to sexism, please know that she is doing that because she cares and trusts you can understand. Don't betray her trust by being fragile and ignoring her.
ML standardly talks abt *human-level* inference as the "gold standard". But unlike perceptually inferring 'cats' or 'chairs', personality CANNOT be "inferred" from faces. People *project* their prejudices *onto* faces. Here "scientific modelling" hides pseudoscience. 3/
I’d rather “resist the urge to be impressed” (as
@emilymbender
says). All these systems are automated plagiarism with hidden human labour, stolen work, and worrisome environmental impact. (Also, they get things wrong in so many ways. See next tweet.)
Cog Sci question: am I allowed to be impressed at the things generative AI can do (realistic video??), while also unimpressed with it as a model of human cognition?
It can create cool trippy images, but it doesn't seem to understand them. But that's still impressive!
“The AI programs that everyone is talking about cannot exist without data: billions of images, tons of text. The companies behind it have therefore emptied the entire internet - without permission. There is a nice word for that: theft…” (English translation)
@IrisVanRooij
@IamVisla
@o_guest
Which is why we probably see so many Mechanical Turk situations coming out lately with what businesses claim is AI/ML, but it's just an algorithmically-enhanced human worker (looking at you Amazon...)
It's informative that my responses annoy the men (only men it seems) in this thread who like to believe we will have AGI soon. It is not a coincidence that I am said to be "snotty", "just making assertions" & getting blocked. I could not possibly know what I am talking about 🙂
Now reading: "How to explain behavior?" by Gerd Gigerenzer, in the journal TopiCS, special issue "Levels of Explanation in Cognitive Science: From Molecules to Culture" 1/n
Saw this post on LinkedIn. It is “liked” by several university faculty/scientists.
It seems a replicability crisis was not enough, so we are creating a crisis of methodological distortion. /s
📢 Excited to be able to finally share this tutorial for Social Psychology: "Formalizing verbal theories: A tutorial by dialogue" by
@MarkBlokpoel
and myself. 🧵👇 1/n
"Improving" the world by promoting the use of generative AI (a.k.a. plagiarism machine) with a terrible ecological footprint to so-called "imagine" more sustainable city planning, instead of paying professionals to imagine, design and draw.
Researchers have used artificial intelligence to create visualizations of what a less car-centric city might look like. Here’s how sustainable policies could get a boost from these AI-generated visuals:
My mother fled Hungary in '56 with her mom, dad & brother when she was 11 (the age of my daughter now). The Netherlands welcomed them with open arms. Still thankful for the life given to her (and me). Please, I beg the world, help these fugitives and welcome them with open arms.
Data are not ‘evidence’. Data can have evidential value as part of an argument for or against some claim. But in any case, ‘evidence’ is relative to a claim.
PSA: Many people may not be aware that the author of "Invisible Women: Exposing Data Bias in a World Designed for Men", Caroline Criado-Perez, has been spreading transphobia. Please do not mistake promoting her work for inclusive, intersectional feminism. It isn't.