It's a big day.
Glaze, our tool for protecting artists against AI art mimicry, is now available for download/use at
Glaze analyzes your art, and generates a modified version (with barely visible changes). This "cloaked" image disrupts the AI mimicry process.
@radiantNickangr
None whatsoever. We have zero interest in tracking artists and how they use Glaze. Once Glaze is installed, you can run it forever in offline mode. The only thing you would miss would be notifications of software updates.
I’m sorry, but I can’t be quiet about this.
@stanfordHAI
, who exactly is supposed to represent the artists impacted by AI in your speakers list? The two visiting artists hosted by and funded by HAI? Or the employees from or Google who outnumber them 2:1?
Just announced: As AI systems push the boundaries of human expression, we are forced to confront questions about creativity, authenticity, and ownership. On May 24, join artists and technologists as we explore these complex issues at our Spring Symposium:
Someone wrote this. I looked at it, and I’m amused they decided to associate themselves with Nightshade. Good SEO. The techniques they use have nothing to do with Nightshade. And no, Nightshade is not a GAN; that’s roughly 6 years out of date.
More information on the project page, along with a detailed user guide and FAQs.
Many thanks go out to our friends/collaborators in the artist community, especially
@kortizart
,
@apocalynds
and
@NathanFowkesArt
.
Let me save you a little time… OpenAI releases a model that “respects” copyright, which pays creatives $ when the model uses their content. Except the onus is on creatives to *prove* their content or style is used. At large scale, this is basically impossible. Tada, OpenAI is ethical!
This is amusingly misguided. The idea a company would pay $335K for a skill that can be summed up in a twitter thread… someone needs a refresher on labor market supply & demand. If this tweet reaches enough people for whom this is relevant, then companies do not need to pay $$$$
AI prompting is the next biggest skill to learn.
Companies are now paying up to $335,000/year for Prompt Engineers.
Here are the most Advanced ChatGPT prompting techniques that most people don't know about:
A thread 🧵
Don’t know who needs to hear this, but AI-detection is a near-impossible problem in today’s adversarial setting. Whatever API someone is selling about their AI-detection service, it’s likely overclaiming. Generative models can be patched to avoid artifacts used for detection.
Just got off the phone with a GF, in tears bc after two weeks of writing, research, proofing & tweaking an essay, her instructor accused her of using an AI bot. She’s 51, didn’t know AI bots exist, and is mortified at the accusation. AI-checking APIs are just as faulty as AI.
If it’s not already obvious, there has been no release of nightshade in any form. You can be certain anyone offering an “antidote” is lying. You can’t test, much less “cure” what you don’t have.
That antidote is entirely written by chatGPT. That’s the real reason why it’s so long
@MAiJiNTHEARTIST
We're an academic research group. We don't take in revenue or donations, so running a web service with an API is not practical. Also, from a security standpoint, it is much safer for artists to cloak their own art than to send it across the network for glazing via an API.
A Chinese artist/creator spoke out against AI and refused to use it and in response AICels doxxed her, stole her work and trained it and then threatened to find and r*pe her.
Totally normal world we're living in.
Wow, so it really is happening… How do you “own” someone’s likeness in perpetuity for $100??? This is right out of some bad dystopian sci-fi novel and I want out!
The first week of the strike, a young actor (early 20s) told me she was a BG actor on a Marvel series and they sent her to “the truck” - where they scanned her face and body 3 times. Owned her image in perpetuity across the Universe for $100. Existential, is right.
This Forbes article sums up stability ai exceptionally well. “The AI Founder Taking Credit For Stable Diffusion’s Success Has A History Of Exaggeration”
“exaggeration” here is a kind euphemism for fraud and lies.
Time to redistribute this petition again. Yesterday’s hearings showed just how much a real artist voice can change the conversation on capitol hill. We need diverse voices in this process, and those whose lives are most directly impacted need to be heard!
A reminder for PhD students and young colleagues: you are not defined by your paper rejections (or accepts). Randomness plays a significant part in most paper/proposal review cycles, and even more so for larger conferences (CHI, security confs, ML conferences).
Another proud advisor achievement unlocked. 3 of our PhD students (all coadvised with
@heatherzheng
) were named to the Forbes 30 under 30 list today, for their work on the Glaze project!!
Congratulations to
@shawnshan_
,
@jennacryan
and
@em_wenger
!!
Update: Beta 2 for Glaze is now live and available for download. The frontend UI is similar to before in structure but lacking some of the bells/whistles. Please read notes on the download page for bugs fixed and known issues.
Had amusing exchange with someone defending LLMs as not memorizing content, particularly copyrighted content. So I jumped on chatGPT and tried a little experiment. Folks can decide for themselves.
12 prompts, each producing exact copy of a text snippet verbatim. There’s more…
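For anyone who wants to run a similar sanity check themselves, the crude version is just measuring the longest exact overlap between model output and a known source text. A toy sketch (the strings here are hypothetical, not my actual prompts or methodology):

```python
# Toy check for verbatim memorization: find the longest exact overlap
# between a model's output and a known source text. Hypothetical
# example strings, not an actual eval harness.
from difflib import SequenceMatcher

def longest_verbatim_overlap(model_output: str, source_text: str) -> str:
    """Return the longest contiguous substring shared by both texts."""
    m = SequenceMatcher(None, model_output, source_text, autojunk=False)
    match = m.find_longest_match(0, len(model_output), 0, len(source_text))
    return model_output[match.a : match.a + match.size]

source = "It was the best of times, it was the worst of times."
output = "The model replied: it was the best of times, it was the worst of..."
overlap = longest_verbatim_overlap(output, source)
print(len(overlap), repr(overlap))  # a long exact match suggests regurgitation
```

Long exact overlaps (dozens of characters or more) are hard to explain away as coincidence.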
Finally released the big Glaze update early this morning. Much more subtle and now usable for flat art styles and comics. Take a look. Oh and brand new website featuring lots of artist friends!!
🎉 🎉 🎉
Another big day. The big big Glaze 1.0 update is out. New core optimization function makes Glaze MUCH more subtle. Now works well for most flat-color styles, comics, anime, cartoons. Nearly invisible on most textured art images. Also brand new website!
Artists please note:
This guy is making up stuff to try to scare folks. He claims the Glaze license makes artists agree to scraping. He intentionally misreads it to spread fear, then locks comments so I cannot dispute his claims. This gives you the right to use SD models, nothing more.
I’ve been asked so many times after a keynote talk or panel, by concerned parents/students/working professionals: “Will AI do my job and replace me/my child/my loved one?” Almost always the answer is yes. “What jobs are safe from AI replacement?” I really don’t know.
The harms of generative ai are not a hypothetical in a future scenario. They are not some terminator sci fi nonsense scenario. The real harms are happening NOW. First coders, artists, writers, performers, musicians, soon everyone else.
The time for solidarity and action is NOW!
No, nightshade shows no jpg compression artifacts, you can’t detect it w/ frequency analysis or file metadata, or any of the other methods mentioned. I don’t mind the tool as a forensics tool. Not bad to use basic tools, but they will do nothing against output of smart ML tools.
@DamiDraws_vt
It does not prevent an AI model from training on your art. But when the model trains on your art, Glaze prevents that model from replicating your style when it generates art.
OMG I’m so sick of the “chatGPT replaces X” where X is actually information retrieval.
Good luck when your GPT engine makes up shit about your company’s finances and you use those results in your legal filings. Oh, and good luck after sharing your private data with OpenAI!
SQL is going to die at the hands of an AI. I’m serious.
@mayowaoshin
is already doing this. Takes your company’s data and ingests it into ChatGPT. Then, you can create a chatbot for the data and just ask it questions using natural language.
This video demos the output.
🤯
What happened after last “huge letter signed by all AI companies to pause AI”? Oh right, nothing, and within a few weeks, one of the billionaires who signed it declared he’s building his own LLM.
The whole “help me, my business is going to destroy mankind” schtick is getting old.
Today I gave a presentation at the Art Law conference about my artistic journey, up to discovering my name and over 50 artworks of mine have been exploited for AI image generation without my consent, compensation, or credit.
This is absolutely brilliant. And includes an incisive and clear indictment of the nonstop hype machine that pushes the inevitability of generative AI.
This happened. A PhD student took his own life because of alleged pressure to commit and perpetuate academic fraud. After reading this article and the notes/text messages in both languages, I am floored and incredibly sad. This is the worst of academia, and we cannot let it stand
So
@Medium
has published documents describing the scientific fraud and abusive lab culture that allegedly led Huixing Chen to take his own life. Trigger Warning: (1/n)
Thanks everyone for their well wishes! Humbled and honored to be chosen. They say these are much more a reflection of our students and colleagues than anything intrinsic. Certainly that is the case for me. It's been a fun ride so far, and (hopefully) not even halfway done.
Prof. Ben Zhao (
@ravenben
) was named a fellow of the ACM, becoming the seventh current member of UChicago CS faculty to receive the prestigious honor.
At what point does a ridiculously large # of images generated indicate a problem? How many distinct users do you have, and how many discarded images do they have to go through in order to get what they wanted? How many kWh is that, burnt powering GPUs on the backend?
Lightbox was just amazing. So many great friends, so many fantastic artists, and so much inspiring art. One last photo w/ the glazed Musa on display as I head out.
I worry about this as well every time I see one of those QRT threads. Sharing art is great, but given what Musk has said about training his own generative AI models, please be careful. Consider using Glaze to protect your art before posting publicly.
I've been enjoying all the lovely work on these threads that say, 'QRT with your *insert description* art'
But hang on. Who started them? Suddenly they're everywhere. Call me paranoid, but this is about gathering data sets for AIs to scrape, right?
You know what else is insane? LOL. Someone tweeting about a paper that supposedly comes from his own company like he’s shocked by it, a paper that supposedly has poisoned everything since Jan 2023, and also “we have a defense, contact us for pricing.” 🤣 oh but there’s more…
Wow this is insane, the adversarial data arms race for large AI models has officially begun. I knew poisoning LLM's training data was possible but apparently it's live in the wild.
Even a single article has the power to bias a model to hold any opinion an attacker wants...
@bigdbaggino
lol if you call everything you don’t understand snake oil then I’m sure most/all of machine learning seems like snake oil. That’s quite alright. You don’t have to believe it for others to benefit from it.
Without ethics, without regulation, this will be the future we build with AI. Human identity and uniqueness sold for cheap as a commodity. This is already happening to artists (art style mimicry) and voice actors (voice modeling/synthesis).
The studio’s A.I. proposal to SAG-AFTRA included scanning a background actor’s likeness for one day’s worth of pay and using their likeness forever in any form without any pay or consent.
@tnynfox
That's an interesting thought. Releasing source/licensing does mean more risk from adaptive attacks. Maybe we'll explore that down the road. But for now, we'll let artists use this in the safety of their home/own computers.
Mixed feelings on this. While it’s nice to see a Turing winner join in warning the public on the risks of headfirst rush into generative AI, it is frustrating that it took him this long. Meanwhile, big tech and startups keep pushing ahead: $$$ over ethics.
This entire thread is spot on. So many folks are fooled by some LLM passing some exam designed for humans, that they forget how many versions of those exams (and solns) are in training data. For many areas (including CS) it is ~impossible to come up with original questions.
Testing on human exams is flawed, most of such exams recycle questions because it's understood that a human doesn't scour the entire internet before taking exams.
There is always a chance that language models, trained on millions more data points, just recite answers.
Incredibly frustrating. One can only hope that producers will learn from the backlash for this event that pleading ignorance of AI is just insufficient in today’s creative landscape.
I’m so sorry for all the immensely creative and committed artists who worked on Secret Invasion and whose work will now go to waste because Marvel has, through their own greed and callousness, set off a well-deserved boycott.
AI-generated images are winning art contests, adorning book covers and leaving human artists worried about their futures. “That data is my artwork, that’s my life. It feels like my identity,” one woman said.
A new tool is trying to protect human-made art.
@instacrewberlin
Really? Your read of artists modifying their own art before posting online, is that they are "polluting publicly available data?" Artists own their art, and they can do whatever they want with it. Would you consider every photo you've uploaded to instagram or FB as public data?
Yup, using someone’s likeness to generate an ad for you, without their consent or compensation. And they wonder why SAG is voting to strike?
If you spent a lifetime working to build your identity as an actor, how much is your identity and reputation worth?
@BadMuthaHubbard
@katriaraden
@TheGlazeProject
also there is a paper coming out at CCS 2023 (one of the big 3 in computer security conferences) that effectively protects human voices like glaze does for art.
Some Glaze updates:
1. Looks like glaze needs its own twitter so I can stop using my personal account. We registered
@TheGlazeProject
and will start using it soonish.
2. Windows GPU and metadata issues are taking longer than expected. Will push update when ready. Sorry for delays
For folks not intimately familiar with issues centered around harms of generative AI models, this is an excellent paper that provides critical context and clarifies the key questions in this complex space.
A must read for anyone who is working on or wants to work on genAI.
On that note, as per my previous tweet, here's a paper put together by industry experts and authorities on the topic of generative AI/Machine Learning and the impact this had on artists.
I suggest you read it and share.
Reminder to PhD applicants and faculty candidates. Please please please make a homepage. It's one of the best ways to present yourself to the academic and research community at large. Lots of easy/free options, e.g. . It's almost 2020. It's time.
It was amazing to watch
@kortizart
present the case for protecting rights of human creatives against the tech industry’s push for hype and superlatives and minimal accountability. Passionate, eloquent and clear, and clearly recognized as such by the senators.
Phew. Testimony all done! It was a true honor to have been a part of it, and I can’t thank the Senate enough for the immense honor to bring our issues forth!!
Now to get a drink, draw and play some video games later!!! 😆
OH MY!
Iddo Drori posted paper & dataset without knowledge or consent from coauthors, AND misrepresented the fact that he did not have permission from instructors to collect assignments/exam questions for the dataset.
Paper withdrawn
MIT: “and no, GPT-4 cannot get an MIT degree.”
Thanks Jessie.
My response to the ArtCenter article was written in a rush, but I think it is a pretty accurate summary of how I feel about generative AI for art.
I give credit to Mike for posting my email to his article. I’m sure it was not easy to read or post.
Remember when ArtCenter decided to dip their toe in the generative AI pool and all the students and industry folk raised the alarm? This was their weaksauce response to
@ravenben
Wait you mean identifying generative content is hard? and low accuracy you say? 26% true positive rate with 9% false positives?
But what about the voluntary steps you promised you’d take for AI “safety”? You’re extending your AI detector to audio and visual content? Awesome!
@timnitGebru
See also: OpenAI Quietly Shuts Down its AI Detection Tool (via
@dragonwolftech
@decryptmedia
)
"As of July 20, 2023, the
#AI
classifier is no longer available due to its low rate of accuracy."
Uh yeah.. because it was not in its training data, so it doesn’t know what tokens should be predicted with highest probability.
I’m still surprised by people who are surprised when LLMs act like the token predictors they are, not the “intelligent” machines they’re hyped to be.
Writing code that is so new even gpt-4 has no idea what to do. Using the latest versions of libraries, gpt-4 doesn’t know how to solve the errors - actually slows me down trying to prompt it when I can make sense of it quicker
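For the curious, the “token predictor” part is not a metaphor; the last step of every LLM forward pass really is a softmax over scores for each candidate next token. A toy sketch with a made-up vocabulary and made-up logits (no real model involved):

```python
import math

# Toy next-token prediction: an LLM forward pass ends in exactly this step,
# a softmax over logits followed by picking (or sampling) a token. The
# vocabulary and logit values below are invented for illustration only.
vocab_logits = {"Paris": 5.1, "banana": 0.2, "the": 1.3, "42": 0.9}

def softmax(scores):
    z = max(scores.values())                      # subtract max for stability
    exps = {t: math.exp(s - z) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

probs = softmax(vocab_logits)
next_token = max(probs, key=probs.get)            # greedy decoding
print(next_token, round(probs[next_token], 3))
```

If the answer wasn’t in the training distribution, these scores are just noise, which is the whole point.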
Errrr okay, but there’s no reliable watermark that can’t be easily broken or disrupted? How exactly do they plan to deploy something that doesn’t exist yet?
Meta, Google, Microsoft & OpenAI have agreed to add a new watermarking system informing users when content is AI-generated.
This is not to protect users or anything, it's still a selfish move to keep their models from choking on AI data.
One of the totally unexpected but amazing perks of working on
@TheGlazeProject
is that we’ve gotten some amazing art from artists we worked with. The whole team is thrilled. We frame this amazing piece by
@eballai
in the lab.
Wow Twitterverse! The glaze tweet is now up over 2M views and somehow accelerating, judging by the # of messages and responses/retweets etc. I'm exhausted trying to answer questions and clarify concepts/statements etc.
Gonna go do my day job now, going AFK & prepare for a class.
Great article from
@kenklippenstein
at TheIntercept, discussing the realism of challenges in using generative AI and its role in the WGA/SAG-AFTRA strike. I contributed a few quotes.
hey all.
Quick note. Glaze server is doing some maintenance tonight, so it will be up after morning reboot.
We expect to push another update tomorrow, with some more bug fixes, including a workaround for some virus checkers that are flagging a model resource (false positive).
@PAWZ212
Easiest way to detect total frauds is when they make claims about things that don’t yet exist. Like “oh there’s a mole on the nightshade team.” Like srsly AI-bro? This is not some corporate team! You have no idea how hard my students work and how much they believe in what we do.
Another big milestone in a busy year. The amazing
@em_wenger
defended her fantastic PhD thesis in spectacular fashion today. Now she has huge (but so exciting) decisions to make about her future! Super proud advisors (
@heatherzheng
and me). Onwards and upwards!
It’s clear that watermarking output of generative AI models is a really challenging (and potentially intractable) task. Here’s another paper showing how current watermarks on AI-generated images can be effectively removed with minimal impact on quality.
Hmm. I knew Japan was very pro-AI. But I didn’t expect Japan to effectively turn itself into a surveillance state, and for profit, no less. Will be watching to see if there are legal or regulatory followups to this. cc
@kashhill
A Japanese company that installs a large number of cameras in the city, converts camera images into features, and makes the data open source.
The company monitors the daily activities of passersby and tracks their "clothes," "belongings," and "movements" to sell to companies.
@kortizart
Just to expand a wee bit more. Glaze's cloaking effect is not like a watermark or hidden signal. It studies the AI art models representation of "artistic style," then disrupts it in that dimension. Kinda like changing the ultrasonic melody of a sound for dogs (if models=dogs).
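For the technically inclined, here is a very loose conceptual toy of that idea: nudge a work within a small budget so a "style extractor" reads it as a different style. Everything here (the one-line extractor, the vectors, the random search) is invented for illustration; Glaze's actual optimization works against real model feature spaces.

```python
import random

# Conceptual toy of style cloaking: perturb an "artwork" (just a feature
# vector here) so a stand-in style extractor maps it toward a decoy style,
# while every feature stays within a small perturbation budget.
random.seed(0)

def style_embed(x):                 # fake stand-in for a model's style features
    return [xi * 2.0 for xi in x]

def dist(a, b):                     # squared Euclidean distance
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

art = [0.3, 0.8, 0.5]                        # original "artwork"
decoy_style = style_embed([0.9, 0.1, 0.4])   # style we want the model to see
budget = 0.15                                # max per-feature change

cloaked = list(art)
for _ in range(2000):               # crude random search under the budget
    cand = [min(a + budget, max(a - budget, c + random.uniform(-0.02, 0.02)))
            for a, c in zip(art, cloaked)]
    if dist(style_embed(cand), decoy_style) < dist(style_embed(cloaked), decoy_style):
        cloaked = cand

print(max(abs(a - c) for a, c in zip(art, cloaked)))  # within budget
```

The image barely changes, but its position in the extractor’s style space moves a lot, which is the asymmetry the cloak exploits.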
@Rahll
@BrandonLive
@timnitGebru
soooo, i didn’t read the long convo you guys had. but this is about whether chatGPT “remembers” content it trained on, yes? Well, I just hopped on and spent 5 mins on chatgpt. Ever hear of harry potter? I imagine it’s copyrighted, yes?
In light of news today about a particular service to “defeat” web scrapers by collective IP blacklist, I want to offer some basic facts about network security.
1. IP addresses are nearly impossible to blacklist, because they are dynamically allocated in many/most networks, 🧵
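A quick toy of why that matters for blacklists (the pool below is a made-up documentation range, not any real network): one machine, two DHCP leases, and the per-IP blacklist misses the second one entirely.

```python
import ipaddress

# Toy illustration: under dynamic allocation, a scraper blocked at one
# address simply reappears at another address in the same pool.
pool = ipaddress.ip_network("203.0.113.0/24")        # hypothetical DHCP pool
blacklist = {ipaddress.ip_address("203.0.113.7")}    # IP seen scraping yesterday

addresses = list(pool.hosts())
print(len(addresses))     # 254 assignable addresses in just this one /24

# After a fresh lease, the same machine shows up elsewhere in the pool:
new_lease = ipaddress.ip_address("203.0.113.42")
blocked = new_lease in blacklist
print(blocked)            # the blacklist never sees it coming
```

Multiply that by carrier-grade NAT, VPNs, and cloud proxies, and per-IP blocking becomes whack-a-mole.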
@bigdbaggino
Nobody declared mission accomplished. Read the website for the section on limitations. We also tackled facial recognition 3 years ago. It's not perfect but it significantly raises the bar for attackers. And nobody is paying for anything. Glaze is free.
Perhaps this is known to some, but this is incredibly important for anyone trying to understand what the real issue is with most of the generative AI models today. It’s rarely the fine tuning data that’s the issue. 👇👇
Okay. Inspired by news &
@stealcase
, let me clarify something.
When AI companies release "open training data" for a model, they're generally sharing *fine-tuning* data. The big issues w data and consent are NOT of this type. The issues are with the MAIN DATA used in training.🧵
Scarlett did this absolutely brilliant piece for me in a single afternoon. I used it as the clincher on a keynote I gave this week at Big10+ conference. A perfect representation of the road we’re headed down. The masses of humanity working for AI instead of the other way around.
~ Adapt or Die ~
I made this for the brilliant
@ravenben
and naturally, it's protected by
@TheGlazeProject
We're not up against machines, but against greedy people with machines. Let us not let it get this far.
🚨 The EU AI Act is moving🚨
🤖 Generative AI tool will have to disclose what, if any, copyrighted materials were used for training
🤖 3 risk levels remain: prohibited, high, and low
Three CS professors walk into a Zurich bar; one Indian, one British, one (currently) American. No one can figure out whose country is the biggest global disaster right now. We all drink.
#truestory
Oh come on. You mean LaundryBuddy was not enough to justify the millions spent on model training, and the significant negative impact on the environment?
Baidu CEO: Building AI models is an ‘enormous waste of social resources.’
Apparently, there are countless foundation models in China, yet there are barely any practical applications. It shows how easy it is to build an LLM and how difficult it is to find a use for them.
AI in 2023. Company X announces they’re moving to/adding AI, proceeds to lay off a ton of employees, and then starts training models on their customers’ data.
@images_ai
@JonLamArt
Data poisoning is a term of art used to describe specific vulnerabilities of neural network models. It’s a technical term used commonly to describe specific attacks and defenses in security literature.
Mea culpa. We released Glaze using a front end user interface that reused significant portions of code from DiffusionBee (GPL license). A careless mistake we are now rectifying. We are releasing the source code for Glaze front end, and also working on a rewrite of the frontend.
Was waiting for this. Misinformation to mislead couched in the guise of parody. Just to be clear, this is made up BS and has nothing to do with the Glaze project.
New from the UChicago's Glaze Project, a Spray on Glaze that creates a real life protective cloak for your artwork.
Says lead researcher Ima Hiden, "We weren't satisfied with how useless our digital tools were, so we decided to help artists protect their physical work even…
Yomiuri Shimbun, largest circulating newspaper in Japan, is doing a series of articles on generative AI & impact on art. They did an article on Glaze, where I discuss some issues facing artists.
Another article w/
@SarahCAndersen
AI art bros HATE this one trick!!
here's a fun dive into nightshade, the *free* tool poisoning ai model training and giving artists a chance to fight back against nonconsensual scraping
ty
@TheGlazeProject
@ravenben
@Kelly_McKernan
@saltybretzel
@uratoh16310
Thank you! First, I want to say that artists should be very careful with their $. It is a tough time and please do not put yourself into a tougher financial situation because of Glaze.
But if you are sure you want to donate, we have a campus contact who manages this:
Of course AI has great uses. AI for science, for medicine, for drug discovery. So many new capabilities to enhance our lives. But they do not require generative AI. Discriminative AI tools do not scrape content without consent or generate plausible mimicries while ignoring facts.
Definitions in Sec 1 clearly state “contribution” refers to the model, and has nothing whatsoever to do with your art. 🤦🏻♂️ Please read the license yourself and decide for yourself.
The
#AIAct
’s committee voted and approved a new version of the AI Act, a version that shows that the European Institutions and the AI Act rapporteurs have listened to the concerns of the creative community and to our requests.
This new text is a huge step in the right direction.
From my personal perspective, Michael I. Jordan has dominated the field of statistical machine learning, modern ML/AI, like the other Michael Jordan did his. It’s a relief to hear his balanced, pragmatic, and honest perspective on what generative AI really is. Thx Mike!
Michael I. Jordan on ChatGPT and AI: "We're fearful in 10 years it can get out of control and all that. That's just not true. ChatGPT is predicting the next word in a sentence, just remember that's what it does".
1/3
So important to read & listen. Generative AI does not come out of just scraping everything online (including copyrighted and private data), but it requires a population of humans working to filter out the absolute worst trash. Maybe we won’t work as batteries, just trash curators
World, meet Alex, Bill, and Mophat, three workers whose labor was essential to filtering violence and abuse out of ChatGPT.
For the first time they’re ready to tell you who they are—and how the work unraveled their lives and their families.
Misspoke. No simulation, no actual tests.
A “thought experiment” done by an external party.
Yeah, there’s no AI-doom hype here at all. Just a run of the mill day where govt spokesperson makes up sensationalist stories on terminator style AI.
@DanFessler
Actually it's quite different from a watermark. There's no fragile hidden message to disrupt or destroy. It simply figures out AI models' mathematical representation of "artistic style", and changes the art in that dimension. Like changing ultrasonic freq of sound for dogs.
Today I learned about the EGAIR organization to protect art from AI in Europe. It's a great grass roots effort to impact decision makers in the EU for change.
Sign the manifesto: protect our art and data from AI companies - Sign the Petition!
Well said Margaret!!
In this day and age, maybe even that is just asking way too much?
How about a very basic step of demonstrating some minimal level of transparency and just disclosing where all this “critical training data” came from??
Dear tech industry,
Instead of having a race for who can put the highest numbers on (awkward) benchmarks, can we have a race on who can implement the best mechanisms for data consent?
It’s Halloween and time for our annual lab photo in costume. This year the students voted for Dungeons & Dragons as the theme. Apparently they really enjoyed the movie. How many Thayan Red Wizards can you see? Oops I still need a fuming flask of Belladonna to complete my costume.
Glaze finally has a windows GPU version for download.
Side note: if you are using this and glaze speed is very fast, then consider turning up Render Quality to increase the protection on your image (for whatever intensity level you chose).
Ok, the wait was not long.
has been updated with the Windows GPU version. The download is bigger (sorry), because ginormous PyTorch GPU libraries. But so far the speedup seems ridiculous.
Have fun. Let us know w/ comments here how well it works for you.
Very special guest arrived today. She will be visiting our lab and finding a permanent home as the centerpiece of our little budding gallery! Thank you
@kortizart
!!! 🤩
Thrilled to be able to announce that
@ineffablicious
(Marshini Chetty) and
@feamster
(Nick Feamster) from Princeton will be joining our faculty at UChicago CS this fall!! Can't wait!!!
Can't quite believe I'm writing this: Today, 150 African workers behind ChatGPT, TikTok and Facebook voted to unionize at a landmark meeting in Nairobi.
These AI workers are invisible, underpaid, and the backbone of the tech in all our pockets:
As AI displaces and implodes entire human industries (starting with the most creative), it will fill the internet with cheap copies of human created content. When there are no humans to create the training data they need, models will stagnate while ingesting their own output.