A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission.
[...]
Zhao’s team also developed Glaze, a tool that allows artists to “mask” their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows.

[-] egeres@lemmy.world 45 points 1 year ago

Here's the paper: https://arxiv.org/pdf/2302.04222.pdf

I find it very interesting that someone went in this direction to try to find a way to mitigate plagiarism. This is very akin to adversarial attacks in neural networks (you can read more in this short review https://arxiv.org/pdf/2303.06032.pdf)
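
For anyone who hasn't seen adversarial examples before, the textbook illustration is the fast gradient sign method (FGSM): nudge every pixel a tiny step along the loss gradient so a classifier changes its mind while the image looks unchanged to us. This is not what Nightshade itself does (the paper describes a more targeted optimization); it's just a minimal sketch of the general idea in PyTorch, with the model and epsilon as arbitrary stand-ins:

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Any pretrained classifier works as a stand-in target; ResNet-18 keeps it small.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

to_tensor = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),          # pixels in [0, 1]
])
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def fgsm_perturb(image_path: str, epsilon: float = 2 / 255) -> torch.Tensor:
    """One signed-gradient step against the model's own prediction."""
    x = to_tensor(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)

    logits = model(normalize(x))
    label = logits.argmax(dim=1)    # whatever the model currently thinks
    loss = F.cross_entropy(logits, label)
    loss.backward()

    # Nudge every pixel a tiny amount in the direction that increases the loss,
    # then clip back to the valid pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```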

I saw some comments saying that you could just build an AI that detects poisoned images, but that wouldn't be feasible with a simple NN classifier or feature-based approaches. This technique changes the artist's style itself into something the AI sees differently in the latent space, yet the image is visually perceived as the same. And since it's being shifted toward a different style the AI has already learned, it's fair to assume the result will still look realistic and coherent. Although maybe you could detect poisoned images with some dark magic: take the targeted AI and analyze the latent space to see whether the image has been tampered with.
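
To make that "dark magic" slightly more concrete, one way it could look is: embed the image and a description of what it visually shows with the same kind of encoder the target model uses, and flag images whose embeddings drift unusually far from their visible concept. This is purely a sketch of the idea, not anything from the paper; the CLIP checkpoint and the thresholding are assumptions:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def concept_distance(image_path: str, visible_concept: str) -> float:
    """Cosine distance between the image embedding and the text embedding of
    what the image *looks like* to a human, e.g. "a painting of a dog"."""
    inputs = processor(text=[visible_concept],
                       images=Image.open(image_path).convert("RGB"),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return 1.0 - (img @ txt.T).item()

# Hypothetical usage: compare the distance against a baseline computed from
# known-clean images of the same concept and flag the outliers.
```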

On the other hand, I think that if you build more robust features and just scale the data, these problems might go away with more regularization in the network. Plus, it assumes you're targeting a single AI generation tool, and there are a dozen of them; if someone trains with a few more images in a cluster, that's it: you've shifted the features and the poisoned images are invalid.

[-] vidarh@lemmy.stad.social 31 points 1 year ago

Trying to detect poisoned images is the wrong approach. Include them in the training set and the training process itself will eventually correct for it.

> I think if you build more robust features

Diffusion approaches etc. do not involve any conscious "building" of features in the first place. The features are learned by training the net to match images with text features correctly, and then "just" repeatedly predicting how to denoise an image to get closer to a match with the text features. If the input includes poisoned images, so what? It's no different from, e.g., compression artifacts or noise.
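
To spell out what "repeatedly predict how to denoise" means: the training objective really is just "guess the noise that was added to this image", conditioned on the text. Here's a rough sketch of the standard epsilon-prediction step; `unet` stands in for whatever denoiser is being trained, and real pipelines precompute the noise schedule and work in a latent space:

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(unet, images, text_emb, num_timesteps=1000):
    """One standard denoising-diffusion training step (epsilon-prediction).

    `unet` is assumed to map (noisy_images, t, text_emb) -> predicted noise.
    """
    b = images.shape[0]
    t = torch.randint(0, num_timesteps, (b,), device=images.device)

    # Linear beta schedule; real implementations precompute this once.
    betas = torch.linspace(1e-4, 0.02, num_timesteps, device=images.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(b, 1, 1, 1)

    # Blend each image with fresh Gaussian noise according to its timestep.
    noise = torch.randn_like(images)
    noisy = alpha_bar.sqrt() * images + (1 - alpha_bar).sqrt() * noise

    # The model only ever learns "what noise was added here?"; a poisoned
    # pixel pattern is just one more thing to average over.
    pred = unet(noisy, t, text_emb)
    return F.mse_loss(pred, noise)
```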

These tools are all demonstrated against models that were trained without images processed by them in the training set, with at most some fine-tuning; all they actually show is that models which haven't seen many images processed by that particular tool will struggle.

But in reality, the massive problem with this is that we'd expect any such tool that becomes widespread to be self-defeating: it becomes a source of images that work their way into the models at sufficient volume for the models to learn them. In doing so, these tools will make the models more robust against noise and artifacts, and so make the job harder for the next generation of such tools.

In other words, these tools basically act like a manual adversarial training source, and in the long run the main benefit coming out of them will be that they'll prod and probe at failure modes of the models and help remove them.

[-] RubberElectrons@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

Just to start with: I'm not very experienced with neural networks at all, beyond messing with OpenCV for my graduation project.

Anyway, the fact that these countermeasures expose "failure modes" in the training isn't a great reason to stop doing this; e.g. scammers come up with a new technique, and we collectively respond with our own countermeasures.

If the network feeds back into itself, then cool! It has developed its own style, which is fine. The goal is to stop people from outright copying existing artists' styles.

[-] vidarh@lemmy.stad.social 5 points 1 year ago

It doesn't need to "develop its own style". That's the point. The more examples of these adversarial images there are in the training set, the better it will learn to disregard the adversarial modifications and still learn the same style. As much as you might want to stop it from learning a given style, as long as the style can be seen, it can be copied, both by humans and AIs.

[-] RubberElectrons@lemmy.world 1 points 1 year ago

There's a lot of interesting detail to your side of the discussion that I may not yet have the knowledge to follow. How does the eye see? We find edges, gradients, repeating patterns which become textures, etc... But our systems can be misdirected; see the blue/black vs. white/gold dress, for example. NNs have the luxury of being rapidly iterated, I guess, compared to our lifespans.

I'm asking questions I don't know answers to here: if the only source of input data for a network is subtly corrupted, won't that guarantee corrupted output as well? I don't see how one can "train out" the corruption which misdirects the network without access to some pristine data.

Don't get me wrong, I'm not naive enough to believe this is foolproof, but I do want to understand why this technique doesn't actually work, and by extension better understand how training a nn actually works.

[-] barsoap@lemm.ee 2 points 1 year ago* (last edited 1 year ago)

> if the only source of input data for a network is subtly corrupted, won't that guarantee corrupted output as well?

We have to distinguish between different kinds of "corruption", here. What you seem to be describing is "if we only feed the model data from rule34, will it ever learn proper human anatomy" and the answer is no, it won't. You'll have to add data which narrows the range of body proportions from cartoonish to, well, real. That's an external source of corruption: Feeding it bad data (for your own definition of "bad"). Garbage in, garbage out.

The corruption that these adversarial tools are exploiting, though, is inherent in the model they're attacking. Take ropes and snakes and cats (or, generally, mammals) as a good example: it is incredibly easy for a cat to mistake a rope for a snake. The two look exactly the same to the first layers of the visual cortex, and evolution would rather have the cat jump away as soon as possible than be bitten; it doesn't hurt to jump away from a rope (even though the cat might end up annoyed or ashamed -- yes, cats can 110% be self-conscious, but that's a different story). So when there's an unexpected wiggly shape, the first layers directly tell the motor cortex to move, short-circuiting any higher processing.

That trait has been written into the network by evolution, very similarly to how we train AI models -- conceptually, that is: in both cases the network gets trained for fitness for a purpose (the implementation details are indeed rather different, but also irrelevant here).

What those adversarial tools do kinda looks like this: take a picture of a rope. Now randomly shift pixels to make the rope subtly more snake-like until you get your cat to jump as reliably as possible, in as many different situations as possible, e.g. even when it's expecting it and staring straight at it. Sell the product for a lot of money. People start posting pictures of ropes, rope manufacturers adjust their weaving patterns. Other cats see those pictures and ropes; some jump, and others only feel a bit, or a lot, uneasy. The ones that jump will not be able to procreate any more, being busy jumping, while the uneasy ones will continue to evolve. After a couple of generations, no cat cares about those ropes with shifted pixels any more.

Whether that trains general immunity against adversarial attacks -- I wouldn't be so sure. It very likely will make the rope/snake distinction more accurate. But even if it doesn't build general immunity, it's an eternal cat-and-mouse game, and no artist will be willing to keep paying for that kind of software when it gets defeated within days anyway, because that's just how fast we can evolve models.

Oh. Back to the definition of corruption: if all the pictures of rope that our models ever see have shifted pixels, then the model is just going to assume that's the norm, and it will distinguish rope from snakes because the tags say "rope" in one case and "snake" in the other. The original, un-shifted pictures probably won't act as an adversarial attack, because they're not the product of trying to get cats to jump.

[-] vidarh@lemmy.stad.social 1 points 1 year ago

Quick iteration is definitely the big thing. (The eye is fun because it's so "badly designed" - we're stuck in a local maximum that just happens to be "good enough" that we never got past its big glaring problems.)

And yes, if all the inputs are corrupted, the output will likely be too. But 1) they won't all be, and as long as there's a good mix, that will "teach" the network over time that the difference between a "corrupted cat" and an "uncorrupted cat" is irrelevant, because both will have most of the same labels associated with them; and 2) these tools work by introducing corruption that humans aren't meant to notice, so if the output has the same kind of corruption, it doesn't matter. It only matters to the extent that the network "mis-corrupts" the output in ways we do notice, badly enough that it becomes a cost drag on training to train it out.

But you can improve on that pretty much with feedback: train a small network to recognize the corruption, and then feed corrupted images back in as negative examples to teach the main model that those specific artifacts are particularly bad.
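
As a sketch of what such a feedback detector could look like (entirely hypothetical: just a tiny binary classifier trained on pairs of clean images and the same images run through a poisoning tool):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CorruptionDetector(nn.Module):
    """Deliberately tiny classifier: does this image carry the kind of
    pixel-level perturbation these tools add? (0 = clean, 1 = perturbed)"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)  # raw logit; > 0 means "probably perturbed"

def training_step(detector, optimizer, images, labels):
    """`images`: batch mixing clean and poisoned samples; `labels`: 0 or 1."""
    optimizer.zero_grad()
    loss = F.binary_cross_entropy_with_logits(detector(images).squeeze(1),
                                              labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```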

Having to pick up and label small sample sets of the kinds of corruption humans will notice is pretty much the worst-case realistic effect these tools will end up having. But each such countermeasure will contribute to training sets that make further corruption progressively harder. Ultimately these tools are strictly limited, because they can't introduce anything that makes the images uglier to humans; you "just" need to teach the models more about the limits of human vision, and in the long run that will benefit the models in any case.

[-] nandeEbisu@lemmy.world 11 points 1 year ago

Haven't read the paper so not sure about the specifics, but if it relies on subtle changes, would rounding color values or down sampling the image blur that noise away?
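
For reference, the kind of preprocessing I have in mind would look something like this: a downsample-and-JPEG round trip that throws away high-frequency detail (the quality and scale values here are just illustrative):

```python
import io
from PIL import Image

def wash(path: str, quality: int = 75, scale: float = 0.5) -> Image.Image:
    """Downsample, upsample back, and JPEG round-trip an image to blur away
    subtle per-pixel perturbations (illustrative values only)."""
    img = Image.open(path).convert("RGB")

    # Downsample then upsample, discarding high-frequency detail.
    small = img.resize((max(1, int(img.width * scale)),
                        max(1, int(img.height * scale))), Image.LANCZOS)
    img = small.resize((img.width, img.height), Image.LANCZOS)

    # A lossy JPEG round trip quantizes away more of the subtle changes.
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```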

[-] RubberElectrons@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

Wondering the same thing. Slight loss of detail but still successfully gets the gist of the original data.

For that matter, how does the poisoning hold up against regular old jpg compression?

ETA: read the paper; they account for this in section 7. It seems pretty robust, at least on paper: by the time you've smoothed out the perturbed pixels, you've also smoothed out the image to the point where the end result is a bit of a murky mess.
