The new copy-protection…

Commodore 64 computer with floppy drive and floppy disk

I regularly see articles going around about tools to fight image-generation AIs, either by masking an artist’s style so that it cannot be copied, or by poisoning models like Stable Diffusion that are trained on the image. I don’t think this will work, simply because it is the nth incarnation of copy protection. The idea has been around for more than 40 years, with the first examples appearing in programs for 8-bit computers – typically games. Most of them got broken by hackers, and they often caused problems for legitimate users by preventing backups or creating compatibility issues.

The basic goal of copy protection is schizophrenic: the data needs to work in the good case and not work (or work badly) in the bad case. The computer should be able to load the game from the floppy drive to run it, but not be able to write it back onto another floppy. The music should be playable on an audio player, but not readable by a computer’s CD drive, so it cannot be copied – which meant you could not play the music on a computer, or on later audio devices built around a computer CD drive. In the present case, the image file should be parsable by standard software and look like the original artwork to humans, but behave differently when looked at by an AI algorithm.

While in theory there is a clear good case (a human looking at the picture) and a clear bad case (an AI learning from the image), in practice there are many intermediate scenarios where algorithms process images: generating previews and thumbnails, classifying pictures, describing them for blind people, and so on. So, like all previous copy-protection systems, there is a serious risk of collateral damage.

The authors of the article mention the risk of bad actors using such algorithms, and this risk is very real: images that foil and confuse automatic classification could, ironically, be used to help distribute copyrighted images. More generally, they could be used to transmit images that are illegal or violate some standard without being detected, or, on the contrary, to trigger moderation of content that looks harmless to humans, either as a denial-of-service attack or for political posturing. If this happens, poisoned images won’t just be avoided by the crawlers that train AI, they will be rejected by all systems: if you can’t automatically determine what an image is, it is safest to assume the worst.

This assumes, of course, that these algorithms actually work and can be neither circumvented nor detected. This would be particularly hard for image files, because whatever imperceptible change the algorithm makes needs to persist through all the other more or less imperceptible transformations that images are subjected to on the internet: rescaling, compression, conversion into different formats. The other problem is time: once an image is online, it can stay there for quite a while; crawlers will evolve, but the image won’t change by itself.
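As a concrete illustration, here is a minimal sketch, assuming Python with Pillow and NumPy; the file name, target size and JPEG quality are placeholders. It reproduces the kind of lossy round trip most sites apply (downscale, re-encode as JPEG) and prints the pixel-level noise floor that any “imperceptible” perturbation would have to survive:

```python
import io

import numpy as np
from PIL import Image

# Hypothetical input file; any PNG or JPEG artwork would do.
original = Image.open("artwork.png").convert("RGB")

# Downscale as a site or CDN might (bounded to 1024 px on the long side).
resized = original.copy()
resized.thumbnail((1024, 1024))

# Re-encode as JPEG at a typical web quality, then decode it again.
buffer = io.BytesIO()
resized.save(buffer, format="JPEG", quality=80)
roundtripped = Image.open(io.BytesIO(buffer.getvalue()))

# Measure how much the pixels moved from recompression alone; a protective
# perturbation below this noise floor is unlikely to survive the trip.
before = np.asarray(resized, dtype=np.int16)
after = np.asarray(roundtripped, dtype=np.int16)
print("mean absolute pixel change:", np.abs(before - after).mean())
```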

Data annotation would be a large challenge of its own: just adding some EXIF tags with the origin of the work and its license would be very helpful and break far fewer systems, yet even that would be a major undertaking. Processing every image with a transformation algorithm would be a huge logistical undertaking on top of that.
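By contrast, tagging is the kind of change a script can apply with off-the-shelf tools. A minimal sketch, assuming Python with Pillow; the file names, author and license strings are placeholders:

```python
from PIL import Image

# Open the artwork (placeholder file name) and prepare EXIF metadata.
img = Image.open("artwork.png").convert("RGB")
exif = Image.Exif()
exif[0x013B] = "Jane Artist"                                      # Artist
exif[0x8298] = "(c) Jane Artist, all rights reserved"             # Copyright
exif[0x010E] = "Original artwork; not licensed for AI training"   # ImageDescription

# Save a tagged copy; standard tools and crawlers can read these fields.
img.save("artwork_tagged.jpg", exif=exif.tobytes())

# Read the tag back to check it survived the save.
print(Image.open("artwork_tagged.jpg").getexif().get(0x8298))
```

A crawler still has to choose to honour such tags, but the image itself keeps working in every viewer, thumbnailer and screen reader.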

More generally, whatever hack is used to poison the AI can be undone, because as long as a human can see the image, the information is there. All the clever hacks deployed for copy protection in the past were broken for the same reason: you cannot have data that is both readable and unreadable. I suspect that, as often happens, the problem these algorithms try to solve will go away by itself faster than a functional solution can be deployed: Adobe Firefly has been trained on a clean (non-copyrighted) corpus of images, and other implementations will follow suit. There was a point in time when media streaming was largely copied content…
