Priors

Priors: A ‘handbill’ proclamation and commentary
Digital print on paper, handbill enlarged from A4 to life‑sized portrait.
Page 59, Getty Images v. Stability AI (2025).
An AI-generated image cited in a UK High Court judgement.
Prompt: “Old donald trump behind bars in a jail, news photo.”

The watermark appears. Not hidden, not accidental. A statistical prior – the Getty logo repeated across millions of training images until it became pattern. The model’s inheritance made legible.

For once, the figure in the frame is not the issue. The case does not turn on likeness or reputation. It is about being liable, not about libel. It turns on the mark. In the judgement, the watermark-ness is isolated, stripped of context, reduced to a discrete sign. A surface becomes evidence. The court considers whether this constitutes use of a sign identical to a registered trademark in the course of trade.

Nothing is in shadow. The watermark is as visible as overuse, as blunt as a corporate logo, as legible as a habit. It recurs because Getty Images dominate certain genres (like the “news photo” of the prompt), because the dataset scraped them uncritically, because repetition fossilised into the denoising process. This is not mystery. It is infrastructure. Statistics, licensing regimes, dataset curation – all printed on the surface.

Priors (criminal patterns) treats the legal document as aesthetic material. It foregrounds what discourse around AI often mystifies: the banal, procedural reality of synthetic image production. Accumulation without depth. Corporate sludge given juridical form. The watermark as prior art in the sense of cultural precedent, legal prehistory, probabilistic inheritance.

The force lies in an awkward literalness. The mark appears, is taken seriously, vanishes when training shifts and corporate deals are done with rights management companies. Its presence and absence both signal infrastructure. Not stable genius.

Court proceedings extract:

“214. Although Stability points out, correctly, that Getty Images has carried out no analysis of the frequency of use of this phraseology by real world users, or the circumstances in which it would, or would not, contribute to the appearance of a watermark*, there is one example of a real world user generating an image with a watermark* using this form of prompt in the evidence: an exhibit to Ms Varty’s statement shows two images of Donald Trump behind bars generated by Stable Diffusion v1.5 (which is not in issue in this case) with the caption “Old donald trump behind bars in a jail, news photo”. I reproduce one of those images below (“the Donald Trump Image”):

215. Ms Varty was unable to provide any information as to the identity of the user or why he or she had added the words “news photo”.

216. Although there is no evidence of any of the Models in issue being used in real life to generate images using a caption which includes the words “news photo”, I consider the Donald Trump Image to provide evidence that a real world user of a diffusion model has in fact used the words “news photo” to generate an image. I do not find this very surprising. “

Extracted from the court proceedings PDF.

Commentary text posted alongside the handbill:

Priors
Getty Images v. Stability AI, High Court of England and Wales, 2025

The Case: Getty Images sued Stability AI, makers of Stable Diffusion – an AI that generates images from text prompts. The issue: the AI sometimes reproduced Getty’s watermark in generated images because it was trained on millions of Getty-watermarked photos scraped from the internet.

The Evidence: Getty showed real-world examples where ordinary prompts produced watermarked images without requesting them. Early AI versions generated watermarks reliably. Newer versions (after engineers “de-watermarked” the training data) rarely produced them. The Trump image on page 59 of this judgement was the single clearest example: it paired the prompt term “news photo” with a generated watermark.

The Verdict: The court found Stability liable not for scraping Getty’s images, but for controlling how the trained AI behaves. Stability chose the training data and created the system that generates watermarks. Users can’t prevent watermarks from appearing – only Stability could have filtered them out during training. Control over the trained system established liability.

The blown-up court document comes from the case Getty Images v. Stability AI. For once, the figure in the frame is not the issue. The case does not turn on likeness or reputation. It is about being liable, not about libel. It turns on the mark. Donald Trump is incidental. The watermark in the evidentiary image is not treated as expressive or intentional. It is treated as a sign – isolated from context, stripped of meaning, reduced to form.

But the watermark is more than a discrete unit. It is a prior: statistical, cultural, legal, temporal, criminal. It is what precedes synthesis – the model’s inheritance from its training distribution. Getty Images dominate photographic genres (and prompts like “news photo” trigger their style). The dataset scraped them. The logo repeated until it imprinted itself into the denoising process. This is not a hidden pipeline. It is statistics fossilised into behaviour.

The term prior operates in multiple registers simultaneously. In statistical terms, the watermark becomes the model’s ghost image, its inherited bias made manifest. In the visual field, prior art means existing material that conditions authorship claims – what precedes and constrains any gesture toward originality. Here, the watermark reveals algorithmic unoriginality, the fiction of synthetic autonomy. Temporally, a prior is what comes before, what conditions any new act of representation. Legally, it is precedent, prehistory, the training data’s claim on the output.

Criminally, a prior is a previous conviction, a pattern of offending. The pun collapses AI epistemology, copyright discourse, forensic logic, and the spectre of a badly drawn Trump (with his notoriously difficult-to-judge hands) behind bars into a single deadpan gesture.

Nothing is in shadow. Much discourse around AI invokes hiddenness, invisibility, latency, opacity – as if political force requires unearthing something buried. This language re-mystifies what is actually banal, infrastructural, and right on the surface. The watermark is not a secret signal from inside the model. It is simply a pattern that appeared often enough in the training data that the model repeats it. The watermark is as visible as overuse, as blunt as a corporate logo, as legible as a bad habit. There is no mystery. Just too many similar images shaping a model’s behaviour. This is surface reading, not revelation.

This is procedural flattening: the reduction of images to discrete units that can be administratively processed. The judgement asks not whether the watermark-ness communicates, but whether it exists and resembles. Legal systems operate through categorical frameworks, not interpretive depth. What matters is form, resemblance, whether the appearance of this sign constitutes actionable use. Trace becomes potential liability.

The spectacle of AI frames generation as uncanny, creative, mysterious. The reality beneath is mostly tedious. Not spectacular intelligence but bureaucracy at scale. Pattern-following, rule-application, statistical compliance. The watermark repetition is pathetic, unmagical, and extremely mundane. The model isn’t revealing hidden truths; it’s showing the banal habits of its training constraints.

The legal page is both evidence and image. Administrative logic made visible. Power in its most banal form: paperwork scaled across networks, infrastructural habits translated into partial liability, the prehistory of an output laid bare in a logo no one chose but everyone inherits.

The legal document functions as aesthetic material – administrative artefact, evidentiary image, forensic document. The court document, not the synthetic image it contains. The watermark as statistical fossil, corporate residue, legal marker, cultural imprint – all at once. The model isn’t hiding anything; it can’t stop showing it. At a certain point in time these watermarks were present, visible. Now they are not. They vanish when training shifts, when corporate deals are done with rights management companies. Both presence and absence matter equally.

Work about administrative aesthetics, procedural documentation, the bureaucratic afterlife of algorithmic outputs. The AI-generated image is doubly flattened: first by machine learning, which extracts and recombines surface features without regard for context or meaning; then by judicial interpretation, which reduces the resulting composite to a single semiotic element – the sign. The image is arrested, captured, frozen in evidentiary function. It persists not as expression but as evidence, not as commentary but as procedural fact. Its value is bureaucratic, not aesthetic.

The court does not mythologise the AI; it treats it like a faulty copier that stamps a proprietary logo in the margins of everything it prints. The Trump image becomes an emblem of AI’s administrative aesthetics – its reduction to bureaucratic process. It stages no depth, no cunning, no machinic agency – only statistical residue. Its recurrence tells us nothing profound about computational cognition, but everything about dataset logistics, scale, scraping, and the industrial banality of the training pipeline.

Not revelation but documentation of the procedural conditions that produce forensic images. The legal file gives us the opposite of mystique. A machine caught doing exactly what it does, in a context that requires no metaphor – only documentation. The watermark is not an accident to be corrected or a mystery to be deciphered. It is the system showing us, in the plainest possible visual language, exactly what it is.