The Bureau of Multiversal Arbitration is an unusual workplace. Maude Fletcher’s alright, though she needs to learn how to turn off caps lock in the company chat. But trying to deal with Byron G Snodgrass is like handling an energetic poodle, and Phil is a bit stiff.
Sorry, that was unclear. Byron G Snodgrass is an energetic poodle. Phil is a plant. A peace lily, I think.
The three work as arbiters, managing a few hundred caseworkers as they carry out the work of the Bureau: scanning through the multiverse for inspiration, information and innovation. Although, if you ask me, the Bureau’s gone a little off-course recently. Is it really a good use of all that technology to set me to work finding the best meal in all of existence?
Let’s part the veil. The BMA is the setting, and title, of a … thing, created by game company Aconite, helmed by Nadya Lev and Star St.Germain. I say “thing” because it’s not clear how best to describe what the pair have made. Calling it a video game summons up all the wrong impressions, but it’s hardly an experience or a toy, either. A larp (live-action roleplay) might be closer if it were live action, but it’s not: BMA is played in a Discord channel, the gamer-focused chat app standing in for the Bureau’s internal Slack. St.Germain calls it a “Discord game”, which works well enough.
The Multiversal Search Engine at the core of the game is actually a carefully managed version of the Stable Diffusion AI image generator. Players are given assignments – like finding that meal – which they use as prompts for the image generator, competing with each other to generate the best responses, with the winning creation, voted on by all players, being stuck on the virtual fridge for everyone to see – and, if you’re lucky, praised by Maude.
It’s one of the most exciting and innovative uses of AI image generation that I’ve seen, and that’s no accident. “A lot of people are villainising this tech,” said St.Germain when I called her this week. “And it is scary, it does incredible things: you type in something and all of a sudden you’ve got this image from another world.” But she was fascinated by the possibilities. “The way I think about it is that this world already exists – you just need to find the things within it.”
That’s the genesis of the game, reframing the hallucinatory aspects of AI creation as a feature, not a bug. Unless you want bugs, of course. Or something more outré still, maybe? Like one of the near-winners for the meal prompt: “A creature with a thousand eyes and a million limbs, cooked in the style of duck à l’orange”.
The game’s narrative also allows St.Germain and colleagues to gently push players away from some of the less savoury aspects of the technology. Trying to generate “real” objects from alternate realities means there is little motivation to strip-mine the creative works of other artists, while prompts are selected to avoid the possibility of generating the gore or explicit content that Stable Diffusion can also pump out (a further filter blocks objectionable words, just in case).
“We’ve done a lot of work in the fiction and curation sides of things to prevent some of those things from happening,” St.Germain says, “but also finding ways to lean into it occasionally – to release the pressure but with something that is maybe a little bit tamer than what some people can do. We have a scenario coming up that’s meant to be an insect confectionary thing. You’re making bug candies. Because we wanted to pick something that some players are gonna want to lean into the gruesomeness of. Giving players the opportunity to say, ‘I’m gonna make a gross thing.’”
Surprisingly, running the Bureau is a full-time job for St.Germain. The Multiversal Search Engine itself is automated, but the non-player characters who turn a simple chatroom into a richly interactive experience – and ensure the players stay on-task and the community stays pleasant – are puppeted by her and her colleagues. “Everybody wants to focus in on, ‘What’s the tech going to do next?’ But the part of this that is the most important, that people are going to really lose sight of for a minute, is that what makes these tools work is the marriage with a human brain. The curation and narrative aspects of creating things, you need a vision to bring it all together. The place that this tech is going to go is when the tech can enable that human vision in a meaningful way.”
As a result, the Bureau is only operating for a month. The game will end next week: as a free experience that takes real labour to continue operating, it can’t run indefinitely. (There’s also the cost of the AI generation itself, although at around $1,000 for the month-long operation, it’s a comparatively small part of the pie.) It may come back in the future, but if you want to experience it before then, the next few days are your last chance.
Maliciously harmful
The UK’s online safety bill is returning to parliament, under its fourth prime minister and seventh DCMS secretary since it was first proposed, back when it was the Online Harms White Paper. That many fingerprints on the bill have left it a monster piece of legislation, bundling in the obsessions of every wing of the Tory party at once.
That sort of triangulation, I’ve written before, has left the bill in a sort of shit Goldilocks zone: one where neither child protection groups nor free speech advocates think it’s a good bill. That either proves that it’s perfectly balanced, or that it’s bad.
It wouldn’t do to simply reintroduce Boris Johnson’s legislation, though, and so a new prime minister means a new version of the bill. On Friday news came that two new offences would be introduced to UK law. One, tackling “downblousing”, cleans up an accidental loophole in an earlier effort to ban “upskirting”. That law mentioned surreptitious photography of “genitals or buttocks”, and so accidentally left some kinds of voyeurism in the clear.
Another, taking aim at explicit “deepfakes”, is interesting on a deeper level. The plan is to outlaw the nonconsensual sharing of “manufactured intimate images”, targeted at images that have been generated using AI to show real people in explicit situations. But distinguishing between a deepfake and an illustration is surprisingly hard: is there a point at which a pencil drawing becomes realistic enough that someone could be sent to jail for it? Or is the act of using a computer to generate the image specifically part of the offence? We’ll find out when the text of the bill is released at some point in the next week.
On Monday evening there was another, more farcical, change. Bowing to pressure from the libertarian wing of the Conservative party, the offence of “harmful communications” has been dropped from the bill (although two similar offences, covering “false” and “threatening” communications, have been retained). The clause had become a lightning rod for criticism, with opponents arguing that it was “legislating for hurt feelings” and an attempt to ban “offensive speech”.
Why farcical? Because to remove the harmful communications offence, the government has also cancelled plans to strike off the two offences it was due to replace – parts of the Malicious Communications Act and Section 127 of the Communications Act, which are far broader than the ban on harmful communications. The harmful communications offence required a message to cause “serious distress”; the Malicious Communications Act requires only “distress”, while the Communications Act is softer still, banning messages sent “for the purpose of causing annoyance, inconvenience or needless anxiety”.
The problem is that these offences, while horrendously broad, are also the only way to tackle very real abuse – and so if they aren’t being replaced with a similar, narrower offence, it could hinder attempts to seek justice for harrowing online harassment.
At the time of publication, it’s not yet clear whether the MPs who pushed for the abolition of the harmful communications offence have realised that their wish has been granted in the most censorious manner possible.
If this email caused you annoyance, inconvenience or needless anxiety, please be assured it wasn’t my intent.
If you want to read the complete version of the newsletter please subscribe to receive TechScape in your inbox every Tuesday.