Often Wrong

Status Games and the Future of Social Networks

There seem to be more and more services popping up that let you generate fake vacation pictures with AI, as covered by The Verge. At first I thought, "why would anyone want to use this?" But when I saw Adam Mosseri's end-of-year thoughts about Instagram (also summarized by PetaPixel), it made sense. In this post, I want to go through the things I agree with Adam about, and one important point where I disagree.

The death of the feed

First, a brief aside about Instagram. The main feed has been in decline over the past few years. This has been covered by many tech outlets, and even alluded to by Instagram's top brass. In my own experience, people are no longer "real" in the feed. They treat it as a carefully curated scrapbook of sorts. It is not considered weird to post vacation photos to the feed long after the vacation; while the vacation is happening, you post to stories (perhaps behind the Close Friends filter) or share via DMs. In fact, the trend of celebrities maintaining a public-facing profile alongside a private one is starting to seep into my circle as well. The main account is where you post to your extended circle of friends and family; the private one is for people who actually know you.

The main feed's turn into this song and dance was driven by community and social norms (and in part by Instagram tweaking the feed itself). AI-generated content feels like the final catalyst: it turns what was a social-norm problem into a technological one, as AI gets better at faking photos (obligatory callout: what is a photo?).

In a world where the main feed becomes a stage for crafting a public image of yourself, the endgame is one where you can no longer tell which parts of the feed are real. With AI-generated content, it feels like we have crossed the Rubicon. The main feed is no longer just a curated image of yourself based on what you have done; it might be AI-generated posts of you doing things, based on what you want to signal to your network.

What is real?

Adam Mosseri says rawness in photos -- shaky framing, blurry candids -- is becoming a (temporary) signal of realness. He admits that going forward, the "who" behind a post will start to matter a lot more. His proposed solution includes surfacing more signals about the people behind the content, but it also has an institutional component that I take issue with. Quoting Adam,

Platforms like Instagram will do good work identifying AI content, but they'll get worse at it over time as AI gets better. It will be more practical to fingerprint real media than fake media. Camera manufacturers will cryptographically sign images they capture, creating a chain of custody.

Provenance for AI-generated content is a vastly complicated problem. There are numerous attempts (C2PA, the Content Authenticity Initiative) by companies and organizations around the world to establish a chain of trust. These solutions sound good in theory, using advanced cryptographic techniques to ensure security and privacy. But every proposal I have seen so far ends up distributing "trust" among the set of companies that are part of the coalition (see critiques from RAND and the ACLU). I don't know about you, but I would rather not give Instagram even more power to determine what is real and what is not.
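To make the "chain of custody" idea concrete, here is a toy sketch of my own (not how C2PA actually works): the capture device signs the original bytes, and every subsequent edit commits to the previous signature, so a verifier can replay the chain. The HMAC key is a stand-in for what would really be an asymmetric device key burned into hardware; all names here are hypothetical.

```python
import hashlib
import hmac

# Placeholder key; a real camera's signing key would never leave the device,
# and real systems use asymmetric signatures with embedded manifests.
CAMERA_KEY = b"camera-secret"

def sign_capture(image_bytes: bytes) -> str:
    """The camera 'signs' the original capture."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def extend_chain(prev_sig: str, edited_bytes: bytes) -> str:
    """Each edit commits to the previous signature, forming a chain of custody."""
    return hashlib.sha256(prev_sig.encode() + edited_bytes).hexdigest()

original = b"raw sensor data"
sig0 = sign_capture(original)

edited = b"raw sensor data, cropped"
sig1 = extend_chain(sig0, edited)

# A verifier who trusts the camera key can recompute the whole chain.
assert hmac.compare_digest(sig0, sign_capture(original))
assert sig1 == extend_chain(sig0, edited)
```

Note what the sketch makes obvious: verification only works if you trust whoever holds the keys, which is exactly the concentration of power the coalition critiques point at.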

Besides, if the context around what is posted and who is posting it is going to matter, that is fundamentally incompatible with a model of global truth. Context is highly specific to the intended audience and not something that can easily be conveyed to people outside that bubble. Treating this as a problem that can be solved top-down is the wrong approach. It is a human problem, one that is better solved bottom-up.

The answer is usually human

Before I describe what I mean, note that the problem of identifying what is original may be new to Instagram, but it has long existed in other areas, such as luxury goods. Consider Rolex (or any similar luxury goods maker). Rolex watches have had dupes forever, and despite many attempts, Rolex has not succeeded in stopping people from making fakes. If anything, fakes have gotten much better over the years. And yet the market for Rolexes has not collapsed. There is research arguing that fakes can actually increase the payoff (in terms of social status) of owning the real thing, provided there is still a way to distinguish the real ones from the fakes.

Luxury companies have tried many technological solutions to combat this problem, going as far as launching a blockchain consortium called Aura (of course they did). Needless to say, none of these systems has been all that successful (the jury may still be out on Aura). The community adapted by shifting its evidence from the product itself -- the logo, the markings -- to the network around it: who you bought it from, who will vouch for it, the story that comes with the watch, and so on. Instead of a global truth, we got a network of many small trust graphs.
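One way to picture what "many small trust graphs" means computationally: authenticity becomes reachability in your personal vouching network, not a lookup in a global registry. The following is my own toy illustration, with made-up names and edges.

```python
from collections import deque

# Hypothetical vouching graph: each person maps to the people they vouch for.
vouches = {
    "me": ["alice", "dealer"],
    "alice": ["bob"],
    "dealer": ["collector"],
    "bob": [],
    "collector": [],
}

def trusted(source: str, seller: str, max_hops: int = 2) -> bool:
    """Is `seller` within `max_hops` vouching steps of `source`? (BFS)"""
    frontier = deque([(source, 0)])
    seen = {source}
    while frontier:
        person, hops = frontier.popleft()
        if person == seller:
            return True
        if hops == max_hops:
            continue
        for friend in vouches.get(person, []):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, hops + 1))
    return False
```

In this toy graph, "collector" is trusted (reachable through "dealer" in two hops) while a stranger with no vouching path is not -- and crucially, every person runs this check over their own graph, not a shared one.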

To bring it back to Instagram, posting to the main feed is not done in isolation. There are going to be conversations around it in the DMs. Over time, the community can figure out whether the story lines up with what was posted to the main feed, creating many small trust graphs. As tempting as it might be, the answer to Instagram's problem should not be more technology and more centralized power, but a human one. Realness should not be a watermark, but a relationship.

If this resonated with you, say hi. I read every reply.

#AI #Meta