How much of what we see online is real?
It’s a question we’re all facing, made worse by the fact that people often fail to look closely at the information they consume, and sometimes fire it back into the world without looking at all.
In the case of the millions of fake accounts and bots described by the New York Times over the weekend, the problem has reached such massive levels that if social media giants examined the impact of bots and fake audiences as closely as they have Russian interference in the 2016 presidential election, it’s doubtful they would find anyone who has gone without at least one fake retweet or favorite.
In nature, a healthy ecosystem by definition rejects or minimizes bad actors to ensure variation and longevity. But in the case of social media platforms, this problem can be deceptive, because most tech startups are optimized for growth and growth alone.
It can be hard to tell which came first, the money from Wall Street or the insatiable desire for innovation from Silicon Valley. Either way, “growth is good” has replaced “greed is good” and we’re all paying for it.
And if an ecosystem doesn’t reject a bad actor that’s focused on growth for growth’s sake, that bad actor eventually overwhelms the ecosystem and destroys its value and/or lifeblood.
This is also what happens when cancer turns cells against each other within a person. Either the bad actor is removed or cut off, or it grows bigger than the ecosystem and destroys its overall value (whether that’s value to users on a digital platform, grasslands in Brazil, and so on).
What’s next (besides implosion)
There are a few hedges against the fake audience (and content) problem.
One is regulation: politicians and their constituents demand change, including transparency about accounts and the removal of bots. Politics tends to run in short cycles and lives in an ecosystem with its own major flaw: too much money from lobbyists. But this is a viable option if people become more engaged.
Another hedge is for the people receiving the most legitimate value, like advertisers from large, reputable brands and publishers, to push back. That includes the option to pull their content and presence from social platforms. There is risk in this, but even a few large brands temporarily suspending their advertising and/or organic content to make the demand would likely draw attention to the problem and force some basic changes.
Another tool is one that Twitter has largely ignored: allowing developers to build on top of the platform and solve the problem from the user side.
After pulling the rug out from under developers, Twitter added a new developer portal last year, but it’s mostly for advertisers; at the same time, Twitter refuses to let anyone build tools to address basic problems like hate speech or abusive trolls.
I think we’re likely to see at least one of the major social media platforms fall apart over the next two years, and my guess is that the ones with the fewest open-source principles and/or the least user feedback built into them will be first on the chopping block.