Long considered a sketchy backwater of online advertising and malware (though an effective one!), spambots have slowly and steadily advanced, powered by technology that makes them more believable, less detectable, and more effective at engaging people online.
This has led to a curious trend: bots are moving out of the primordial soup of hawking dubious medicinal cures and stealing user information into more ambitious efforts to infiltrate and astroturf the political landscape on social networks and beyond. The past few years have seen a number of interesting (and unsettling) operations involving spambots and fake identities revealed. They have featured prominently in political flashpoints abroad, and have been the subject of a few stories domestically as well.
These, of course, are only the incidents that were detected; harder to estimate is the number of deployments that have successfully shaped online discussion and remain (as yet) undiscovered. A deeper problem is assigning responsibility: even when a campaign is revealed, it is often difficult to figure out who launched it in the first place. Both factors raise obvious problems for trusting the “truthiness” of seemingly emergent, organic movements online.
It is plausible to argue that we are seeing only the very beginning of these efforts. Preliminary experiments suggest that even relatively simple swarms of bots can generate significant changes in the structure of online social networks. If automated identities can reliably shape the patterns of relationships among users on these platforms, swarms of them can be deployed to reshape network topology at scale.
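To make that claim concrete, here is a toy sketch of the mechanism; the random-graph model, the triangle-closing behavior, and every parameter below are illustrative assumptions, not a description of any published experiment. A small swarm of indiscriminate bot accounts links itself into a sparse follower network, ordinary users occasionally connect to one another through a shared bot contact, and the human-to-human graph ends up measurably tighter even after the bots are removed:

```python
# Toy simulation of a bot swarm reshaping network topology.
# Assumptions (illustrative only): an Erdos-Renyi "follower" graph,
# bots that follow random users, and users who sometimes close triangles.
import random
import networkx as nx

random.seed(0)
G = nx.erdos_renyi_graph(n=500, p=0.01, seed=0)  # ordinary users, sparse random follows

def mean_distance(graph):
    # Measure path lengths on the largest connected component only.
    core = graph.subgraph(max(nx.connected_components(graph), key=len))
    return nx.average_shortest_path_length(core)

before = mean_distance(G)

# Inject a small swarm of bots, each following a random slice of users.
bots = [f"bot{i}" for i in range(20)]
for b in bots:
    for u in random.sample(range(500), 50):
        G.add_edge(b, u)

# Users who share a bot contact occasionally "close the triangle" and
# connect to each other, so the human-only topology shifts as well.
for b in bots:
    neighbors = list(G.neighbors(b))
    for _ in range(100):
        u, v = random.sample(neighbors, 2)
        if random.random() < 0.3:
            G.add_edge(u, v)

# Remove the bots and see how the human network itself has changed.
G.remove_nodes_from(bots)
print(f"avg. shortest path among humans: {before:.2f} -> {mean_distance(G):.2f}")
```

The specific numbers are meaningless; the point is that even crude, content-free link-building by a small swarm can leave a lasting imprint on the graph of human relationships.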
This has potentially deeper implications than the mere ability to propagate a single news item. Insofar as social platforms increasingly inform the durable norms, behaviors, and perceptions of whole communities (and are relied on by trusted mainstream media to assess “activity on the ground”), social engineering at this deeper level may leave an enduring impact.
Problems beget solutions, of course. Indiana University’s Truthy project, for instance, attempts to assess the reality of memes based on their patterns of propagation on Twitter. However, an inevitable feature of these systems is that knowledge of their detection methods permits the creation of bots that subsequently evade detection.
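To give a flavor of the propagation-pattern approach (this is not Truthy’s actual algorithm; the features and thresholds below are hypothetical stand-ins), one could imagine scoring a meme by who shares it and how synchronized the sharing is, rather than by what it says:

```python
# Illustrative sketch only: scoring a meme by the shape of its spread.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Share:
    user_id: str
    followers: int
    account_age_days: int
    seconds_since_first_post: float

def astroturf_score(shares: list[Share]) -> float:
    """Crude heuristic features; a real system would learn weights from data."""
    if len(shares) < 10:
        return 0.0
    # 1. Sharing done largely by very new accounts.
    new_accounts = mean(1.0 if s.account_age_days < 30 else 0.0 for s in shares)
    # 2. Sharing done by accounts with almost no audience of their own.
    low_reach = mean(1.0 if s.followers < 50 else 0.0 for s in shares)
    # 3. Suspiciously synchronized bursts of activity.
    burst = mean(1.0 if s.seconds_since_first_post < 600 else 0.0 for s in shares)
    return (new_accounts + low_reach + burst) / 3.0
```

The point of the sketch is only that the signal lives in the propagation structure rather than the content, which is also why publishing the exact features hands evaders a roadmap.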
To that end, if the ongoing competition in computer security between those who uncover vulnerabilities and those who patch them is any indication, these bots may be the initial glimmerings of a larger emerging competition between “truth black hats,” who discover and leverage social exploits in online groups, and “truth white hats,” who develop the active infrastructure to “patch” these cognitive weaknesses in the same communities. Whether the strategic advantage in this space of “social security” (sorry) accrues to the astroturfer or to those attempting to block such efforts remains to be seen.
One very obvious way that automated or semi-automated systems can warp public discourse online is by participating in the comment sections of online publications. Even when the contributions are merely trivial and irritating, they take time to skim and consume valuable screen real estate, making it tedious to follow the discussion. The net effect is to reduce real participation on public news sites and discussion fora.
Tools for hosts of such discussions to detect and obliterate such “contributions” would be of great service to humanity.
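As a purely illustrative sketch of what such a tool might look like (every signal and threshold here is hypothetical), a host could flag likely automated comments for moderator review with a handful of crude heuristics:

```python
# Hypothetical comment filter for a discussion host; flags, does not delete.
import re

LINK_RE = re.compile(r"https?://\S+")

def looks_automated(comment: str, author_comment_count: int,
                    duplicates_seen: int) -> bool:
    """Return True if enough crude spam signals fire at once."""
    too_many_links = len(LINK_RE.findall(comment)) >= 3
    near_duplicate = duplicates_seen >= 2          # same text seen elsewhere
    drive_by_account = author_comment_count <= 1   # brand-new commenter
    shouting = comment.isupper() and len(comment) > 40
    return sum([too_many_links, near_duplicate, drive_by_account, shouting]) >= 2

print(looks_automated(
    "BUY CHEAP PILLS http://a.example http://b.example http://c.example",
    author_comment_count=1, duplicates_seen=5))  # True
```

Flagging for human review rather than auto-deleting keeps the false-positive cost low while still pulling the noise out of the main thread.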