Bring on the Bots

There are lots of things that social media bots could do to enrich our online conversation, monitor those in power, shield us from hate speech, and support social movements.

Bots—particularly bots on social media—can’t seem to catch a break in the news lately. First, the Block Bot, a program designed to help Twitter users weed out disliked content and people, simultaneously fell afoul of Richard Dawkins, members of the conservative press, and legal pundits. Next, an article in the MIT Technology Review outlined the ways social bots act as nefarious “fake persuaders” in online marketing and political communication. Forbes then published a lengthy profile of Distil Networks, a company the publication champions as a battler of “bad” bots. Finally, a Slate piece outlined a slew of crooked, “artificially stupid,” yet dangerous uses of automated software agents.

Over the last several years, in fact, journalists have increasingly reported on cases of politicians worldwide using bots during contested elections and security crises to pad follower lists, spam and disable activists, and send out pro-government propaganda.

That unsavory actors are using bots globally to their advantage is not in question. Most stories on this topic, however, fail to ask the bigger question: is it the nature of bots that makes their usage inherently problematic? Or is it the means bots use to achieve their ends, and the intent behind them, that makes them so objectionable?

Deeper digging quickly reveals that there are beneficial bots of all kinds in operation on social media. Bots have been used to facilitate protest and to critique injustice. Consider Zach Whalen’s Twitter bot, @clearcongress, which works to highlight astronomically low congressional approval levels. Or @congressedits, which tweets every time someone at a congressional IP address edits a Wikipedia page. Bots can be used to keep powerful political actors in check.
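
To make the mechanics concrete, here is a minimal sketch, in Python, of the pattern a bot like @congressedits embodies: watch Wikipedia’s public recent-changes feed, flag anonymous edits whose IP address falls inside a watched range, and announce them. This is not the actual bot’s code. The watched IP block is one commonly cited House of Representatives range, and the announcement is a print statement standing in for a call to the platform’s posting API.

```python
import json
import ipaddress
import requests

# Illustrative watch list; a real bot would maintain a vetted set of
# congressional IP ranges. 143.231.0.0/16 is a commonly cited House block.
WATCHED_NETWORKS = [ipaddress.ip_network("143.231.0.0/16")]

# Wikimedia's EventStreams service publishes recent changes as
# server-sent events (SSE), one JSON record per "data:" line.
STREAM_URL = "https://stream.wikimedia.org/v2/stream/recentchange"

def is_watched(editor: str) -> bool:
    """True if the editor string is an IP inside a watched range.
    Logged-in editors appear as usernames, not IPs, and are skipped."""
    try:
        ip = ipaddress.ip_address(editor)
    except ValueError:
        return False
    return any(ip in net for net in WATCHED_NETWORKS)

def watch() -> None:
    with requests.get(STREAM_URL, stream=True) as resp:
        for line in resp.iter_lines():
            if not line or not line.startswith(b"data:"):
                continue
            try:
                change = json.loads(line[len(b"data:"):])
            except json.JSONDecodeError:
                continue
            if change.get("wiki") == "enwiki" and is_watched(str(change.get("user", ""))):
                # Placeholder: a real bot would post via the platform's API here.
                print(f"Anonymous congressional edit: {change.get('title')}")

if __name__ == "__main__":
    watch()
```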

By providing automated monitoring, bots can act as a type of social prosthesis for communities of users online. Communities lacking human users to track and publicize political action can now make use of bots which—in the words of one journalist—radiate information automatically. This substitutes, to some degree, for the role of the current-events-obsessed newshound typically played by humans in a community of users online. This can be important, as in the case of the congressional monitoring bots, or whimsical, as in the case of @stealthmountain, which acts as a synthetic “grammar nazi” of sorts on Twitter.

It is true that these bots may not be able to provide the deep analysis that a professional journalist would, but they generate awareness of issues where there previously was an information vacuum. To that end, well-deployed bots can help resolve an increasingly obvious challenge facing social media platforms: the self-segregating nature of connections online tends to produce echo chambers that prevent people from receiving a diverse set of information. Even in cases where journalists and engaged activists exist and take part in online conversation, bots can work to support these efforts and in some cases surpass them in supplying and processing information.

Bots and autonomous systems can also be used in reverse, to shield users against the emergent group behaviors on social media that work to dismantle productive discourse. James Poulos of the Daily Beast highlighted these sorts of programs in an article written in support of the Block Bot. His argument is that this bot helps users to “see how ‘breaking down boundaries’ isn’t the panacea our creative and optimistic culture so often claims it to be.” Rather, Poulos suggests, the same software used to proliferate spam and manipulate public opinion can be used to limit people’s exposure to toxic, often hateful and abusive, speech online.
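
The underlying mechanism is simple enough to sketch. Below is a minimal, hypothetical illustration of the shared-blocklist pattern that tools like the Block Bot popularized: a community curates tiers of offending accounts, and each subscriber chooses how aggressive a tier to apply to their own feed. The class, tiers, and handles are invented for illustration and are not drawn from any real tool.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class SharedBlocklist:
    # tier number -> set of account handles; lower tiers are the most
    # clear-cut cases, higher tiers are progressively more contested
    tiers: Dict[int, Set[str]] = field(default_factory=dict)

    def accounts_at_level(self, level: int) -> Set[str]:
        """Union of every tier up to and including `level`."""
        merged: Set[str] = set()
        for tier, accounts in self.tiers.items():
            if tier <= level:
                merged |= accounts
        return merged

def filter_timeline(posts: List[Tuple[str, str]],
                    blocklist: SharedBlocklist,
                    level: int) -> List[Tuple[str, str]]:
    """Drop posts authored by anyone on the subscribed tiers.
    `posts` is a list of (author, text) pairs."""
    blocked = blocklist.accounts_at_level(level)
    return [(author, text) for author, text in posts if author not in blocked]

# Usage: a subscriber opting into only the most conservative tier.
curated = SharedBlocklist(tiers={
    1: {"@abusive_account"},      # unambiguous harassment
    2: {"@borderline_account"},   # contested calls
})
timeline = [("@friend", "hello"), ("@abusive_account", "you are awful")]
print(filter_timeline(timeline, curated, level=1))
# -> [('@friend', 'hello')]
```

The tiering is what makes such tools politically fraught in practice: cautious subscribers can opt into only the most clear-cut blocks while others accept broader filtering, and much of the controversy described above centers on where those lines get drawn.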

It may become necessary to deploy these technologies. Twitter, Facebook, and others are unlikely to take aggressive and comprehensive action to resolve issues like harassment and the emergence of echo chambers on their platforms. Though these companies have the most control over their respective platforms, taking action on these issues would force them to wade into the messy politics of playing referee in controversies. By maintaining a position of “neutrality” (some would argue negligence), the platforms ensure that responsibility—and blame—continues to rest on users, and not on the platforms themselves.

Moreover, as bad actors become more effective at using bots to shape social activity online, the need for “good bots” may become ever more pressing. Online social movements may be able to combat bots manually when the bots are obvious and spam messages in predictable ways. They may not be so successful when swarms of realistic-looking identities are used to conduct long-term, subtle campaigns of infiltration.

It is important not to slip into the complacent cocoon of solutionism with this line of pro-bot argument. As some commentators have worried, “good bots” can look like spam and actually erode the social capital of burgeoning movements online. Automation is powerful. As with the deployment of robots in the physical world, the most effective uses will come from careful study and smart designs that are sensitive to the needs and perceptions of communities.

Bots that declare their purpose, and that they are bots, would add a layer of transparency that better sets the expectations of the communities they interact with. Bots might also be designed as a kind of community scaffolding: prodding and encouraging when a movement is small, then deactivating gracefully as people rally to a cause, so as not to introduce spam into the conversation.
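
One hypothetical way to encode that scaffolding behavior is to tie the bot’s posting probability to the level of organic activity in the community, so the bot fades out on its own as humans take over. In the sketch below, the target rate and linear decay are arbitrary illustrative choices rather than any established design, and every post carries a disclosure line in the spirit of the transparency point above.

```python
import random
from typing import Optional

# Hypothetical disclosure attached to every post, so readers always know
# they are talking to software.
DISCLOSURE = "(I am an automated account run by the organizing team.)"

def posting_probability(human_posts_per_hour: float, target: float = 50.0) -> float:
    """1.0 when the conversation is silent, decaying linearly to 0.0 once
    humans are posting at the target rate on their own. Both numbers are
    illustrative knobs a real deployment would tune."""
    return max(0.0, 1.0 - human_posts_per_hour / target)

def maybe_post(human_posts_per_hour: float, message: str) -> Optional[str]:
    """Return a disclosed post while the movement still needs prodding,
    or None once organic activity makes the bot redundant."""
    if random.random() < posting_probability(human_posts_per_hour):
        return f"{message} {DISCLOSURE}"
    return None  # deactivate gracefully: stay quiet rather than spam

# Usage: at 2 human posts/hour the bot almost always chimes in;
# at 48 posts/hour it almost never does.
print(maybe_post(2.0, "March details and signup link: ..."))
print(maybe_post(48.0, "March details and signup link: ..."))
```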

The failure of the “good bot” is a failure of design, not a failure of automation. Our discourse would be more productive if it focused on the qualities that make bots the right tool for the job from a social and ethical standpoint, rather than ceding the promise of this technology to those who would use it for ill.

Samuel Woolley is the program manager of the “Political Bots” Project, a fellow at the Center for Media, Data and Society at Central European University, and a Ph.D. student in the Department of Communication at the University of Washington. He is based in Seattle and can be reached at samwooll@uw.edu and on Twitter @samuelwoolley.

Tim Hwang directs Intelligence and Autonomy at Data & Society, a research initiative addressing the cross-arena challenges of policymaking around intelligent systems. He is based in San Francisco and can be reached by e-mail at tim@datasociety.net and on Twitter @timhwang.