There is abundant anecdotal evidence that nondemocratic regimes are harnessing new digital technologies known as social media bots to facilitate policy goals. However, few previous attempts have been made to systematically analyze the use of bots aimed at a domestic audience in autocratic regimes. We develop two alternative theoretical frameworks for predicting the use of pro-regime bots: one focusing on bot deployment in response to offline protest, the other in response to online protest. We then test the empirical implications of these frameworks with an original collection of Twitter data generated by Russian pro-government bots. We find that online opposition activities produce stronger reactions from bots than offline protests do. Our results provide a lower bound on the effects of bots on the Russian Twittersphere and highlight the importance of bot detection for the study of political communication on social media in nondemocratic regimes.
Under electoral authoritarianism, opposition supporters often abstain from voting because they think that their votes will not make a difference. Opposition parties try to counteract this apathy with informational campaigns that stress how voting can affect the outcome of the election and policy. Evidence from established democracies suggests that such campaigns are generally ineffective, but it remains an open question whether the same holds in elections under authoritarianism, where information is scarce. We study a large-scale campaign experiment by an opposition candidate in Russia’s 2016 parliamentary election, which distributed 240,000 fliers to 75% of the district’s households. Relative to a control flier, priming voters about the closeness of the election or the link between voting and policy outcomes had no practically meaningful impact on turnout or votes. Contrary to some existing theories and the stated expectations of politicians, information about the value of voting appears to be as ineffective in uncompetitive electoral autocracies as it is in democracies.
Computational propaganda and the use of automated accounts in social media have recently become the focus of public attention, with alleged Russian government activities abroad provoking particularly widespread interest. However, even in the Russian domestic context, where anecdotal evidence of state activity online goes back almost a decade, no public systematic attempt has been made to dissect the population of Russian social media bots by their political orientation. We address this gap by developing a deep neural network classifier that separates pro-regime, anti-regime, and neutral Russian Twitter bots. Our method relies on supervised machine learning and a new large set of labeled accounts, rather than on externally obtained account affiliations or the orientation of elites. We also illustrate the use of our method by applying it to bots operating in Russian political Twitter from 2015 to 2017 and show that both pro- and anti-Kremlin bots had a substantial presence on Twitter.
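The supervised approach described above can be sketched in miniature. The actual classifier is a deep neural network trained on a large labeled account set; in this illustration, a small scikit-learn multilayer perceptron on a handful of invented account texts stands in for it, and all texts and labels are hypothetical.

```python
# Hypothetical sketch: a supervised classifier separating pro-regime,
# anti-regime, and neutral accounts. Toy data invented for illustration;
# the paper's model is a deep neural network on a large labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Invented labeled account texts (not real tweets).
train_texts = [
    "glory to the president strong russia stability",
    "the government delivers growth and order",
    "corruption protests freedom arrest opposition",
    "release political prisoners end censorship",
    "weather in moscow sunny traffic heavy today",
    "football match score concert tickets cheap",
]
train_labels = ["pro", "pro", "anti", "anti", "neutral", "neutral"]

# TF-IDF features feeding a small feed-forward network.
clf = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
clf.fit(train_texts, train_labels)

preds = clf.predict(["stability and strong president", "end censorship now"])
print(list(preds))
```

The pipeline generalizes directly: swap in the real labeled accounts and a deeper network, and the fit/predict interface stays the same.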
The question of how regimes respond to online opposition has become a central topic of politics in the digital age. In previous work, we have identified three main options for regimes: offline response, online restriction of access to content, and online engagement. In order to better understand the latter category—efforts to shape the online conversation—in Russia, we study the political use of Russian-language Twitter bots using a large collection of tweets about Russian politics from 2014–2018. To do so, we have developed machine learning methods both to identify whether a Twitter account is likely to be a bot—that is, an account that automatically produces content based on algorithmic guidelines—and to classify the political orientation of that bot in the Russian context (pro-Kremlin, pro-opposition, pro-Kyiv, or neutral).
We find that despite public discussion that has largely focused on the actions of pro-Kremlin bots, the other three categories are also quite active. Interestingly, we find that pro-Kremlin bots are slightly younger than either pro-opposition or pro-Kyiv bots, and that they were more active than the other types of bots during the period of high Russian involvement in the Ukrainian crisis in 2014. We also characterize the activity of these bots, finding that all of the political bots are much more likely than the neutral bots to retweet content produced by other accounts. However, neutral bots are more likely to produce tweets with content identical to that produced by other bots. Finally, we use network analysis to show that the sources of retweets from Russian political bots are mass media and active Twitter users whose political leanings correspond to the bots’ political orientation. This provides additional evidence in support of the claim that bots are mostly used as amplifiers for political messages.
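The retweet-network idea above reduces, at its simplest, to counting in-degree: which accounts attract the most bot retweets. The sketch below (standard library only) uses invented bot and source names; the paper's analysis is far richer, but the amplification logic is the same.

```python
# Illustrative sketch: rank the accounts most retweeted by political bots.
# All account names and edges are invented for illustration.
from collections import Counter

# (bot, retweeted_source) edges from a hypothetical retweet network.
retweet_edges = [
    ("bot_a", "state_news"), ("bot_b", "state_news"), ("bot_c", "state_news"),
    ("bot_a", "pro_kremlin_blogger"), ("bot_d", "pro_kremlin_blogger"),
    ("bot_e", "opposition_outlet"),
]

# In-degree of each source = how many bot retweets point at it;
# the most-amplified sources come first.
in_degree = Counter(source for _, source in retweet_edges)
top_sources = in_degree.most_common(2)
print(top_sources)  # → [('state_news', 3), ('pro_kremlin_blogger', 2)]
```

In a full analysis, one would also partition the bots by political orientation and compare the resulting source rankings across camps.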
Digital propaganda of the Russian government seeks to insulate Putin’s leadership from any domestic challengers and aid in his foreign policy ventures, which increasingly pit Russian interests against the West. Yet the propaganda tools, including trolls and bots, were conceived and perfected in the pockets of political competition and the globally integrated market economy still left in Putin’s Russia. I discuss how the vibrant Russian blogosphere, left unattended by the government and laser-focused on taking over the traditional media, created the demand for sophisticated online propaganda and censorship tools. I also discuss how the advanced Russian online media and tech sector helped to meet this demand. I conclude with a preliminary report on the detection and exposure of government propaganda online, which could be applicable beyond Russia.
We introduce a novel classification of strategies employed by autocrats to combat online opposition generally, and opposition on social media in particular. Our classification distinguishes both online from offline responses and censorship from engaging in opinion formation. For each of the three options — offline action, technical restrictions on access to content, and online engagement — we provide a detailed account of the evolution of Russian government strategy since 2000. To illustrate the feasibility of researching online engagement, we construct and assess tools for detecting the activity of political “bots,” or algorithmically controlled accounts, on Russian political Twitter, and test these methods on a large dataset of politically relevant Twitter data from Russia gathered over a year and a half.
This review is divided into three parts. We begin by addressing the primary tactics for spreading disinformation online: censorship; hacking and sharing; the manipulation of search rankings; and the use of bots and trolls to directly share information. In the second section, we summarize the current state of the ever-growing literature on how bots and trolls have been employed in the disinformation sphere, and provide a short technical discussion of the current state of bot and troll detection techniques. In the final section, we look at some of the underlying characteristics that make social media platforms inherently susceptible to disinformation campaigns, namely the dependence on ad revenue and the use of optimization algorithms.
Automated and semiautomated Twitter accounts, known as bots, have recently gained significant public attention due to their potential interference in the political realm. In this study, we develop a methodology for detecting bots on Twitter using an ensemble of classifiers and apply it to study bot activity within political discussions in the Russian Twittersphere. We focus on the interval from February 2014 to December 2015, an especially consequential period in Russian politics. Among accounts actively tweeting about Russian politics, we find that on the majority of days, the proportion of tweets produced by bots exceeds 50%. We identify bot characteristics that distinguish them from humans in this corpus, and find that the software platform used for tweeting is among the best predictors of bots. Finally, we find suggestive evidence that one prominent activity bots were involved in on Russian political Twitter is the spread of news stories and the promotion of the media outlets that produce them.
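The ensemble approach named above can be illustrated with a minimal sketch. The features, thresholds, and component models here are invented stand-ins (the actual feature set and classifiers are described in the paper); the example merely shows soft-voting over two base learners, with a flag for the tweeting platform included to echo the finding that the platform is highly predictive.

```python
# Hedged sketch of an ensemble bot detector. Features are hypothetical:
# tweets_per_day, account_age_days, and a flag for an automation-friendly
# tweeting platform. Two base classifiers are combined by soft voting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

# Columns: tweets_per_day, account_age_days, uses_automation_platform
X = np.array([
    [300, 30, 1], [250, 10, 1], [400, 5, 1], [180, 60, 1],   # bots
    [5, 2000, 0], [12, 900, 0], [3, 1500, 0], [8, 1200, 0],  # humans
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = bot, 0 = human

# Soft voting averages the predicted class probabilities of the members.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X, y)

preds = ensemble.predict(np.array([[350, 7, 1], [6, 1800, 0]]))
print(preds)  # → [1 0]: high-volume young account flagged as bot
```

Soft voting is a natural choice here because averaging probabilities lets a confident member outvote an uncertain one, which tends to stabilize predictions on noisy account-level features.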
We appreciate the contributions made by the two commentators on our paper (Alekseeva, 2010; Sundstrom, 2011) to the understanding of the trends in the Russian third sector. We believe that this brief discussion has simultaneously revealed the common ground among the different views on the Russian third sector and clarified the important disagreements and open questions in its study. In addition, we welcome the multilinear approach to the broad domain of the Russian third sector presented in the commentaries.
Our paper ‘The Changing Models of the Russian Third Sector: Import Substitution Phase’ (Jakobson & Sanovich, 2010) presented a theoretical framework for the appraisal of the third sector’s evolution in Russia. Looking at the demand for and supply of key resources and institutions of the third sector in the late USSR and Russia, we identified three consecutive models of the Russian third sector: the latent growth model in the late USSR, the import-dependent model in the 1990s, and the rooted model, in place since the beginning of the 2000s. All of them were analysed in four dimensions: developmental driving forces, sector structure, dominant organizational culture, and relations with the state.
Following the chronological organization of our original paper and both of the commentaries, we start with the issues related to the latent growth and import-dependent models. We then look at the current rooted model in greater detail and address the most interesting question, raised in both commentaries, about the relationship between third sector development, the role of the middle class, and democracy. Finally, we offer a brief note on the latest trends in the Russian third sector and broader civic activity and outline several open questions for further investigation.
This article offers and evaluates a theoretical framework for the appraisal of the third sector’s evolution in Russia. Its history over the preceding 50 years is presented as a succession of three models — latent growth, import-dependent and rooted — each examined in four dimensions: developmental driving forces, sector structure, dominant organizational culture and relations with the state. The character of the models and the transitions between them are explained in terms of the demand and supply of the sector’s resources and institutions. Major attention is given to the rooted model, which is presently taking shape. This versatile and problem-laden process is analysed on the basis of civil society monitoring conducted with the authors’ participation since 2006. This analysis reveals rather intensive import substitution of the resources and institutions of the sector and the emergence of prerequisites for its sustainable development. Their realization depends, however, on the state of the economic, social and cultural environment and requires the elimination of some political obstacles.