Russian Social Media Disruption Report
If you’ve spent any time in online comment sections over the past year, it’s a near certainty that you’ve seen someone called, or been called yourself, a “troll,” a “shill,” or maybe even a <gasp> “Russian.”
Accusations like these are rampant online, as is the paranoia which fosters them, thanks in no small part to a cloud of sensationalist media coverage and our seemingly intrinsic need to find bad guys lurking around every corner…
Part 1: Disrupting democracy
Showtime’s most recent season of Homeland — season 6, episode 9 (2017) — portrays a shadowy quasi-governmental, private tech startup called the Office of Policy Coordination. Located six floors underground in a nondescript office building outside Washington, DC, the company is found to be responsible for secretly running a massive army of phony sock-puppet accounts across social media, posing as ordinary people in order to advance a nefarious political agenda.
Here’s a two minute clip for reference:
Airing originally in March of this year, the subplot is obviously inspired by events which transpired in cyberspace around the 2016 U.S. presidential election (along with Brexit, and possibly others), where malicious state-sponsored actors allegedly attempted to disrupt the democratic process.
We know the real world analogue of Homeland’s fictional Office of Policy Coordination to be the now infamous Internet Research Agency, or as they’re sometimes called in the media, the ‘Trolls from Olgino.’
Given the confusing, conflicting, and convoluted information out there about this alleged Russian interference, I took it upon myself to do the only logical thing any normal person would do: make a Carrie Mathison-style “crazy wall” inside my shed next to my chicken coop to try and sort it all out.
Okay, sure, it’s not quite as crazy as Carrie’s bipolar-driven Abu Nazir wall, but it’s my first time exteriorizing my own inner crazy wall. So cut me some slack. I had to start somewhere. And I can definitely say: the process was not only extremely useful in developing my understanding, but also oddly very therapeutic.
Part 2: Persona Management Software Systems
In the subsequent Homeland episode (s06e10), Carrie’s friend and accomplice Max (Maury Sterling) states: “I’ve heard rumors of social media boiler rooms like this in Russia and in China, but not here. And definitely not on this scale.”
I don’t want to tv-splain too much, because I know this is just drama. But based on my research into the subject, using all open source, publicly available information, which I’ve documented with near-religious zeal over the past three weeks, Max’s statement overlooks some important facts likely known to those working IRL in the security and intelligence fields.
Namely, that in 2010, the U.S. Air Force posted a solicitation to build what amounts to exactly the type of sock-puppet app portrayed in Homeland. Or as they called it on the Federal Business Opportunities website, Persona Management Software (fbo.gov, reproduced on Archive.org, June 2010).
It is, essentially, a social media and propaganda battle-station. From the solicitation:
“Software will allow 10 personas per user, replete with background , history, supporting details, and cyber presences that are technically, culturally and geographacilly [sic] consistent. Individual applications will enable an operator to exercise a number of different online persons from the same workstation and without fear of being discovered by sophisticated adversaries. Personas must be able to appear to originate in nearly any part of the world and can interact through conventional online services and social media platforms. The service includes a user friendly application environment to maximize the user’s situational awareness by displaying real-time local information.”
Through a combination of VPNs, untraceable IPs, and traffic routed through regional proxies, such a service would enable mass identity-spoofing, using persistent personas, each of which has a detailed personal and social media character history for complete verisimilitude.
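As a rough sketch of what “geographically consistent” personas imply technically: each persona gets pinned to a proxy in its claimed home region, so its apparent IP origin always matches its backstory. The persona names and proxy endpoints below are invented purely for illustration.

```python
# Hypothetical sketch of persona-to-proxy routing: each persona is pinned
# to a proxy in its claimed home region so that IP geolocation stays
# consistent with its backstory. Names and endpoints are invented.
from urllib.request import ProxyHandler, build_opener

PERSONAS = {
    "jane_doe_ohio": {"region": "us-east", "proxy": "http://proxy-us-east.example:8080"},
    "hans_m_berlin": {"region": "eu-de", "proxy": "http://proxy-eu-de.example:8080"},
}

def opener_for(persona: str):
    """Build a URL opener whose traffic exits via the persona's regional proxy."""
    proxy = PERSONAS[persona]["proxy"]
    return build_opener(ProxyHandler({"http": proxy, "https": proxy}))
```

The point isn’t the few lines of plumbing; it’s that nothing here is exotic, which is exactly why the solicitation could plausibly be fulfilled.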
So, how can we know if they are really “Russian?”
Well, you tell me — can we? All we can do is try to piece together the 👣 🔍 clues.
Though another company (Ntrepid) was ultimately awarded the contract, a very relevant 2011 document leak by Anonymous from a security contractor called HBGary Federal fleshed out that company’s own vision for such a persona management system in detail.
“For this purpose we custom developed either virtual machines or thumb drives for each persona. This allowed the human actor to open a virtual machine or thumb drive with an associated persona and have all the appropriate email accounts, associations, web pages, social media accounts, etc. pre-established and configured with visual cues to remind the actor which persona he/she is using so as not to accidentally cross-contaminate personas during use.” …
“These accounts are maintained and updated automatically through RSS feeds, retweets, and linking together social media commenting between platforms. With a pool of these accounts to choose from, once you have a real name persona you create a Facebook and LinkedIn account using the given name, lock those accounts down and link these accounts to a selected # of previously created social media accounts, automatically pre-aging the real accounts.”
The proposal goes on to describe various “character levels” within their system, based on utility and level of content development:
- Level 0: Quick use, no background persona required.
- Level 1: Slightly more fleshed out, with multiple accounts across different services correlated to one another, with privacy set to high on accounts so as not to disclose too much information publicly.
- Level 2: More detailed persistent persona with background; fleshed out with blend of automated and human-generated content history.
- Level 3: Most detailed, developed and realistic; capable of having human-to-human (online) interactions, with multiple correlated social accounts and a realistic personal, and professional background if needed.
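The tiered model reads naturally as a data structure: each level trades setup cost for believability. A minimal sketch (the field names are my own, not from the leaked proposal):

```python
# Minimal sketch of the leaked "character levels": higher levels cost more
# to build but survive more scrutiny. Field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    level: int                     # 0-3, per the leaked proposal's tiers
    linked_accounts: list = field(default_factory=list)  # correlated accounts (levels 1+)
    backstory: str = ""            # personal/professional history (levels 2-3)
    human_operated: bool = False   # level 3: capable of live human interaction

def most_believable(personas):
    """Pick the most developed persona available for a human-facing task."""
    return max(personas, key=lambda p: p.level)
```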
If such persona management software systems were already under development in 2010, we can safely assume they have advanced considerably in the seven years since. To say the least…
Are they at the level of what’s depicted in Homeland’s “Sock Puppets” episode?
Hard to say, without penetrating the secret offices alleged to be using them!
Part 3: Government manipulation of social media
Whether or not our television fantasies hew close to reality, and Americans have been or are being intentionally manipulated by secret factions in the United States (e.g., the “Deep State”), a recent report by Freedom House, a US government-sponsored NGO, presents evidence that the governments of some 30 countries currently use astroturfing techniques to manipulate opinion on social media.
For the most part, the operations of these covert cyber troops are said to have a domestic focus, with the notable exceptions of Russian interference in the 2016 United States presidential election and the Brexit referendum, likely the French and German national campaigns as well, and more recently the Spanish independence push in Catalonia.
But the story with regards to Russia goes deeper than that…
Much, much deeper.
Reports from inside the troll farm
Over the past several years, operational details from inside the Internet Research Agency have been provided by a series of leaks from former employees, infiltrations by journalists, and break-ins by hacktivists.
- Ex-IRA employee Alan Baskaev described to The Daily Beast in October 2017 an outrageous work environment in which, among other things, the organization allegedly produced a fake Hillary Clinton sex tape intended to go viral.
- Russian media site RBC.ru published in October 2017 a Russian-language exposé of the IRA, which has become something of a canonical source in online discussions of the topic (I used Google Chrome’s auto-translate extension to read it). Some useful context on RBC: their offices were raided by the Russian government in 2016 after they published documents from the Panama Papers connecting Putin’s son-in-law to offshore assets, an episode that ended with the sacking of their then editor-in-chief and the mass resignation of a significant portion of their journalistic staff. RBC was owned until June 2017 by billionaire Mikhail Prokhorov, owner of the Brooklyn Nets basketball team and failed opponent to Putin in the 2012 presidential election.
- Collaborating with Adrian Chen of the NY Times on his seminal June 2015 article, “The Agency,” environmental activist Ludmila Savchuk took a job with the IRA, then documented and leaked information describing the organization’s internal structure and techniques. As in the USAF and HBGary documents, we learn that agency employees used VPNs to mask their location while pushing propaganda talking points, keywords, and targets, supplied by daily technical task sheets, through phony social media accounts.
- Some of Savchuk’s original leaked documents can be seen in this Mr7.ru March 2015 Russian-language article.
- In 2014, a Russian hacker collective called Shaltai Boltai (“Humpty Dumpty”) and known in English as Anonymous International, leaked a trove of internal Internet Research Agency documents which Buzzfeed discusses in this June 2014 article. The group’s documents indicated payments to the IRA from a holding company called Concord, owned by restaurateur and Putin confidant Evgeny Prigozhin. (Prigozhin is currently under sanction by the U.S. Treasury Department.) In addition to extensive infiltration of Russian and Ukrainian media sites, IRA employees are said to have carefully studied comment systems of major platforms and news outlets around the world, and tested content enforcement policies by moderators, to ensure their work wouldn’t be banned. Their leaked documents and blog site were shortly thereafter blocked in Russia by Roskomnadzor, the federal media oversight agency, as theft of personal data. The site and some of the (Russian language) documents are mirrored here.
- In September 2013, Russian newspaper Novaya Gazeta infiltrated the organization and published a report, which included names of key personnel, such as Alexey Soskovets. Soskovets is said to have extensive contacts and involvement with Nashi and other youth groups, as well as business ties to ruling party United Russia’s Camp Seliger, the activities at which the L.A. Times described in a 2011 article:
“…thousands of young men and women are learning how to be supporters of the ruling United Russia party, future politicians and senior government officials. […] These young people are taught to open up accounts in all social networks, make as many friends as possible and thus spread information with maximum efficiency,” explained Vasily Yakemenko, founder of the Nashi youth group and head of the Federal Agency for Youth Affairs that runs the camp.
- Also from the 2013 Novaya Gazeta reporting, we learn that Soskovets’ own North-Western Service Agency was seeking employees to open offices similar to the Internet Research Agency in Moscow and other cities. It is unknown how many other organizations like the IRA are in operation. In that article, Soskovets discusses using humans in place of bots, because humans are much harder to detect than bots, which platforms can find and suspend easily.
Nashi leaks of 2012
Though not specifically linked to the IRA, the Nashi youth movement leaks of 2012 (which appeared just before Putin’s challenging but successful re-election to a controversial third term) provide supplemental evidence of quasi-governmental youth organizations orchestrating prototypical astroturfing and media-manipulation campaigns, as well as pro-government counter-protests. These are exactly the techniques documented above at the IRA, both on- and offline, but deployed at the time in embryonic form against the Russian anti-election-fraud mass protests of 2011–2013 and events in Ukraine.
We see echoes in BBC reporting from March 2012 of the types of attacks which came to be common place years later during the U.S. presidential election:
“These bots succeeded in blocking the actual message feed with that hashtag,” he wrote. “The rate at which pro-government messages were posted, about 10 per second, suggests they were being done automatically rather than by individuals…”
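The rate heuristic implied by that BBC report is easy to sketch: sustained posting far above plausible human speed (the report cites roughly 10 messages per second) is a strong automation signal. The window and threshold below are illustrative assumptions, not values from the report.

```python
# Sketch of a posting-rate heuristic: flag a stream if more than `threshold`
# posts land inside any `window`-second span. Window and threshold values
# are illustrative assumptions.
def looks_automated(timestamps, window=1.0, threshold=5):
    """True if posting rate ever exceeds `threshold` posts per `window` seconds."""
    ts = sorted(timestamps)
    start = 0
    for end in range(len(ts)):
        # shrink the window from the left until it spans <= `window` seconds
        while ts[end] - ts[start] > window:
            start += 1
        if end - start + 1 > threshold:
            return True
    return False
```

A real platform would combine this with account-level features, but rate alone already separates the ten-per-second hashtag floods from human chatter.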
The facts about the Internet Research Agency
Via the above sources, we can determine a few key facts which can be used to track and organize our data.
- It has held at least two different addresses, both in St. Petersburg: starting sometime in 2013, at 131 Lakhtinsky Prospekt (Olgino district), and moving probably in 2014 to a larger office with more staff at 55 Savushkina.
- Also referenced as sharing this address is an organization called FAN, or Federal News Agency (which Adrian Chen goes into more in his NYT 2015 piece), as well as People’s News, and potentially others which seem to cooperate to some extent in at least aggregating one another’s stories.
- Outside of this, the reported “facts” vary pretty widely. Though sources seem to agree more or less on the overall structure and work carried out by the Agency, staff numbers range anywhere from 50 up to 900 at different times, depending on who is reporting.
- Paid wages well above area norms, participants worked 12-hour shifts as “internet operators,” fulfilling content quotas which varied depending on their section: lower-level social media commentators, more full-fledged bloggers, or producers of other kinds of content such as video.
- Wired reported in September 2017 that the Internet Research Agency was supposedly disbanded officially around 2015 (presumably due to bad press) and renamed Glavset, but still operates out of the same address.
A leaked IRA employee list (in Russian) is reproduced here for reference (I believe the source is the Savchuk leak).
Moscow Information Technologies
Last but not least, as further proof that the knowledge and technology to pull off these types of online campaigns are alive and well in Russia, we turn to the case of Moscow Information Technologies, an IT group which supports the Mayor of Moscow.
In 2014, Anonymous International/Shaltai Boltai also leaked emails between media outlets and the government-linked Moscow Information Technologies, which worked with Mayor Sobyanin to manipulate public opinion about his administration. Among many other activities, the Moscow Times reported in May 2017:
“Sobyanin’s administration heavily invests in swaying the agenda on Yandex.News, Russia’s biggest online news aggregator.
“MIT devised a scheme wherein Moscow’s neighborhood councils (most of them totally loyal to the mayor and to United Russia) set up dozens of similar news websites that are capable of firing off volleys of nearly identical news articles promoting the mayor’s initiatives. This onslaught fools Yandex’s algorithm into thinking that something important is happening. The news aggregator doesn’t differentiate between the sources, and thus assumes there’s a news event that deserves top billing in its ranking system, if hundreds of different outlets are reporting on a single event.”
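The countermeasure to this scheme is equally easy to sketch: near-identical articles from nominally different outlets share most of their word n-grams, so a simple Jaccard similarity over shingles would collapse the volley into a single story rather than counting it as hundreds of independent reports. The similarity threshold here is an illustrative assumption.

```python
# Sketch of near-duplicate detection via word shingles and Jaccard similarity:
# a volley of almost-identical articles collapses into one story instead of
# registering as many independent reports. Threshold is an illustrative choice.
def shingles(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

def is_near_duplicate(a, b, threshold=0.6):
    return jaccard(a, b) >= threshold
```

Production aggregators use faster approximations (e.g., MinHash), but the underlying idea is the same: measure textual overlap, not source count.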
Part 4: Fake news rings
The tactics described by ex-employees of the Internet Research Agency, combined with the Nashi leaks and the Moscow Information Technologies leaks above, paint a technical picture that meshes handily with fake news endeavors around the world, particularly those famously run out of Macedonia.
The Guardian in July 2017 suggested Robert Mueller was looking into possible ties between these types of fake news sites and Russian and far-right websites in the United States in the run-up to the election. Quoting from that article:
“Mattes, a former Senate investigator, did some digging into the sudden phenomenon of eastern European Sanders enthusiasts. He found a spike in activity on the anonymous browsing tool Tor in Macedonia that coincided with the launch of the fake news campaign, which he believes could represent Russian handlers contacting potential east European hosts to help them set up automated websites.”
“He has also found a high degree of apparent coordination in the dissemination of fake news between official Russian propaganda outlets and “alt-right” sites in the US.
“They synchronise so quickly it looks as if they know when a particular story was going to come out,” he added. “And they all parrot the Kremlin narrative.”
Rolling Stone reporting in November 2017 suggests that Macedonian fake news sites were often sourcing material from U.S. based website Breitbart:
“When I traveled to Macedonia last summer, Borce Pejcev, a computer programmer who has set up dozens of fake-news sites — for around 100 euros each — said it wasn’t quite that simple. Macedonians don’t invent fake news stories, he told me. “No one here knows anything about American politics. They copy and paste from American sites, maybe try to come up with more dramatic headline.” Fox News, TruePundit.com, DailyCaller.com, InfoWars and Breitbart, he said, were among the Macedonians’ most common source material (“Breit-bart was best”).”
Another NY Times article, from September 2017, explains how Breitbart’s Stephen Bannon latched onto false news and rumor-mongering out of Twin Falls, Idaho, in the so-called Fawnbrook incident:
“The Twin Falls story aligned perfectly with the ideology that Stephen Bannon, then the head of Breitbart News, had been developing for years, about the havoc brought on by unchecked immigration and Islamism, all of it backed by big-business interests and establishment politicians. Bannon latched onto the Fawnbrook case and used his influence to expand its reach.”
“Other conservative content farms, including WorldNetDaily, maintained ties to the Trump election effort. Campaign finance records show that Great America PAC, a Trump-backing Super PAC, paid WND, known as the largest purveyor of Obama birth certificate conspiracy theories, for “online voter contact.”
At the end of the day, whether all of the above is somehow coordinated or just coincidence is a moot point, since the end effect is largely the same.
Part 5: Micro-targeting
CNN, in September 2017, asked an important question regarding Russia-linked IRA Facebook ad buys targeting Baltimore and Ferguson:
“Senator Mark Warner, the top-ranking Democrat on the Senate Intelligence Committee, said Tuesday that the “million-dollar question” about the Facebook ads centered on how the Russians knew whom to target.”
Speculations are of course rife regarding the nature and connections between the Trump campaign, which was obviously served by disinformation and trolling campaigns, and agents of the Russian government. Did the Russians know which voters in which states to concentrate their efforts on? And if so, how exactly did they get this data? (And at the end of the day, does anybody even really care?)
Though the link is for now tenuous, one avenue of official investigation has pursued the potential role of the big-data company Cambridge Analytica, which first worked on Ted Cruz’s campaign, later on Trump’s, and which may or may not have worked on Brexit. Incidentally, Breitbart’s Bannon was at one time VP of Cambridge Analytica and held a stake in the company worth between $1 million and $5 million.
Here’s a video with a bit more info about CA’s methodology of micro-targeting individual voters based on psychological profile and tailoring campaign messaging directly to them:
Other likely suspects in Trump’s orbit appear to be Jared Kushner and Brad Parscale, who worked on the campaign’s data operation, as well as Michael Flynn, who served briefly in an advisory role for Cambridge Analytica.
(See also: Correct the Record, a Hillary Clinton PAC which used astroturfing techniques.)
Internet monitoring in Russia
Of course, the Russians may not have needed any outside help when it comes to monitoring internet activity. Since 2011, the Russian government has cracked down hard on internet freedoms. For starters, all ISPs in Russia are required by the government to run a system called SORM (Wikipedia), which the Federal Security Service can use to access web traffic:
“It allow[s] the agency to unilaterally monitor users’ communications metadata and content, including phone calls, email traffic and web browsing activity. […] In 2014, the system was expanded to include social media platforms…”
Though it is mysteriously unavailable at the time of this writing, we also have an interesting 2014 solicitation by the Russian government for monitoring software, titled in part (per auto-translation), “automatic selection of media information, studying the information field, monitoring blogs and social media.”
On this, iz.ru published in January 2014 a description:
“Information materials will be preliminarily processed, they will be grouped on specific topics: the president, the administration of the president’s administration, the prime minister, opposition protests, governors, negative events in the country, incidents, criticism of the authorities.”
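At its simplest, the topic grouping that solicitation describes is keyword routing: each item lands in whichever monitored theme its text matches. A toy sketch, where the keyword lists are my own illustrative stand-ins, not the real system’s configuration:

```python
# Toy sketch of the topic bucketing the solicitation describes: route each
# item to whichever monitored theme its text matches. Keyword lists are
# illustrative stand-ins, not the real system's configuration.
TOPICS = {
    "president": ["president", "kremlin"],
    "opposition protests": ["protest", "rally", "opposition"],
    "criticism of the authorities": ["corruption", "criticism"],
}

def classify(text):
    """Return every monitored topic whose keywords appear in the text."""
    lowered = text.lower()
    return [topic for topic, keys in TOPICS.items() if any(k in lowered for k in keys)]
```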
Part 6: Signals, indicators & detection
Facebook just announced that by the end of the year they will offer a tool for users to see whether they liked or followed accounts or pages linked to the Internet Research Agency. According to their written testimony before the Senate Select Intelligence Committee and an official blog post, Facebook has identified and suspended 470 accounts or pages. Twitter testified to having identified and suspended, with the help of third-party information, some 2,752 accounts (full list).
Without access to the technical data those platforms must hold, we can still make educated guesses about the signals and indicators Facebook, Twitter, and Google might use to identify potentially malicious state-sponsored accounts:
- IP geolocation (made unreliable by VPNs, of course)
- Currency used for transactions (can be faked as well)
- Credit card / bank account country
- Phone number (area code)
- Shared Google Analytics ID across multiple sites
- Shared username/email across platforms
- Characters unique to a particular language
- Shared domain host IP
- Outbound links
- Inbound links
- Timezone of account activity
- Photo EXIF data
- Reverse-searching images
- Shared activity patterns across accounts
- Network connections in common on social media
- Language cues: odd English vocabulary, construction, punctuation
- Known talking points (themes) associated with influence campaigns
- Hashes of known bad content
- Third-party information sources
- Statistical analysis of search frequency
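To make the idea concrete, here is a naive sketch of how signals like these might combine into a review queue. The signal names, weights, and threshold are all invented for illustration; whatever the platforms actually run is far more sophisticated and undisclosed.

```python
# Naive sketch of combining such signals: each observed indicator adds a
# weight, and accounts above a threshold get queued for human review.
# Signal names, weights, and threshold are all invented for illustration.
WEIGHTS = {
    "vpn_ip": 1.0,                 # geolocation masked by a VPN
    "timezone_mismatch": 2.0,      # active hours inconsistent with claimed location
    "shared_analytics_id": 3.0,    # same analytics ID across "unrelated" sites
    "known_talking_points": 2.5,   # recycles themes from known campaigns
    "language_cues": 1.5,          # odd vocabulary, construction, punctuation
}

def risk_score(observed):
    """Sum the weights of every signal observed on an account."""
    return sum(WEIGHTS.get(signal, 0.0) for signal in observed)

def flag_for_review(observed, threshold=4.0):
    return risk_score(observed) >= threshold
```

Note that no single signal is decisive; it’s the combination (a VPN exit plus recycled talking points plus a shared analytics ID) that raises the flag.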
Part 7: Key takeaways
- Making crazy walls is super fun. It’s easy to get carried away — so look for the facts.
- All of the Russian people I’ve met are very nice! To me, this is not really a story about any one country at all… the implications are global, and the actors interchangeable in the middle- and end-games.
- Information warfare is clearly, definitely, now ‘a thing.’ Anyone can pretend to be anyone else, from anywhere.
- “Lots of governments are doing it.”
- Russian media outlet Vedomosti said in May 2014 that the techniques pioneered by the Russian government proved to be so successful at home after the mass protests that they exported them to the European and American markets.
- Vladimir Putin has long maintained that the internet is a CIA ploy, a claim that serves as an excuse to enforce ever-tighter controls over the technology. He also claims that color revolutions and mass protests against the Russian government (as well as the Arab Spring) were orchestrated by foreign actors.
- I haven’t gone down the 🐇 🕳 of whether Putin’s claims are true, but the development of such tools around 2010–2011 in the United States for use against foreign targets is certainly an interesting correlation.
- Based on my research, there is a stunning lack of original reporting available on these topics which are of potentially grave international importance.
- News outlets — even major “reputable” ones — seem to just be reporting on one another’s reporting. It’s a hall of mirrors all the way down. And it’s not just on this topic: it’s the whole news ecosystem.
- Fake news and so-called ‘meme warfare’ aren’t some accident of our post-modern mainstream media, but the obvious through-line of technologies whose goal is to amorally propagate information regardless of quality or veracity.
- Fact-checking as a counter to misinformation, disinformation, propaganda and fake news is not a fool-proof process. It is made all the more difficult when there are very few, or only obscured sources available to the public. (See #6)
- I’m not crazy about what Wikileaks has done politically, but as a tool for organizing leaked documents for further research by members of the public, it fills its niche. Otherwise you end up chasing sketchily-hosted .zip files which are long since taken down…
- Wikipedia articles are as good as the sources they cite.
- Fact-TRACKING may ultimately prevail over fact-checking. That is, in a world of dwindling original sources, and an endless multitude of rip-offs and copies, perhaps there is an epidemiological approach that could be applied to tracking the origin and distribution of blocks of information (e.g., “facts,” factoids, sound-bites, or memes for that matter). Blockchain for news, anyone?
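A minimal sketch of what fact-tracking could look like in code: fingerprint a normalized block of text and record where and when each fingerprint was first seen, so later copies trace back to the earliest known source. Everything here is illustrative; a real system would need fuzzy matching, not exact hashes.

```python
# Illustrative sketch of fact-tracking: hash a normalized text block and
# remember the earliest source seen for each fingerprint, so copies can be
# traced back to their origin. Not a real system, just the shape of one.
import hashlib

first_seen = {}  # fingerprint -> (source, timestamp)

def fingerprint(text):
    """Hash whitespace- and case-normalized text into a short fingerprint."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def record(text, source, timestamp):
    """Register a sighting; return the earliest known origin of this block."""
    fp = fingerprint(text)
    if fp not in first_seen or timestamp < first_seen[fp][1]:
        first_seen[fp] = (source, timestamp)
    return first_seen[fp]
```

This is the epidemiological framing in miniature: track where a block of information first appeared and how it spread, rather than re-litigating its truth at every stop.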
Part 8: In conclusion
The best conclusion I think we could draw from this investigation is one I’ll borrow from Kester Ratcliff’s article on open source intelligence for beginners:
“The internet will continue to be a confusing information-psychological warzone until the networked-ness of information is made visible, so that people can easily and instantly see where stuff’s coming from, who/what it’s associated with, and what effects their interacting with it may have.”
Strictly speaking, this isn’t a “Russia issue” at all. Any malicious actor could weaponize these vectors. It’s an information issue. And it’s here to stay until we do something about the entire system, not just the symptoms.
Until then, I’ll keep working on my crazy wall.
I have a feeling we’re going to need it…