Recent Announcements

1- (TODAY) The next war will be fought with BITS not bullets

posted May 23, 2019, 6:47 AM by WECB640   [ updated May 23, 2019, 6:53 AM ]

Cyberattacks are the newest frontier of war and can strike harder than a natural disaster. Here's why the US could struggle to cope if it got hit.

Attacks on infrastructure are becoming the new frontline of cyberthreats. Samantha Lee/Business Insider

2-Tech giants offer empty apologies because users can’t quit “Sorry” means nothing

posted Nov 26, 2018, 5:30 AM by WECB640   [ updated May 23, 2019, 6:52 AM ]

A true apology consists of a sincere acknowledgement of wrongdoing, a show of empathetic remorse for the wrong and the harm it caused, and a promise of restitution backed by improved behavior to make things right. Without the follow-through, saying sorry isn’t an apology; it’s a hollow ploy for forgiveness.

That’s the kind of “sorry” we’re getting from tech giants — an attempt to quell bad PR and placate the afflicted, often without the systemic change necessary to prevent repeated problems. Sometimes it’s delivered in a blog post. Sometimes it’s in an executive apology tour of media interviews. But rarely is it in the form of change to the underlying structures of a business that caused the issue.

Intractable Revenue

Unfortunately, tech company business models often conflict with the way we wish they would act. We want more privacy but they thrive on targeting and personalization data. We want control of our attention but they subsist on stealing as much of it as possible with distraction while showing us ads. We want safe, ethically built devices that don’t spy on us but they make their margins by manufacturing them wherever’s cheap with questionable standards of labor and oversight. We want groundbreaking technologies to be responsibly applied, but juicy government contracts and the allure of China’s enormous population compromise their morals. And we want to stick to what we need and what’s best for us, but they monetize our craving for the latest status symbol or content through planned obsolescence and locking us into their platforms.

The result is that even if their leaders earnestly wanted to impart meaningful change to provide restitution for their wrongs, their hands are tied by entrenched business models and the short-term focus of the quarterly earnings cycle. They apologize and go right back to problematic behavior. The Washington Post recently chronicled a dozen times Facebook CEO Mark Zuckerberg has apologized, yet the social network keeps experiencing fiasco after fiasco. Tech giants won’t improve enough on their own.

Addiction To Utility

The threat of us abandoning ship should theoretically hold the captains in line. But tech giants have evolved into fundamental utilities that many have a hard time imagining living without. How would you connect with friends? Find what you needed? Get work done? Spend your time? What hardware or software would you cuddle up with in the moments you feel lonely? We live our lives through tech, have become addicted to its utility, and fear the withdrawal.

If there were principled alternatives to switch to, perhaps we could hold the giants accountable. But the scalability, network effects, and aggregation of supply by distributors have led to near monopolies in these core utilities. The second-place solution is often distant. What’s the next best social network that serves as an identity and login platform that isn’t owned by Facebook? The next best premium mobile and PC maker behind Apple? The next best mobile operating system for the developing world beyond Google’s Android? The next best ecommerce hub that’s not Amazon? The next best search engine? Photo feed? Web hosting service? Global chat app? Spreadsheet?

Facebook is still growing in the US & Canada despite the backlash, proving that tech users aren’t voting with their feet. And if not for a calculation methodology change, it would have added 1 million users in Europe this quarter too.

One of the few tech backlashes that led to real flight was #DeleteUber. Workplace discrimination, shady business protocols, exploitative pricing and more combined to spur the movement to ditch the ridehailing app. But what was different here is that US Uber users did have a principled alternative to switch to without much hassle: Lyft. The result was that “Lyft benefitted tremendously from Uber’s troubles in 2018,” eMarketer’s forecasting director Shelleen Shum told USA Today in May. Uber missed eMarketer’s projections while Lyft exceeded them, narrowing the gap between the car services. And meanwhile, Uber’s CEO stepped down as it tried to overhaul its internal policies.

This is why we need regulation that promotes competition by preventing massive mergers and by giving users the right to interoperable data portability so they can easily switch away from companies that treat them poorly.

But in the absence of viable alternatives to the giants, leaving these mainstays is inconvenient. After all, they’re the ones that made us practically allergic to friction. Even after massive scandals, data breaches, toxic cultures, and unfair practices, we largely stick with them to avoid the uncertainty of life without them. Even Facebook added 1 million monthly users in the US and Canada last quarter despite seemingly every possible source of unrest. Tech users are not voting with their feet. We’ve proven we can harbor ill will towards the giants while begrudgingly buying and using their products. Our leverage to improve their behavior is vastly weakened by our loyalty.

Inadequate Oversight

Regulators have failed to adequately step up either. This year’s congressional hearings about Facebook and social media often devolved into inane and uninformed questioning, like how Facebook earns money if it doesn’t charge. “Senator, we run ads,” Facebook CEO Mark Zuckerberg said with a smirk. Other times, politicians were so intent on scoring partisan points by grandstanding or advancing conspiracy theories about bias that they were unable to make any real progress. A recent survey commissioned by Axios found that “In the past year, there has been a 15-point spike in the number of people who fear the federal government won’t do enough to regulate big tech companies — with 55% now sharing this concern.”

When regulators do step in, their attempts can backfire. GDPR was supposed to tamp down the dominance of Google and Facebook by limiting how they could collect user data and making them more transparent. But the high cost of compliance simply hindered smaller players or drove them out of the market, while the giants had ample cash to spend on jumping through government hoops. Google actually gained ad tech market share and Facebook saw the smallest loss, while smaller ad tech firms lost 20 or 30 percent of their business.

Europe’s GDPR privacy regulations backfired, reinforcing Google and Facebook’s dominance. Chart via Ghostery, Cliqz, and WhoTracksMe.

Even the Honest Ads Act, which was designed to bring political campaign transparency to internet platforms following election interference in 2016, has yet to be passed despite support from Facebook and Twitter. There hasn’t been meaningful discussion of blocking social networks from acquiring their competitors in the future, let alone actually breaking Instagram and WhatsApp off of Facebook. Governments like the U.K.’s, which just forcibly seized documents related to Facebook’s machinations surrounding the Cambridge Analytica debacle, provide some indication of willpower. But clumsy regulation could deepen the moats of the incumbents and prevent disruptors from gaining a foothold. We can’t depend on regulators to sufficiently protect us from tech giants right now.

Our Hope On The Inside

The best bet for change will come from the rank and file of these monolithic companies. With the war for talent raging, rock-star employees able to have huge impact on products, and the cost of compensation to keep them around rising, tech giants are vulnerable to the opinions of their own staff. It’s simply too expensive and disruptive to have to recruit new high-skilled workers to replace those who flee.

Google declined to renew a contract with the government after 4,000 employees petitioned and a few resigned over Project Maven’s artificial intelligence being used to target lethal drone strikes. Change can even flow across company lines. Many tech giants, including Facebook and Airbnb, have removed their forced-arbitration rules for harassment disputes after Google did the same in response to 20,000 of its employees walking out in protest.

Thousands of Google employees protested the company’s handling of sexual harassment and misconduct allegations on Nov. 1.

Facebook is desperately pushing an internal communications campaign to reassure staffers it’s improving in the wake of damning press reports from the New York Times and others. TechCrunch published an internal memo in which Facebook’s outgoing VP of communications Elliot Schrage took the blame for recent issues and encouraged employees to avoid finger-pointing, and in which COO Sheryl Sandberg tried to reassure employees that “I know this has been a distraction at a time when you’re all working hard to close out the year — and I am sorry.” These internal apologies could come with much more contrition and real change than those paraded for the public.

And so, after years of relying on these tech workers to build the products we use every day, we must now rely on them to save us from the giants. It’s a weighty responsibility to move their talents where the impact is positive, or to commit to standing up against the business imperatives of their employers. We as the public and media must in turn celebrate when they do what’s right for society, even when it reduces value for shareholders. If apps abuse us or unduly rob us of our attention, we need to stay off of them.

And we must accept that shaping the future for the collective good may be inconvenient for the individual. There’s an opportunity here not just to complain or wish, but to build a social movement that holds tech giants accountable for delivering the change they’ve promised over and over.

BB - Twitter Shadow Banning

posted Jan 11, 2018, 10:18 AM by WECB640   [ updated Jan 11, 2018, 10:23 AM ]


UNDERCOVER VIDEO: Twitter Engineers To “Ban a Way of Talking” Through “Shadow Banning,” Algorithms to Censor Opposing Political Opinions

Steven Pierre, Twitter engineer explains “shadow banning,” says “it’s going to ban a way of talking”
Former Twitter software engineer Abhinav Vadrevu on shadow banning: “they just think that no one is engaging with their content, when in reality, no one is seeing it”
Former Twitter Content Review Agent Mo Norai explains banning process: “if it was a pro-Trump thing and I’m anti-Trump… I banned his whole account… it’s at your discretion”
When asked if banning process was an unwritten rule, Norai adds “Very. A lot of unwritten rules… It was never written it was more said”
Olinda Hassan, Policy Manager for Twitter Trust and Safety explains, “we’re trying to ‘down rank’… shitty people to not show up,” “we’re working [that] on right now”
“Shadow banning” to be used to stealthily target political views- former Twitter engineer says, “that’s a thing”
Censorship of certain political viewpoints to be automated via “machine learning” according to Twitter software engineer
Pranay Singh, Twitter Direct Messaging Engineer, on machine learning algorithms, “you have like five thousand keywords to describe a redneck…” “the majority of it are for Republicans”

(San Francisco) In the latest undercover Project Veritas video investigation, current and former Twitter employees are on camera explaining steps the social media giant is taking to censor political content that they don’t like.

This video release follows the first undercover Twitter exposé Project Veritas released on January 10th, which showed Twitter Senior Network Security Engineer Clay Haynes saying that Twitter is “more than happy to help the Department of Justice with their little [President Donald Trump] investigation.” Twitter responded to the video with a statement shortly after that release, stating “the individual depicted in this video was speaking in a personal capacity and does not represent or speak for Twitter.” The video released by Project Veritas today features eight employees, and a Project Veritas spokesman said there are more videos featuring additional employees coming.

On January 3rd, 2018, at a San Francisco restaurant, Abhinav Vadrevu, a former Twitter Software Engineer, explained a strategy called “shadow banning” that, to his knowledge, Twitter has employed:

“One strategy is to shadow ban so you have ultimate control. The idea of a shadow ban is that you ban someone but they don’t know they’ve been banned, because they keep posting and no one sees their content. So they just think that no one is engaging with their content, when in reality, no one is seeing it.”
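The mechanics Vadrevu describes can be sketched in a few lines. This is a hypothetical illustration of the shadow-ban concept only, not Twitter's actual code; the account names and posts are invented:

```python
# Hypothetical sketch of a "shadow ban": the banned author still sees
# their own posts, so nothing looks wrong on their end, but the posts
# are silently filtered out of everyone else's feeds.

shadow_banned = {"alice"}  # invented example account, not real data

posts = [
    {"author": "alice", "text": "hello world"},
    {"author": "bob", "text": "good morning"},
]

def visible_posts(viewer, posts, shadow_banned):
    """Return the posts this viewer's feed would show."""
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]

# Alice's own feed shows both posts, so she sees normal behavior...
print([p["author"] for p in visible_posts("alice", posts, shadow_banned)])
# ...while Bob's feed silently omits hers.
print([p["author"] for p in visible_posts("bob", posts, shadow_banned)])
```

This is exactly the asymmetry in the quote: the author keeps posting, but "no one is seeing it."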

Twitter is in the process of automating censorship and banning, says Twitter Software Engineer Steven Pierre on December 8th of 2017:

“Every single conversation is going to be rated by a machine and the machine is going to say whether or not it’s a positive thing or a negative thing. And whether it’s positive or negative doesn’t (inaudible), it’s more like if somebody’s being aggressive or not. Right? Somebody’s just cursing at somebody, whatever, whatever. They may have point, but it will just vanish… It’s not going to ban the mindset, it’s going to ban, like, a way of talking.”

Olinda Hassan, a Policy Manager for Twitter’s Trust and Safety team explains on December 15th, 2017 at a Twitter holiday party that the development of a system of “down ranking” “shitty people” is in the works:

“Yeah. That’s something we’re working on. It’s something we’re working on. We’re trying to get the shitty people to not show up. It’s a product thing we’re working on right now.”

Former Twitter Engineer Conrado Miranda confirms on December 1st, 2017 that tools are already in place to censor pro-Trump or conservative content on the platform. When asked whether or not these capabilities exist, Miranda says, “that’s a thing.”


In a conversation with former Twitter Content Review Agent Mo Norai on May 16th, 2017, we learned that in the past Twitter would manually ban or censor Pro-Trump or conservative content. When asked about the process of banning accounts, Norai said, “On stuff like that it was more discretion on your view point, I guess how you felt about a particular matter…”

When asked to clarify if that process was automated Norai confirmed that it was not:

“Yeah, if they said this is: ‘Pro-Trump’ I don’t want it because it offends me, this, that. And I say I banned this whole thing, and it goes over here and they are like, ‘Oh you know what? I don’t like it too. You know what? Mo’s right, let’s go, let’s carry on, what’s next?'”

Norai also revealed that more left-leaning content would go through their selection process with less political scrutiny, “It would come through checked and then I would be like ‘Oh you know what? This is okay. Let it go.’”

Norai explains that this selection process wasn’t exactly Twitter policy, but rather they were following unwritten rules from the top:

“A lot of unwritten rules, and being that we’re in San Francisco, we’re in California, very liberal, a very blue state. You had to be… I mean as a company you can’t really say it because it would make you look bad, but behind closed doors are lots of rules.”

“There was, I would say… Twitter was probably about 90% Anti-Trump, maybe 99% Anti-Trump.”

At a San Francisco bar on January 5th, Pranay Singh details how the shadow-banning algorithms targeting right-leaning users are engineered:

“Yeah you look for Trump, or America, and you have like five thousand keywords to describe a redneck. Then you look and parse all the messages, all the pictures, and then you look for stuff that matches that stuff.”

When asked if the majority of the algorithms are targeted against conservative or liberal users of Twitter, Singh said, “I would say majority of it are for Republicans.”
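The keyword-list approach Singh describes amounts to matching each message against a large term list and flagging the ones that match. A minimal, hypothetical sketch of that idea (the terms here are invented placeholders, not any real list):

```python
# Hypothetical sketch of keyword-based flagging: a term list is matched
# against each message, and matching messages are routed to down-ranking.
# The terms below are invented placeholders for illustration only.

FLAGGED_TERMS = {"keyword1", "keyword2", "keyword3"}

def is_flagged(message, terms=FLAGGED_TERMS):
    """Return True if any word in the message appears in the term list."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not words.isdisjoint(terms)

print(is_flagged("this mentions keyword1 explicitly"))  # True
print(is_flagged("an unrelated message"))               # False
```

As the quotes note, a system like this could just as easily be pointed at either side of the political spectrum; the bias lies entirely in who writes the term list.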

Project Veritas founder James O’Keefe believes the power over speech that Silicon Valley tech giants hold is unprecedented and dangerous:

“What kind of world do we live in where computer engineers are the gatekeepers of the ‘way people talk?’ This investigation brings forth information of profound public importance that educates people about how free they really are to express their views online.”

Project Veritas plans to release more undercover video from within Twitter in the coming days.

AA- Serious flaw in WPA2

posted Oct 16, 2017, 11:12 AM by WECB640   [ updated Oct 16, 2017, 11:13 AM ]

Serious flaw in WPA2 protocol lets attackers intercept passwords and much more

KRACK attack is especially bad news for Android and Linux users.

                                          - OCT 16, 2017 4:37 AM UTC

Aurich Lawson / Thinkstock

Researchers have disclosed a serious weakness in the WPA2 protocol that allows attackers within range of a vulnerable device or access point to intercept passwords, e-mails, and other data presumed to be encrypted, and in some cases, to inject ransomware or other malicious content into a website a client is visiting.

The proof-of-concept exploit is called KRACK, short for Key Reinstallation Attacks. The research has been a closely guarded secret for weeks ahead of a coordinated disclosure that was scheduled for 8am Monday, East Coast time. A website disclosing the vulnerability said it affects the core WPA2 protocol itself and is effective against devices running Android, Linux, and OpenBSD, and to a lesser extent macOS and Windows, as well as MediaTek, Linksys, and other types of devices. The site warned that attackers can exploit the flaw to decrypt a wealth of sensitive data that's normally encrypted by the nearly ubiquitous Wi-Fi encryption protocol.

"This can be abused to steal sensitive information such as credit card numbers, passwords, chat messages, emails, photos, and so on," researcher Mathy Vanhoef, of the Katholieke Universiteit Leuven in Belgium, wrote. "The attack works against all modern protected Wi-Fi networks. Depending on the network configuration, it is also possible to inject and manipulate data. For example, an attacker might be able to inject ransomware or other malware into websites."

Vanhoef provided the following video showing the attack against a device running Google's Android mobile operating system:

KRACK Attacks: Bypassing WPA2 against Android and Linux

It shows the attacker decrypting all data the phone sends to the access point. The attack works by forcing the phone into reinstalling an all-zero encryption key, rather than the real key. This ability, which also works on Linux, makes the attack particularly effective on these platforms.

The site went on to warn that visiting only HTTPS-protected Web pages wasn't automatically a remedy against the attack, since many improperly configured sites can be forced into dropping encrypted HTTPS traffic and instead transmitting unencrypted HTTP data. In the video demonstration, the attacker uses a script known as SSLstrip to force the site to downgrade a connection to HTTP. The attacker is then able to steal an account password when the Android device logs in.
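The downgrade works because a man-in-the-middle can rewrite the https:// links in pages it relays, so the victim's browser keeps speaking plaintext HTTP. A toy illustration of only the link-rewriting step (the real SSLstrip tool also proxies the upstream TLS connection on the victim's behalf):

```python
import re

def strip_https_links(html):
    """Rewrite https:// URLs to http:// in relayed HTML.
    This is the core rewriting idea behind downgrade tools like SSLstrip;
    a real attack additionally proxies the server's TLS connection."""
    return re.sub(r"https://", "http://", html)

page = '<a href="https://example.com/login">Log in</a>'
print(strip_https_links(page))
# <a href="http://example.com/login">Log in</a>
```

Sites that send the HSTS header instruct browsers to refuse such downgrades, which is one reason improperly configured sites are the vulnerable ones.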

"Although websites or apps may use HTTPS as an additional layer of protection, we warn that this extra protection can (still) be bypassed in a worrying number of situations," the researchers explained. "For example, HTTPS was previously bypassed in non-browser software, in Apple's iOS and OS X, in Android apps, in Android apps again, in banking apps, and even in VPN apps."

The researcher went on to say that the weakness allows attackers to target both vulnerable access points as well as vulnerable computers, smartphones and other types of connecting clients, albeit with differing levels of difficulty and effectiveness. Neither Windows nor iOS are believed to be vulnerable to the most severe attacks. Linux and Android appear to be more susceptible, because attackers can force network decryption on clients in seconds with little effort.

Vanhoef said clients can be patched to prevent attacks even when connected to vulnerable access points. Linux patches have been developed, but it's not immediately clear when they will become available for various distributions and for Android users. Patches are also available for some but not all Wi-Fi access points.

In response to a FAQ item asking if the vulnerability signaled the need for a WPA3 standard, Vanhoef wrote:

No, luckily [WPA2] implementations can be patched in a backwards-compatible manner. This means a patched client can still communicate with an unpatched access point, and vice versa. In other words, a patched client or access points sends exactly the same handshake messages as before, and at exactly the same moments in time. However, the security updates will assure a key is only installed once, preventing our attacks. So again, update all your devices once security updates are available.

KRACK works by targeting the four-way handshake that's executed when a client joins a WPA2-protected Wi-Fi network. Among other things, the handshake helps to confirm that both the client and access points have the correct credentials. KRACK tricks the vulnerable client into reinstalling an already-in-use key. The reinstallation forces the client to reset packet numbers containing a cryptographic nonce and other parameters to their initial values. KRACK forces the nonce reuse in a way that allows the encryption to be bypassed. Ars Technica IT editor Sean Gallagher has much more about KRACK here.
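Why nonce reuse is so catastrophic can be shown with a toy stream-cipher example: if two messages are encrypted under the same keystream, XORing the two ciphertexts cancels the keystream entirely. (Illustration only; WPA2 actually uses AES-CCMP, but the nonce-reuse principle is the same.)

```python
import os

def xor(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Stands in for the keystream derived from the key and a nonce.
keystream = os.urandom(28)

m1 = b"transfer $100 to account 555"
m2 = b"my password is hunter2!!!!!!"

# Reinstalling the key resets the nonce, so both messages end up
# encrypted under the SAME keystream -- the condition KRACK forces.
c1 = xor(m1, keystream)
c2 = xor(m2, keystream)

# The attacker never learns the keystream, yet XORing the ciphertexts
# cancels it out, leaving m1 XOR m2:
assert xor(c1, c2) == xor(m1, m2)

# With one known (or guessable) plaintext, the other is fully recovered:
recovered = xor(xor(c1, c2), m1)
assert recovered == m2
```

This is why the fix quoted below simply ensures "a key is only installed once": a fresh nonce per packet keeps every keystream unique.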

Monday's disclosure follows an advisory that US-CERT recently distributed to about 100 organizations, which described the research this way:

US-CERT has become aware of several key management vulnerabilities in the 4-way handshake of the Wi-Fi Protected Access II (WPA2) security protocol. The impact of exploiting these vulnerabilities includes decryption, packet replay, TCP connection hijacking, HTTP content injection, and others. Note that as protocol-level issues, most or all correct implementations of the standard will be affected. The CERT/CC and the reporting researcher KU Leuven, will be publicly disclosing these vulnerabilities on 16 October 2017.

According to a researcher who has been briefed on the vulnerability, it works by exploiting a four-way handshake that's used to establish a key for encrypting traffic. During the third step, the key can be resent multiple times. When it's resent in certain ways, a cryptographic nonce can be reused in a way that completely undermines the encryption.

Although kept secret for weeks, KRACK came to light on Sunday when people discovered a Github page belonging to one of the researchers and a separate website disclosing the vulnerability used the following tags:

  • WPA2
  • key reinstallation
  • security protocols
  • network security, attacks
  • nonce reuse
  • handshake
  • packet number
  • initialization vector


Researchers briefed on the vulnerabilities said they are indexed as: CVE-2017-13077, CVE-2017-13078, CVE-2017-13079, CVE-2017-13080, CVE-2017-13081, CVE-2017-13082, CVE-2017-13084, CVE-2017-13086, CVE-2017-13087, CVE-2017-13088. One researcher told Ars that Aruba and Ubiquiti, which sell wireless access points to large corporations and government organizations, already have updates available to patch or mitigate the vulnerabilities.

The vulnerabilities are scheduled to be formally presented in a talk titled Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2 on November 1 at the ACM Conference on Computer and Communications Security in Dallas. It's believed that Monday's disclosure will be made through the site. The researchers presenting the talk are Mathy Vanhoef and Frank Piessens of KU Leuven, who presented related research in August at the Black Hat Security Conference in Las Vegas.

The vulnerability is likely to pose the biggest threat to large corporate and government Wi-Fi networks, particularly if they accept connections from Linux and Android devices. And once again, attackers must be within Wi-Fi range of a vulnerable access point or client to pull off the attacks. Home Wi-Fi users are vulnerable, too, again especially if they connect with Linux or Android devices, but there are likely easier ways they can be attacked. Researcher and Errata Security CEO Rob Graham has useful information and analysis here.

If possible, people with vulnerable access points and clients should avoid using Wi-Fi until patches are available and instead use wired connections. When Wi-Fi is the only connection option, people should use HTTPS, STARTTLS, Secure Shell, and other reliable protocols to encrypt Web and e-mail traffic as it passes between computers and access points. As a fall-back, users should consider using a virtual private network as an added safety measure, but they are reminded to choose their VPN providers carefully, since many services can't be trusted to make users more secure.

Z - CIA Hacking Tools Revealed

posted Mar 7, 2017, 2:27 PM by WECB640

Vault 7: CIA Hacking Tools Revealed



Press Release


Today, Tuesday 7 March 2017, WikiLeaks begins its new series of leaks on the U.S. Central Intelligence Agency. Code-named "Vault 7" by WikiLeaks, it is the largest ever publication of confidential documents on the agency.

The first full part of the series, "Year Zero", comprises 8,761 documents and files from an isolated, high-security network situated inside the CIA's Center for Cyber Intelligence in Langley, Virginia. It follows an introductory disclosure last month of the CIA targeting French political parties and candidates in the lead-up to the 2012 presidential election.

Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized "zero day" exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.

"Year Zero" introduces the scope and direction of the CIA's global covert hacking program, its malware arsenal and dozens of "zero day" weaponized exploits against a wide range of U.S. and European company products, including Apple's iPhone, Google's Android and Microsoft's Windows and even Samsung TVs, which are turned into covert microphones.

Since 2001 the CIA has gained political and budgetary preeminence over the U.S. National Security Agency (NSA). The CIA found itself building not just its now infamous drone fleet, but a very different type of covert, globe-spanning force — its own substantial fleet of hackers. The agency's hacking division freed it from having to disclose its often controversial operations to the NSA (its primary bureaucratic rival) in order to draw on the NSA's hacking capacities.

By the end of 2016, the CIA's hacking division, which formally falls under the agency's Center for Cyber Intelligence (CCI), had over 5000 registered users and had produced more than a thousand hacking systems, trojans, viruses, and other "weaponized" malware. Such is the scale of the CIA's undertaking that by 2016, its hackers had utilized more code than that used to run Facebook. The CIA had created, in effect, its "own NSA" with even less accountability and without publicly answering the question as to whether such a massive budgetary spend on duplicating the capacities of a rival agency could be justified.

In a statement to WikiLeaks the source details policy questions that they say urgently need to be debated in public, including whether the CIA's hacking capabilities exceed its mandated powers and the problem of public oversight of the agency. The source wishes to initiate a public debate about the security, creation, use, proliferation and democratic control of cyberweapons.

Once a single cyber 'weapon' is 'loose' it can spread around the world in seconds, to be used by rival states, cyber mafia and teenage hackers alike.

Julian Assange, WikiLeaks editor stated that "There is an extreme proliferation risk in the development of cyber 'weapons'. Comparisons can be drawn between the uncontrolled proliferation of such 'weapons', which results from the inability to contain them combined with their high market value, and the global arms trade. But the significance of "Year Zero" goes well beyond the choice between cyberwar and cyberpeace. The disclosure is also exceptional from a political, legal and forensic perspective."

Wikileaks has carefully reviewed the "Year Zero" disclosure and published substantive CIA documentation while avoiding the distribution of 'armed' cyberweapons until a consensus emerges on the technical and political nature of the CIA's program and how such 'weapons' should be analyzed, disarmed and published.

Wikileaks has also decided to redact and anonymise some identifying information in "Year Zero" for in-depth analysis. These redactions include tens of thousands of CIA targets and attack machines throughout Latin America, Europe and the United States. While we are aware of the imperfect results of any approach chosen, we remain committed to our publishing model and note that the quantity of published pages in "Vault 7" part one (“Year Zero”) already eclipses the total number of pages published over the first three years of the Edward Snowden NSA leaks.



CIA malware targets iPhone, Android, smart TVs

CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within CCI (Center for Cyber Intelligence), a department belonging to the CIA's DDI (Directorate for Digital Innovation). The DDI is one of the five major directorates of the CIA (see this organizational chart of the CIA for more details).

The EDG is responsible for the development, testing and operational support of all backdoors, exploits, malicious payloads, trojans, viruses and any other kind of malware used by the CIA in its covert operations world-wide.

The increasing sophistication of surveillance techniques has drawn comparisons with George Orwell's 1984, but "Weeping Angel", developed by the CIA's Embedded Devices Branch (EDB), which infests smart TVs, transforming them into covert microphones, is surely its most emblematic realization.

The attack against Samsung smart TVs was developed in cooperation with the United Kingdom's MI5/BTSS. After infestation, Weeping Angel places the target TV in a 'Fake-Off' mode, so that the owner falsely believes the TV is off when it is on. In 'Fake-Off' mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.

As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations.

The CIA's Mobile Devices Branch (MDB) developed numerous attacks to remotely hack and control popular smart phones. Infected phones can be instructed to send the CIA the user's geolocation, audio and text communications as well as covertly activate the phone's camera and microphone.

Despite iPhone's minority share (14.5%) of the global smart phone market in 2016, a specialized unit in the CIA's Mobile Development Branch produces malware to infest, control and exfiltrate data from iPhones and other Apple products running iOS, such as iPads. CIA's arsenal includes numerous local and remote "zero days" developed by CIA or obtained from GCHQ, NSA, FBI or purchased from cyber arms contractors such as Baitshop. The disproportionate focus on iOS may be explained by the popularity of the iPhone among social, political, diplomatic and business elites.

A similar unit targets Google's Android, which is used to run the majority of the world's smart phones (~85%), including Samsung, HTC and Sony devices. 1.15 billion Android-powered phones were sold last year. "Year Zero" shows that as of 2016 the CIA had 24 "weaponized" Android "zero days", which it developed itself or obtained from GCHQ, the NSA and cyber arms contractors.

These techniques permit the CIA to bypass the encryption of WhatsApp, Signal, Telegram, Weibo, Confide and Cloackman by hacking the "smart" phones that they run on and collecting audio and message traffic before encryption is applied.


CIA malware targets Windows, OS X, Linux, routers

The CIA also runs a very substantial effort to infect and control Microsoft Windows users with its malware. This includes multiple local and remote weaponized "zero days", air-gap-jumping viruses such as "Hammer Drill" (which infects software distributed on CDs/DVDs), infectors for removable media such as USB drives, systems to hide data in images or in covert disk areas ("Brutal Kangaroo"), and systems to keep its malware infestations going.

Many of these infection efforts are pulled together by the CIA's Automated Implant Branch (AIB), which has developed several attack systems for automated infestation and control of CIA malware, such as "Assassin" and "Medusa".

Attacks against Internet infrastructure and webservers are developed by the CIA's Network Devices Branch (NDB).

The CIA has developed automated multi-platform malware attack and control systems covering Windows, Mac OS X, Solaris, Linux and more, such as EDB's "HIVE" and the related "Cutthroat" and "Swindle" tools, which are described in the examples section below.


CIA 'hoarded' vulnerabilities ("zero days")

In the wake of Edward Snowden's leaks about the NSA, the U.S. technology industry secured a commitment from the Obama administration that the executive would disclose on an ongoing basis — rather than hoard — serious vulnerabilities, exploits, bugs or "zero days" to Apple, Google, Microsoft, and other US-based manufacturers.

Serious vulnerabilities not disclosed to the manufacturers place huge swathes of the population and critical infrastructure at risk from foreign intelligence services or cyber criminals who independently discover, or hear rumors of, the vulnerability. If the CIA can discover such vulnerabilities, so can others.

The U.S. government's commitment to the Vulnerabilities Equities Process came after significant lobbying by US technology companies, who risk losing their share of the global market over real and perceived hidden vulnerabilities. The government stated that it would disclose all pervasive vulnerabilities discovered after 2010 on an ongoing basis.

"Year Zero" documents show that the CIA breached the Obama administration's commitments. Many of the vulnerabilities used in the CIA's cyber arsenal are pervasive and some may already have been found by rival intelligence agencies or cyber criminals.

As an example, specific CIA malware revealed in "Year Zero" is able to penetrate, infest and control both the Android phone and iPhone software that runs or has run presidential Twitter accounts. The CIA attacks this software using undisclosed security vulnerabilities ("zero days") in its possession, but if the CIA can hack these phones, then so can anyone else who has obtained or discovered the vulnerability. As long as the CIA keeps these vulnerabilities concealed from Apple and Google (who make the phones), they will not be fixed, and the phones will remain hackable.

The same vulnerabilities exist for the population at large, including the U.S. Cabinet, Congress, top CEOs, system administrators, security officers and engineers. By hiding these security flaws from manufacturers like Apple and Google, the CIA ensures that it can hack everyone, at the expense of leaving everyone hackable.


'Cyberwar' programs are a serious proliferation risk

Cyber 'weapons' cannot be kept under effective control.

While nuclear proliferation has been restrained by the enormous costs and visible infrastructure involved in assembling enough fissile material to produce a critical nuclear mass, cyber 'weapons', once developed, are very hard to retain.

Cyber 'weapons' are in fact just computer programs, which can be pirated like any other. Since they consist entirely of information, they can be copied quickly at no marginal cost.

Securing such 'weapons' is particularly difficult since the same people who develop and use them have the skills to exfiltrate copies without leaving traces — sometimes by using the very same 'weapons' against the organizations that contain them. There are substantial price incentives for government hackers and consultants to obtain copies since there is a global "vulnerability market" that will pay hundreds of thousands to millions of dollars for copies of such 'weapons'. Similarly, contractors and companies who obtain such 'weapons' sometimes use them for their own purposes, obtaining advantage over their competitors in selling 'hacking' services.

Over the last three years the United States intelligence sector, which consists of government agencies such as the CIA and NSA and their contractors, such as Booz Allen Hamilton, has been subject to an unprecedented series of data exfiltrations by its own workers.

A number of intelligence community members not yet publicly named have been arrested or subject to federal criminal investigations in separate incidents.

Most visibly, on February 8, 2017 a U.S. federal grand jury indicted Harold T. Martin III on 20 counts of mishandling classified information. The Department of Justice alleged that it had seized some 50,000 gigabytes of information from Martin that he had obtained from classified programs at the NSA and CIA, including the source code for numerous hacking tools.

Once a single cyber 'weapon' is 'loose' it can spread around the world in seconds, to be used by peer states, cyber mafia and teenage hackers alike.


U.S. Consulate in Frankfurt is a covert CIA hacker base

In addition to its operations in Langley, Virginia, the CIA also uses the U.S. consulate in Frankfurt as a covert base for its hackers covering Europe, the Middle East and Africa.

CIA hackers operating out of the Frankfurt consulate ("Center for Cyber Intelligence Europe" or CCIE) are given diplomatic ("black") passports and State Department cover. The instructions for incoming CIA hackers make Germany's counter-intelligence efforts appear inconsequential: "Breeze through German Customs because you have your cover-for-action story down pat, and all they did was stamp your passport."

Your Cover Story (for this trip)
Q: Why are you here?
A: Supporting technical consultations at the Consulate.

Two earlier WikiLeaks publications give further detail on CIA approaches to customs and secondary screening procedures.

Once in Frankfurt, CIA hackers can travel without further border checks to the 25 European countries that are part of the Schengen open-border area, including France, Italy and Switzerland.

A number of the CIA's electronic attack methods are designed for physical proximity. These methods can penetrate high-security networks that are disconnected from the internet, such as police record databases. In these cases, a CIA officer, agent or allied intelligence officer acting under instructions physically infiltrates the targeted workplace. The attacker is provided with a USB drive containing malware developed for the CIA for this purpose, which is inserted into the targeted computer. The attacker then infects the machine and exfiltrates data to removable media. For example, the CIA attack system Fine Dining provides 24 decoy applications for CIA spies to use. To witnesses, the spy appears to be running a program showing videos (e.g., VLC), presenting slides (Prezi), playing a computer game (Breakout2, 2048) or even running a fake virus scanner (Kaspersky, McAfee, Sophos). But while the decoy application is on the screen, the underlying system is automatically infected and ransacked.


How the CIA dramatically increased proliferation risks

In what is surely one of the most astounding intelligence own goals in living memory, the CIA structured its classification regime such that, for the most commercially valuable part of "Vault 7" — the CIA's weaponized malware (implants plus zero days), Listening Posts (LP), and Command and Control (C2) systems — the agency has little legal recourse.

The CIA made these systems unclassified.

Why the CIA chose to make its cyber arsenal unclassified reveals how concepts developed for military use do not easily cross over to the 'battlefield' of cyber 'war'.

To attack its targets, the CIA usually requires that its implants communicate with their control programs over the internet. If CIA implants, Command & Control and Listening Post software were classified, then CIA officers could be prosecuted or dismissed for violating rules that prohibit placing classified information onto the Internet. Consequently the CIA has secretly made most of its cyber spying/war code unclassified. The U.S. government is not able to assert copyright either, due to restrictions in the U.S. Constitution. This means that cyber 'arms' manufacturers and computer hackers can freely "pirate" these 'weapons' if they obtain them. The CIA has primarily had to rely on obfuscation to protect its malware secrets.

Conventional weapons such as missiles may be fired at the enemy (i.e., into an unsecured area). Proximity to or impact with the target detonates the ordnance, including its classified parts. Hence military personnel do not violate classification rules by firing ordnance with classified parts: the ordnance is expected to explode, and if it does not, that is not the operator's intent.

Over the last decade U.S. hacking operations have been increasingly dressed up in military jargon to tap into Department of Defense funding streams. For instance, attempted "malware injections" (commercial jargon) or "implant drops" (NSA jargon) are being called "fires", as if a weapon were being fired. However, the analogy is questionable.

Unlike bullets, bombs or missiles, most CIA malware is designed to live for days or even years after it has reached its 'target'. CIA malware does not "explode on impact" but rather permanently infests its target. To infect a target's device, copies of the malware must be placed on the target's devices, giving the target physical possession of the malware. To exfiltrate data back to the CIA or to await further instructions, the malware must communicate with CIA Command & Control (C2) systems placed on internet-connected servers. But such servers are typically not approved to hold classified information, so CIA command and control systems are also made unclassified.

A successful 'attack' on a target's computer system is more like a series of complex stock maneuvers in a hostile take-over bid or the careful planting of rumors in order to gain control over an organization's leadership rather than the firing of a weapons system. If there is a military analogy to be made, the infestation of a target is perhaps akin to the execution of a whole series of military maneuvers against the target's territory including observation, infiltration, occupation and exploitation.


Evading forensics and anti-virus

A series of standards lays out CIA malware infestation patterns that are likely to assist forensic crime scene investigators, as well as Apple, Microsoft, Google, Samsung, Nokia, Blackberry, Siemens and anti-virus companies, in attributing and defending against attacks.

"Tradecraft DO's and DON'Ts" contains CIA rules on how its malware should be written to avoid fingerprints implicating the "CIA, US government, or its witting partner companies" in "forensic review". Similar secret standards cover the use of encryption to hide CIA hacker and malware communication(pdf), describing targets & exfiltrated data (pdf) as well as executing payloads (pdf) and persisting (pdf) in the target's machines over time.

CIA hackers developed successful attacks against most well-known anti-virus programs. These are documented in AV defeats, Personal Security Products, Detecting and defeating PSPs, and PSP/Debugger/RE Avoidance. For example, Comodo was defeated by CIA malware placing itself in the Windows "Recycle Bin", while Comodo 6.x has a "Gaping Hole of DOOM".

CIA hackers discussed what the NSA's "Equation Group" hackers did wrong and how the CIA's malware makers could avoid similar exposure.




Examples

The CIA's Engineering Development Group (EDG) management system contains around 500 different projects (only some of which are documented by "Year Zero"), each with its own sub-projects, malware and hacker tools.

The majority of these projects relate to tools that are used for penetration, infestation ("implanting"), control, and exfiltration.

Another branch of development focuses on the development and operation of Listening Posts (LP) and Command and Control (C2) systems used to communicate with and control CIA implants; special projects are used to target specific hardware from routers to smart TVs.

Some example projects are described below, but see the table of contents for the full list of projects described by WikiLeaks' "Year Zero".



UMBRAGE

The CIA's hand-crafted hacking techniques pose a problem for the agency. Each technique it has created forms a "fingerprint" that forensic investigators can use to attribute multiple different attacks to the same entity.

This is analogous to finding the same distinctive knife wound on multiple separate murder victims. The unique wounding style creates suspicion that a single murderer is responsible. As soon as one murder in the set is solved, the other murders also find likely attribution.

The CIA's Remote Devices Branch's UMBRAGE group collects and maintains a substantial library of attack techniques 'stolen' from malware produced in other states including the Russian Federation.

With UMBRAGE and related projects the CIA can not only increase its total number of attack types but also misdirect attribution by leaving behind the "fingerprints" of the groups that the attack techniques were stolen from.

UMBRAGE components cover keyloggers, password collection, webcam capture, data destruction, persistence, privilege escalation, stealth, anti-virus (PSP) avoidance and survey techniques.


Fine Dining

Fine Dining comes with a standardized questionnaire, i.e., a menu, that CIA case officers fill out. The questionnaire is used by the agency's OSB (Operational Support Branch) to transform the requests of case officers into technical requirements for hacking attacks (typically "exfiltrating" information from computer systems) for specific operations. The questionnaire allows the OSB to identify how to adapt existing tools for the operation, and to communicate this to CIA malware configuration staff. The OSB functions as the interface between CIA operational staff and the relevant technical support staff.

Among the possible targets of the collection are 'Asset', 'Liason Asset', 'System Administrator', 'Foreign Information Operations', 'Foreign Intelligence Agencies' and 'Foreign Government Entities'. Notably absent is any reference to extremists or transnational criminals. The 'Case Officer' is also asked to specify the environment of the target, such as the type of computer, operating system used, Internet connectivity and installed anti-virus utilities (PSPs), as well as a list of file types to be exfiltrated, such as Office documents, audio, video, images or custom file types. The 'menu' also asks whether recurring access to the target is possible and how long unobserved access to the computer can be maintained. This information is used by the CIA's 'JQJIMPROVISE' software (see below) to configure a set of CIA malware suited to the specific needs of an operation.



Improvise (JQJIMPROVISE)

'Improvise' is a toolset for configuration, post-processing, payload setup and execution vector selection for survey/exfiltration tools supporting all major operating systems: Windows (Bartender), MacOS (JukeBox) and Linux (DanceFloor). Its configuration utilities, such as Margarita, allow the NOC (Network Operation Center) to customize tools based on requirements from 'Fine Dining' questionnaires.
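The flow from questionnaire to tool selection can be sketched as follows. This is a hypothetical illustration: the Bartender, JukeBox and DanceFloor names come from the text above, but every field name and the configuration format are invented, since the real 'JQJIMPROVISE' interface is not public.

```python
# Hypothetical sketch of turning a "Fine Dining"-style questionnaire
# into a tool configuration. Only the OS-to-tool names (Bartender,
# JukeBox, DanceFloor) come from the published material; all field
# names and the output format are invented for illustration.

OS_TOOLS = {
    "windows": "Bartender",
    "macos": "JukeBox",
    "linux": "DanceFloor",
}

def build_config(questionnaire):
    """Translate case-officer answers into a technical tasking."""
    os_name = questionnaire["operating_system"].lower()
    if os_name not in OS_TOOLS:
        raise ValueError("unsupported platform: " + os_name)
    return {
        "tool": OS_TOOLS[os_name],
        "exfil_types": sorted(questionnaire.get("file_types", [])),
        "recurring_access": questionnaire.get("recurring_access", False),
    }

config = build_config({
    "operating_system": "Windows",
    "file_types": ["xlsx", "docx"],
    "recurring_access": True,
})
print(config["tool"])         # Bartender
print(config["exfil_types"])  # ['docx', 'xlsx']
```

The point of such a layer is the one made in the text: case officers describe the operation in operational terms, and a configuration tool, not the officer, picks the matching implant.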



HIVE

HIVE is a multi-platform CIA malware suite and its associated control software. The project provides customizable implants for Windows, Solaris, MikroTik (used in internet routers) and Linux platforms, and a Listening Post (LP)/Command and Control (C2) infrastructure to communicate with these implants.

The implants are configured to communicate via HTTPS with the webserver of a cover domain; each operation utilizing these implants has a separate cover domain and the infrastructure can handle any number of cover domains.

Each cover domain resolves to an IP address that is located at a commercial VPS (Virtual Private Server) provider. The public-facing server forwards all incoming traffic via a VPN to a 'Blot' server that handles actual connection requests from clients. It is set up for optional SSL client authentication: if a client sends a valid client certificate (only implants can do that), the connection is forwarded to the 'Honeycomb' toolserver that communicates with the implant; if a valid certificate is missing (which is the case if someone opens the cover domain website by accident), the traffic is forwarded to a cover server that delivers an unsuspicious-looking website.
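The routing decision described above can be sketched as follows. This is a simplified, hypothetical illustration: 'Blot' and 'Honeycomb' are the names from the documents, but the function, the certificate representation and the fingerprint check are invented.

```python
from typing import Optional

# Simplified sketch of the dispatch decision attributed to the 'Blot'
# server: connections presenting a valid client certificate (only
# implants hold one) are forwarded to the 'Honeycomb' toolserver;
# all other traffic is sent to an innocuous cover website. The
# certificate model here is invented for illustration.

HONEYCOMB = "honeycomb-toolserver"
COVER = "cover-webserver"

def dispatch(client_cert: Optional[dict], valid_fingerprints: set) -> str:
    """Route one incoming connection based on optional SSL client auth."""
    if client_cert and client_cert.get("fingerprint") in valid_fingerprints:
        return HONEYCOMB  # implant traffic: hand off to the toolserver
    return COVER  # casual visitors just see the cover website

known = {"ab:cd:ef:12"}
print(dispatch({"fingerprint": "ab:cd:ef:12"}, known))  # honeycomb-toolserver
print(dispatch(None, known))                            # cover-webserver
```

The design choice worth noting is that the same public endpoint serves both roles, so the existence of the C2 channel is not visible to anyone who merely browses the cover domain.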

The Honeycomb toolserver receives exfiltrated information from the implant; an operator can also task the implant to execute jobs on the target computer, so the toolserver acts as a C2 (command and control) server for the implant.

Similar functionality (though limited to Windows) is provided by the RickBobby project.

See the classified user and developer guides for HIVE.


Frequently Asked Questions


Why now?

WikiLeaks published as soon as its verification and analysis were ready.

In February the Trump administration issued an Executive Order calling for a "Cyberwar" review to be prepared within 30 days.

While the review increases the timeliness and relevance of the publication, it did not play a role in setting the publication date.



Redactions

Names, email addresses and external IP addresses have been redacted in the released pages (70,875 redactions in total) until further analysis is complete.

  1. Over-redaction: Some items may have been redacted that are not employees, contractors, targets or otherwise related to the agency, but are, for example, authors of documentation for otherwise public projects that are used by the agency.
  2. Identity vs. person: the redacted names are replaced by user IDs (numbers) to allow readers to assign multiple pages to a single author. Given the redaction process used, a single person may be represented by more than one assigned identifier, but no identifier refers to more than one real person.
  3. Archive attachments (zip, tar.gz, ...) are replaced with a PDF listing all the file names in the archive. As the archive content is assessed it may be made available; until then the archive is redacted.
  4. Attachments with other binary content are replaced by a hex dump of the content to prevent accidental invocation of binaries that may have been infected with weaponized CIA malware. As the content is assessed it may be made available; until then the content is redacted.
  5. References to tens of thousands of routable IP addresses (including more than 22 thousand within the United States) that correspond to possible targets, CIA covert listening post servers, and intermediary and test systems are redacted for further exclusive investigation.
  6. Binary files of non-public origin are only available as dumps to prevent accidental invocation of CIA malware infected binaries.
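The identifier scheme in point 2 amounts to a many-to-one mapping: several user IDs may stand for one person, but no ID stands for two. A check of that invariant can be sketched as follows (the pairs below are invented; the real assignments are, by design, not public):

```python
# Sketch of the redaction-identity invariant from point 2: the map
# from assigned user IDs to real persons must be a function (each ID
# names exactly one person), while a person may appear under several
# IDs. All data below is invented for illustration.

def is_valid_assignment(pages):
    """pages: (user_id, person) pairs observed across documents."""
    id_to_person = {}
    for user_id, person in pages:
        if user_id in id_to_person and id_to_person[user_id] != person:
            return False  # one ID pointing at two people: forbidden
        id_to_person[user_id] = person
    return True  # many IDs per person is fine

ok = [(71001, "alice"), (71002, "alice"), (71003, "bob")]
bad = [(71001, "alice"), (71001, "bob")]
print(is_valid_assignment(ok))   # True
print(is_valid_assignment(bad))  # False
```

This is why readers can safely group pages by identifier, but cannot conclude that two different identifiers belong to two different authors.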


Organizational Chart

The organizational chart corresponds to the material published by WikiLeaks so far.

Since the organizational structure of the CIA below the level of Directorates is not public, the placement of the EDG and its branches within the org chart of the agency is reconstructed from information contained in the documents released so far. It is intended to be used as a rough outline of the internal organization; please be aware that the reconstructed org chart is incomplete and that internal reorganizations occur frequently.


Wiki pages

"Year Zero" contains 7818 web pages with 943 attachments from the internal development groupware. The software used for this purpose is called Confluence, a proprietary software from Atlassian. Webpages in this system (like in Wikipedia) have a version history that can provide interesting insights on how a document evolved over time; the 7818 documents include these page histories for 1136 latest versions.

The order of named pages within each level is determined by date (oldest first). Page content is not present if it was originally dynamically created by the Confluence software (as indicated on the re-constructed page).


What time period is covered?

The years 2013 to 2016.

WikiLeaks has obtained the CIA's creation/last modification date for each page but these do not yet appear for technical reasons. Usually the date can be discerned or approximated from the content and the page order. If it is critical to know the exact time/date contact WikiLeaks.


What is "Vault 7"

"Vault 7" is a substantial collection of material about CIA activities obtained by WikiLeaks.


When was each part of "Vault 7" obtained?

Part one was obtained recently and covers through 2016. Details on the other parts will be available at the time of publication.


Is each part of "Vault 7" from a different source?

Details on the other parts will be available at the time of publication.


What is the total size of "Vault 7"?

The series is the largest intelligence publication in history.


How did WikiLeaks obtain each part of "Vault 7"?

Sources trust WikiLeaks to not reveal information that might help identify them.


Isn't WikiLeaks worried that the CIA will act against its staff to stop the series?

No. That would certainly be counter-productive.


Has WikiLeaks already 'mined' all the best stories?

No. WikiLeaks has intentionally not written up hundreds of impactful stories to encourage others to find them and so create expertise in the area for subsequent parts in the series. They're there. Look. Those who demonstrate journalistic excellence may be considered for early access to future parts.


Won't other journalists find all the best stories before me?

Unlikely. There are very considerably more stories than there are journalists or academics who are in a position to write them.

Y-Friend request from yourself? Watch out for Facebook fakes

posted Feb 20, 2017, 5:24 PM by WECB640

Friend request from yourself? Watch out for Facebook fakes

Jennifer Jolly, Special for USA Today. Published 8:17 a.m. ET Feb. 17, 2017.
Beware of fake Facebook accounts

Tech columnist Jennifer Jolly explains how to detect and report fake Facebook pages. Jennifer Jolly, Special for USA Today




A few weeks ago, I got a Facebook request from “myself.”  I recognized it right away as a common Facebook cloning scam.

The way it works is simple. Cyber-crooks snag a photo of you, usually right from your own profile page, poach any information you’ve made public, then reach out to all your real friends and family. Once anyone you actually know accepts the fake Facebook friend request or engages with them on Messenger, the scammers typically make a play for money, personal info, or even try to infect your computer or phone with malware.

When the same thing happened to my mom last year, the scammer (pretending to be my mother) hit up my cousin with a sob story asking for money. My cousin texted me instead, and I told him how to report it to Facebook.

A few hours later, another one of her friends received a message from the crook to “click this link to see a great YouTube video you’re in.” She too, smelled a skunk. Had she actually clicked the link, though, it could have infected her computer with malware or a virus, logged her passwords and given hackers the fast track to her bank account, email or store accounts.

Facebook Clone - fake page (Photo: Jennifer Jolly)

Spotting the fakes

So how do you know that a friend request is real vs. fake? Here are a few questions to help you figure that out.

Are they a duplicate? This is the most obvious test for any fake friend, and all you have to do is see if someone with the same name is already friends with you on Facebook. Nobody has any reason to make more than one account, so if your best friend from college is still on your friends list but just sent you another friend request, send it straight into the trash. Then report it.

Check their photos. Okay, so a hacker will probably find a few freebie photos for their profile, but if you dig into their albums their plan totally falls apart. Before you accept a shady friend request, click on their name and go to their profile page. Browse through their photos and albums and see what’s there. If it’s bare, aside from the profile picture, or has just a couple random photos with no comments or likes, you’ve just nabbed a faker.

Frisk their friends list. If someone is targeting you, their fake account is likely just a shell with very little going on. Click on their friends list and see how many they have. If it’s blank, run for the hills, but even if it’s well populated, those could all be fake or spam profiles too, so be sure to check what mutual friends you have in common. If the person isn’t friends with any of your friends, it’s almost certainly a scammer.

Facebook clone - fake page (Photo: Jennifer Jolly)

If you do spot a fake, block, report, and warn your friends. (Facebook also cracks down pretty hard on this kind of shenanigans these days.) From the scammer's main Facebook profile page, you can click the little “more” icon (three little dots in a row) next to their profile picture and then select “Report.” A little menu pops up asking you what you want to report, so select “Report this Profile.” Once you do this, Facebook will know to look at the account and take any actions needed. After you’ve reported, click that little “more” icon again and select “Block” to remove the account from your life forever.

Reporting clone pages on Facebook (Photo: Jennifer Jolly, Special for USA Today)

How to report the scam on Facebook (Photo: Jennifer Jolly, Special for USA TODAY)

Leave the links behind

Even if you’re good about ditching fake friends and ignoring anonymous requests, anyone on Facebook can still send a message to your "Other" inbox. In Facebook Messenger, these pop up as “Message Requests,” and even if someone isn’t your friend, he or she can still send you nasty links and malware without much consequence.

Never ever click on any links you get in these unverified messages, and do your best to avoid interaction with anyone who sends you a chat request out of the blue, even if he or she looks like someone you know. Follow the rules above and verify before you even reply, and if you determine it's a fake, head to the scammer's profile page and block them.

Jennifer Jolly is an Emmy Award-winning consumer tech contributor and host of USA TODAY's digital video show TECH NOW. Follow her on Twitter @JenniferJolly.

X - Study: Facebook can actually make us more narrow-minded

posted Jan 23, 2017, 6:36 AM by WECB640

Study: Facebook can actually make us more narrow-minded


Michela Del Vicario et al., PNAS vol. 113, no. 3, pp. 554–559.

The spreading of misinformation online



The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web is a fruitful environment for the massive diffusion of unverified rumors. In this work, using a massive quantitative analysis of Facebook, we show that information related to distinct narratives––conspiracy theories and scientific news––generates homogeneous and polarized communities (i.e., echo chambers) having similar information consumption patterns. Then, we derive a data-driven percolation model of rumor spreading that demonstrates that homogeneity and polarization are the main determinants for predicting cascades’ size.


The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web (WWW) also allows for the rapid dissemination of unsubstantiated rumors and conspiracy theories that often elicit rapid, large, but naive social responses such as the recent case of Jade Helm 15––where a simple military exercise turned out to be perceived as the beginning of a new civil war in the United States. In this work, we address the determinants governing misinformation spreading through a thorough quantitative analysis. In particular, we focus on how Facebook users consume information related to two distinct narratives: scientific and conspiracy news. We find that, although consumers of scientific and conspiracy stories present similar consumption patterns with respect to content, cascade dynamics differ. Selective exposure to content is the primary driver of content diffusion and generates the formation of homogeneous clusters, i.e., “echo chambers.” Indeed, homogeneity appears to be the primary driver for the diffusion of contents and each echo chamber has its own cascade dynamics. Finally, we introduce a data-driven percolation model mimicking rumor spreading and we show that homogeneity and polarization are the main determinants for predicting cascades’ size.

The massive diffusion of sociotechnical systems and microblogging platforms on the World Wide Web (WWW) creates a direct path from producers to consumers of content, i.e., allows disintermediation, and changes the way users become informed, debate, and form their opinions (1–5). This disintermediated environment can foster confusion about causation, and thus encourage speculation, rumors, and mistrust (6). In 2011 a blogger claimed that global warming was a fraud designed to diminish liberty and weaken democracy (7). Misinformation about the Ebola epidemic has caused confusion among healthcare workers (8). Jade Helm 15, a simple military exercise, was perceived on the Internet as the beginning of a new civil war in the United States (9).

Recent works (10–12) have shown that increasing the exposure of users to unsubstantiated rumors increases their tendency to be credulous.

According to ref. 13, belief formation and revision are influenced by the way communities attempt to make sense of events or facts. Such a phenomenon is particularly evident on the WWW, where users, embedded in homogeneous clusters (14–16), process information through a shared system of meaning (10, 11, 17, 18) and trigger collective framing of narratives that are often biased toward self-confirmation.

In this work, through a thorough quantitative analysis on a massive dataset, we study the determinants behind misinformation diffusion. In particular, we analyze the cascade dynamics of Facebook users when the content is related to very distinct narratives: conspiracy theories and scientific information. On the one hand, conspiracy theories simplify causation, reduce the complexity of reality, and are formulated in a way that is able to tolerate a certain level of uncertainty (19–21). On the other hand, scientific information disseminates scientific advances and exhibits the process of scientific thinking. Notice that we do not focus on the quality of the information but rather on the possibility of verification. Indeed, the main difference between the two is content verifiability. The generators of scientific information and their data, methods, and outcomes are readily identifiable and available. The origins of conspiracy theories are often unknown and their content is strongly disengaged from mainstream society and sharply divergent from recommended practices (22), e.g., the belief that vaccines cause autism.

Massive digital misinformation is becoming pervasive in online social media to the extent that it has been listed by the World Economic Forum (WEF) as one of the main threats to our society (23). To counteract this trend, algorithmic-driven solutions have been proposed (24–29), e.g., Google (30) is developing a trustworthiness score to rank the results of queries. Similarly, Facebook has proposed a community-driven approach where users can flag false content to correct the newsfeed algorithm. This issue is controversial, however, because it raises fears that the free circulation of content may be threatened and that the proposed algorithms may not be accurate or effective (10, 11, 31). Often conspiracists will denounce attempts to debunk false information as acts of misinformation.

Whether a claim (either substantiated or not) is accepted by an individual is strongly influenced by social norms and by the claim’s coherence with the individual’s belief system, i.e., confirmation bias (32, 33). Many mechanisms animate the flow of false information that generates false beliefs in an individual, which, once adopted, are rarely corrected (34–37).

In this work we provide important insights toward the understanding of cascade dynamics in online social media and in particular about misinformation spreading.

We show that content-selective exposure is the primary driver of content diffusion and generates the formation of homogeneous clusters, i.e., “echo chambers” (10, 11, 38, 39). Indeed, our analysis reveals that two well-formed and highly segregated communities exist around conspiracy and scientific topics. We also find that although consumers of scientific information and conspiracy theories exhibit similar consumption patterns with respect to content, the cascade patterns of the two differ. Homogeneity appears to be the preferential driver for the diffusion of content, yet each echo chamber has its own cascade dynamics. To account for these features we provide an accurate data-driven percolation model of rumor spreading showing that homogeneity and polarization are the main determinants for predicting cascade size.

The paper is structured as follows. First we provide the preliminary definitions and details concerning data collection. We then provide a comparative analysis and characterize the statistical signatures of cascades of the different kinds of content. Finally, we introduce a data-driven model that replicates the analyzed cascade dynamics.


Ethics Statement.

Approval and informed consent were not needed because the data collection process has been carried out using the Facebook Graph application program interface (API) (40), which is publicly available. For the analysis (according to the specification settings of the API) we only used publicly available data (thus users with privacy restrictions are not included in the dataset). The pages from which we download data are public Facebook entities and can be accessed by anyone. User content contributing to these pages is also public unless the user’s privacy settings specify otherwise, and in that case it is not available to us.

Data Collection.

Debate about social issues continues to expand across the Web, and unprecedented social phenomena such as the massive recruitment of people around common interests, ideas, and political visions are emerging. Using the approach described in ref. 10, we define the space of our investigation with the support of diverse Facebook groups that are active in the debunking of misinformation.

The resulting dataset is composed of 67 public pages divided between 32 about conspiracy theories and 35 about science news. A second set, composed of two troll pages, is used as a benchmark to fit our data-driven model. The first category (conspiracy theories) includes the pages that disseminate alternative, controversial information, often lacking supporting evidence and frequently advancing conspiracy theories. The second category (science news) includes the pages that disseminate scientific information. The third category (trolls) includes those pages that intentionally disseminate sarcastic false information on the Web with the aim of mocking the collective credulity online.

For the three sets of pages we download all of the posts (and their respective user interactions) across a 5-y time span (2010–2014). We perform the data collection process by using the Facebook Graph API (40), which is publicly available and accessible through any personal Facebook user account. The exact breakdown of the data is presented in SI Appendix, section 1.

Preliminaries and Definitions.

A tree is an undirected simple graph that is connected and has no simple cycles. An oriented tree is a directed acyclic graph whose underlying undirected graph is a tree. A sharing tree, in the context of our research, is an oriented tree made up of the successive sharing of a news item through the Facebook system. The root of the sharing tree is the node that performs the first share. We define the size of the sharing tree as the number of nodes (and hence the number of news sharers) in the tree and the height of the sharing tree as the maximum path length from the root.
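As a concrete illustration (our own sketch, not code from the paper), a sharing tree can be stored as child-to-parent links, with size and height computed exactly as defined above:

```python
def tree_stats(parent):
    """Size and height of a sharing tree.

    parent: dict mapping each node to the node it reshared from;
    the root (the first sharer) maps to None.
    """
    def depth(node):
        steps = 0
        while parent[node] is not None:  # walk up toward the root
            node = parent[node]
            steps += 1
        return steps

    size = len(parent)                       # number of sharers in the cascade
    height = max(depth(n) for n in parent)   # longest path from the root
    return size, height

# Node 0 posts first; 1 and 2 reshare from 0; 3 reshares from 2.
print(tree_stats({0: None, 1: 0, 2: 0, 3: 2}))  # (4, 2)
```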

We define the user polarization σ = 2ϱ − 1, where 0 ≤ ϱ ≤ 1 is the fraction of “likes” a user puts on conspiracy-related content, and hence −1 ≤ σ ≤ 1. From user polarization, we define the edge homogeneity, for any edge e_ij between nodes i and j, as

σ_ij = σ_i σ_j,

with −1 ≤ σ_ij ≤ 1. Edge homogeneity reflects the similarity level between the polarization of the two sharing nodes. A link in the sharing tree is homogeneous if its edge homogeneity is positive. We then define a sharing path to be any path from the root to one of the leaves of the sharing tree. A homogeneous path is a sharing path for which the edge homogeneity of each edge is positive, i.e., a sharing path composed only of homogeneous links.
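In code, polarization and edge homogeneity are one-liners (hypothetical helpers of ours, following the definitions above):

```python
def polarization(conspiracy_likes, total_likes):
    """sigma = 2*rho - 1, where rho is the fraction of 'likes'
    a user puts on conspiracy-related content."""
    rho = conspiracy_likes / total_likes
    return 2 * rho - 1

def edge_homogeneity(sigma_i, sigma_j):
    """Positive when both sharers lean the same way (homogeneous link)."""
    return sigma_i * sigma_j

sigma_a = polarization(9, 10)   # mostly conspiracy likes: sigma = 0.8
sigma_b = polarization(1, 10)   # mostly science likes: sigma = -0.8
print(edge_homogeneity(sigma_a, sigma_a) > 0)  # True: homogeneous link
print(edge_homogeneity(sigma_a, sigma_b) > 0)  # False: mixed link
```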

Results and Discussion

Anatomy of Cascades.

We begin our analysis by characterizing the statistical signature of cascades as they relate to information type. We analyze the three types—science news, conspiracy rumors, and trolling—and find that size and maximum degree are power-law distributed for all three categories. The maximum cascade size values are 952 for science news, 2,422 for conspiracy news, and 3,945 for trolling, and the estimated exponents γ for the power-law distributions are 2.21 for science news, 2.47 for conspiracy, and 2.44 for trolling posts. Tree height values range from 1 to 5, with a maximum height of 5 for science news and conspiracy theories and a maximum height of 4 for trolling. The resulting network is very dense. Notice that such a feature weakens the role of hubs in rumor-spreading dynamics. For further information see SI Appendix, section 2.1.

Fig. 1 shows the probability density function (PDF) of the cascade lifetime (using hours as time units) for science and conspiracy. We compute the lifetime as the length of time between the first user and the last user sharing a post. In both categories we find a first peak at ∼1–2 h and a second at ∼20 h, indicating that the temporal sharing patterns are similar irrespective of the difference in topic. We also find that a significant percentage of the information diffuses rapidly (24.42% of the science news and 20.76% of the conspiracy rumors diffuse in less than 2 h, and 39.45% of science news and 40.78% of conspiracy theories in less than 5 h). Only 26.82% of the diffusion of science news and 17.79% of conspiracy lasts more than 1 d.
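The lifetime measure itself is simple to compute; a minimal sketch (function name and timestamps are ours, purely illustrative):

```python
def lifetime_hours(share_times):
    """Cascade lifetime: hours between the first and last share of a post.

    share_times: Unix timestamps (seconds) of every share in the cascade.
    """
    return (max(share_times) - min(share_times)) / 3600.0

# Three shares of one post; the first and last are 90 minutes apart.
shares = [1_400_000_000, 1_400_001_800, 1_400_005_400]
print(lifetime_hours(shares))  # 1.5
```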

Fig. 1.

PDF of lifetime computed on science news and conspiracy theories, where the lifetime is here computed as the temporal distance (in hours) between the first and last share of a post. Both categories show a similar behavior.

In Fig. 2 we show the lifetime as a function of the cascade size. For science news we have a peak in the lifetime corresponding to a cascade size value of 200, and higher cascade size values correspond to high lifetime variability. For conspiracy-related content the lifetime increases with cascade size.

Fig. 2.

Lifetime as a function of the cascade size for conspiracy news (Left) and science news (Right). Science news quickly reaches a higher diffusion; a longer lifetime does not correspond to a higher level of interest. Conspiracy rumors are assimilated more slowly and show a positive relation between lifetime and size.

These results suggest that news assimilation differs according to the categories. Science news is usually assimilated, i.e., it reaches a higher level of diffusion quickly, and a longer lifetime does not correspond to a higher level of interest. Conversely, conspiracy rumors are assimilated more slowly and show a positive relation between lifetime and size. For both science and conspiracy news, we compute the size as a function of the lifetime and confirm that differentiation in the sharing patterns is content-driven, and that for conspiracy there is a positive relation between size and lifetime (see SI Appendix, section 2.1 for further details).

Homogeneous Clusters.

We next examine the social determinants that drive sharing patterns and we focus on the role of homogeneity in friendship networks.

Fig. 3 shows the PDF of the mean-edge homogeneity, computed for all cascades of science news and conspiracy theories. It shows that the majority of links between consecutively sharing users are homogeneous. In particular, the average edge homogeneity value of the entire sharing cascade is always greater than or equal to zero, indicating that either the information transmission occurs inside homogeneous clusters in which all links are homogeneous, or it occurs inside mixed neighborhoods in which the balance between homogeneous and nonhomogeneous links favors the former. Moreover, the probability of a mean-edge homogeneity close to zero is quite small. Contents tend to circulate only inside the echo chamber.

Fig. 3.

PDF of edge homogeneity for science (orange) and conspiracy (blue) news. Homogeneity paths are dominant on the whole cascades for both scientific and conspiracy news.

Hence, to further characterize the role of homogeneity in shaping sharing cascades, we compute cascade size as a function of mean-edge homogeneity for both science and conspiracy news (Fig. 4). In science news, higher levels of mean-edge homogeneity in the interval (0.5, 0.8) correspond to larger cascades, but in conspiracy theories lower levels of mean-edge homogeneity (0.25) correspond to larger cascades. Notice that, although viral patterns related to distinct contents differ, homogeneity is clearly the driver of information diffusion. In other words, different contents generate different echo chambers, characterized by a high level of homogeneity inside them. The PDF of the edge homogeneity, computed for science and conspiracy news as well as the two taken together—both in the unconditional case and in the conditional case (in the event that the user that made the first share in the couple has a positive or negative polarization)—confirms the roughly null probability of a negative edge homogeneity (SI Appendix, section 2.1).

Fig. 4.

Cascade size as a function of edge homogeneity for science (orange) and conspiracy (dashed blue) news.

We record the complementary cumulative distribution function (CCDF) of the number of all sharing paths* on each tree compared with the CCDF of the number of homogeneous paths for science and conspiracy news, and the two together. A Kolmogorov–Smirnov test and Q-Q plots confirm that for all three pairs of distributions considered there is no significant statistical difference (see SI Appendix, section 2.2 for more details). We confirm the pervasiveness of homogeneous paths.

Indeed, cascades’ lifetimes of science and conspiracy news exhibit a probability peak in the first 2 h, and then in the following hours they rapidly decrease. Despite the similar consumption patterns, cascade lifetime expressed as a function of the cascade size differs greatly for the different content sets. However, homogeneity remains the main driver of cascades’ propagation. The distributions of the number of total and homogeneous sharing paths are very similar for both content categories. Viral patterns related to contents belonging to different narratives differ, but homogeneity is the primary driver of content diffusion.

The Model.

Our findings show that users mostly tend to select and share content according to a specific narrative and to ignore the rest. This suggests that the determinant for the formation of echo chambers is confirmation bias. To model this mechanism we now introduce a percolation model of rumor spreading that accounts for homogeneity and polarization. We consider n users connected by a small-world network (41) with rewiring probability r. Every node has an opinion ω_i, i ∈ {1, …, n}, uniformly distributed in [0, 1] and is exposed to m news items with content ϑ_j, j ∈ {1, …, m}, uniformly distributed in [0, 1]. At each step the news items are diffused, initially being shared by a group of first sharers. After the first step, the news recursively passes to the neighborhoods of the previous step’s sharers, e.g., those of the first sharers during the second step. If a friend of a previous-step sharer has an opinion close to the fitness of the news, then she shares the news again.

When |ω_i − ϑ_j| ≤ δ, user i shares news j; δ is the sharing threshold.

Because δ by itself cannot capture the homogeneous clusters observed in the data, we model the connectivity pattern as a signed network (4, 42), considering different fractions of homogeneous links and hence restricting the diffusion of news to homogeneous links only. We define ϕ_HL as the fraction of homogeneous links in the network, M as the number of total links, and n_h as the number of homogeneous links; thus, we have

ϕ_HL = n_h / M, 0 ≤ n_h ≤ M.

Notice that 0 ≤ ϕ_HL ≤ 1 and that 1 − ϕ_HL, the fraction of nonhomogeneous links, is complementary to ϕ_HL. In particular, we can reduce the parameter space to ϕ_HL ∈ [0.5, 1], as we may restrict our attention to either one of the two complementary clusters.
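A minimal, self-contained sketch of these dynamics (our own toy implementation, not the authors’ code; the Watts–Strogatz construction is homemade and all parameter values here are illustrative, not the paper’s fits): users hold opinions ω_i, a news item has fitness θ, only a fraction ϕ_HL of links transmit, and sharing requires |ω_i − θ| ≤ δ.

```python
import random

def small_world(n, k, r, rng):
    """Ring lattice (k neighbors per side), each edge rewired with prob. r."""
    edges = set()
    for i in range(n):
        for off in range(1, k + 1):
            j = (i + off) % n
            if rng.random() < r:          # rewiring step of Watts-Strogatz
                j = rng.randrange(n)
            if j != i:
                edges.add((min(i, j), max(i, j)))
    return edges

def cascade_size(n=200, k=4, r=0.01, delta=0.05, phi_hl=0.6, seed=1):
    rng = random.Random(seed)
    omega = [rng.random() for _ in range(n)]     # user opinions in [0, 1]
    theta = rng.random()                         # fitness of one news item
    # keep only a fraction phi_HL of links as homogeneous (transmitting)
    neigh = {i: set() for i in range(n)}
    for a, b in small_world(n, k, r, rng):
        if rng.random() < phi_hl:
            neigh[a].add(b)
            neigh[b].add(a)
    # one first sharer, then recursive re-sharing along homogeneous links
    eligible = {i for i in range(n) if abs(omega[i] - theta) <= delta}
    frontier = set(list(eligible)[:1])
    shared = set(frontier)
    while frontier:
        frontier = {v for u in frontier for v in neigh[u]
                    if v not in shared and abs(omega[v] - theta) <= delta}
        shared |= frontier
    return len(shared)

print(cascade_size())  # cascade size grows with delta and phi_HL
```

Restricting transmission to the kept links is what confines a news item to one signed cluster, mirroring the echo-chamber behavior described above.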

The model can be seen as a branching process where the sharing threshold δ and the neighborhood dimension z are the key parameters. More formally, let the fitness θ_j of the jth news item and the opinion ω_i of the ith user be uniformly independent and identically distributed (i.i.d.) in [0, 1]. Then the probability p that a user i shares a post j is

p = min(1, θ + δ) − max(0, θ − δ) ≈ 2δ,

because θ and ω are uniformly i.i.d. In general, if ω and θ have distributions f(ω) and f(θ), then p will depend on θ:

p_θ = f(θ) ∫_{max(0, θ−δ)}^{min(1, θ+δ)} f(ω) dω.

If we are on a tree of degree z (or on a sparse lattice of degree z + 1), the average number of sharers (the branching ratio) is defined by

μ = zp ≈ 2δz,

with a critical cascade size S = (1 − μ)^−1. If we assume that the distribution of the number m of first sharers is f(m), then the average cascade size is

S = Σ_m f(m) m (1 − μ)^−1 = ⟨m⟩_f (1 − μ)^−1 ≈ ⟨m⟩_f (1 − 2δz)^−1,

where ⟨m⟩_f = Σ_m m f(m) is the average with respect to f. In the simulations we fixed the neighborhood dimension z = 8, because the branching ratio μ depends upon the product of z and δ and, without loss of generality, we can consider the variation of just one of them.
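As a quick sanity check of the closed form (our arithmetic, using the z = 8 fixed in the simulations together with the best-fit δ = 0.015 and the IG(18.73, 9.63) first-sharer distribution reported for the data):

```python
z, delta = 8, 0.015          # neighborhood dimension and sharing threshold
p = 2 * delta                # sharing probability for uniform opinions, p ~ 2*delta
mu = z * p                   # branching ratio mu = z*p = 2*delta*z
mean_first = 18.73           # mean of the fitted IG(18.73, 9.63) first sharers
S = mean_first / (1 - mu)    # average cascade size <m>_f / (1 - mu)
print(mu, round(S, 1))       # 0.24 24.6
```

The resulting S ≈ 24.6 is of the same order as the mean cascade size the fitted simulations report, which is what the approximation promises.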

If we allow a probability q that a neighbor of a user has a different polarization, then the branching ratio becomes μ = z(1 − q)p. If a lattice has a degree distribution d(k) (k = z + 1), we can then assume a usual percolation process with a critical branching ratio that is linear in ⟨k²⟩_d/⟨k⟩_d (μ ≈ (1 − q) p ⟨z²⟩/⟨z⟩).

Simulation Results.

We explore the model parameters space using n=5,000 nodes and m=1,000 news items with the number of first sharers distributed as (i) inverse Gaussian, (ii) log normal, (iii) Poisson, (iv) uniform distribution, and as the real-data distribution (from the science and conspiracy news sample). In Table 1 we show a summary of relevant statistics (min value, first quantile, median, mean, third quantile, and max value) to compare the real-data first sharers distribution with the fitted distributions.

Table 1.

Summary of relevant statistics comparing synthetic data with the real ones

Along with the first-sharer distribution, we vary the sharing threshold δ in the interval [0.01, 0.05] and the fraction of homogeneous links ϕ_HL in the interval [0.5, 1]. To avoid biases induced by statistical fluctuations in the stochastic process, each point of the parameter space is averaged over 100 iterations. ϕ_HL ≈ 0.5 provides a good estimate of real-data values. In particular, consistent with the division into two echo chambers (science and conspiracy), the network is divided into two clusters in which news items remain and are transmitted solely within each community’s echo chamber (see SI Appendix, section 3.2 for the details of the simulation results).

In addition to the science and conspiracy content sharing trees, we downloaded a set of 1,072 sharing trees of intentionally false information from troll pages. Troll information, e.g., parodies of conspiracy theories such as the claim that chem-trails contain the active principle of Viagra, is frequently picked up and shared by habitual conspiracy theory consumers. We computed the mean and SD of the size and height of all trolling sharing trees, and reproduced the data using our model. We used fixed parameters from the trolling-message sample (the number of nodes in the system and the number of news items) and varied the fraction of homogeneous links ϕ_HL, the rewiring probability r, and the sharing threshold δ. See SI Appendix, section 3.2 for the distribution of first sharers used and for additional simulation results of the fit on trolling messages.

We simulated the model dynamics with the best combination of parameters obtained from the simulations and the number of first sharers distributed as an inverse Gaussian. Fig. 5 shows the CCDF of cascades’ size and the cumulative distribution function (CDF) of their height. A summary of relevant statistics (min value, first quantile, median, mean, third quantile, and max value) comparing the real-data size and height distributions with the fitted ones is reported in SI Appendix, section 3.2.

Fig. 5.

CCDF of size (Left) and CDF of height (Right) for the best parameter combination that fits real-data values, (ϕ_HL, r, δ) = (0.56, 0.01, 0.015), and first sharers distributed as IG(18.73, 9.63).

We find that the inverse Gaussian is the distribution that best fits the data both for science and conspiracy news, and for troll messages. For this reason, we performed one more simulation using the inverse Gaussian as distribution of the number of first sharers, 1,072 news items, 16,889 users, and the best parameters combination obtained in the simulations.§ The CCDF of size and the CDF of height for the above parameters combination, as well as basic statistics considered, fit real data well.
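The paper does not publish its sampling code; one standard way to draw inverse-Gaussian first-sharer counts is the Michael–Schucany–Haas transform (our sketch below, assuming the usual mean/shape parametrization for IG(18.73, 9.63)):

```python
import math
import random

def sample_inverse_gaussian(mu, lam, rng):
    """One draw from IG(mu, lam) via the Michael-Schucany-Haas transform.

    mu is the mean and lam the shape parameter; this is our assumption
    about the parametrization of the paper's IG(18.73, 9.63).
    """
    nu = rng.gauss(0.0, 1.0)
    y = nu * nu
    x = (mu + (mu * mu * y) / (2.0 * lam)
         - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + mu * mu * y * y))
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x

rng = random.Random(42)
# Number of first sharers for each simulated news item (at least 1 each).
first_sharers = [max(1, round(sample_inverse_gaussian(18.73, 9.63, rng)))
                 for _ in range(1000)]
```

Over many draws the sample mean sits near 18.73, matching the fitted first-sharer distribution used in the simulations.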


Digital misinformation has become so pervasive in online social media that it has been listed by the WEF as one of the main threats to human society. Whether a news item, either substantiated or not, is accepted as true by a user may be strongly affected by social norms or by how much it coheres with the user’s system of beliefs (32, 33). Many mechanisms cause false information to gain acceptance, which in turn generates false beliefs that, once adopted by an individual, are highly resistant to correction (34–37). In this work, using extensive quantitative analysis and data-driven modeling, we provide important insights toward the understanding of the mechanism behind rumor spreading. Our findings show that users mostly tend to select and share content related to a specific narrative and to ignore the rest. In particular, we show that social homogeneity is the primary driver of content diffusion, and one frequent result is the formation of homogeneous, polarized clusters. Most of the time the information is taken from a friend having the same profile (polarization), i.e., belonging to the same echo chamber.

We also find that although consumers of science news and conspiracy theories show similar consumption patterns with respect to content, their cascades differ.

Our analysis shows that for science and conspiracy news a cascade’s lifetime has a probability peak in the first 2 h, followed by a rapid decrease. Although the consumption patterns are similar, cascade lifetime as a function of the size differs greatly.

These results suggest that news assimilation differs according to category. Science news is usually assimilated quickly, i.e., it rapidly reaches a high level of diffusion, and a longer lifetime does not correspond to a higher level of interest. Conversely, conspiracy rumors are assimilated more slowly and show a positive relation between lifetime and size.

The PDF of the mean-edge homogeneity indicates that homogeneity is present in the linking step of sharing cascades. The distributions of the number of total sharing paths and homogeneous sharing paths are similar in both content categories.

Viral patterns related to distinct contents are different but homogeneity drives content diffusion. To mimic these dynamics, we introduce a simple data-driven percolation model of signed networks, i.e., networks composed of signed edges accounting for nodes preferences toward specific contents. Our model reproduces the observed dynamics with high accuracy.

Users tend to aggregate in communities of interest, which causes reinforcement and fosters confirmation bias, segregation, and polarization. This comes at the expense of the quality of the information and leads to proliferation of biased narratives fomented by unsubstantiated rumors, mistrust, and paranoia.

In this setting, algorithmic solutions do not seem to be the best option for breaking such symmetry. The next envisioned step of our research is to study efficient communication strategies accounting for the social and cognitive determinants behind massive digital misinformation.


Special thanks go to Delia Mocanu, “Protesi di Protesi di Complotto,” “Che vuol dire reale,” “La menzogna diventa verita e passa alla storia,” “Simply Humans,” “Semplicemente me,” Salvatore Previti, Elio Gabalo, Sandro Forgione, Francesco Pertini, and “The rooster on the trash” for their valuable suggestions and discussions. Funding for this work was provided by the EU FET Project MULTIPLEX, 317532, SIMPOL, 610704, the FET Project DOLFINS 640772, SoBigData 654024, and CoeGSS 676547.


  • Author contributions: M.D.V., A.B., F.Z., A.S., G.C., H.E.S., and W.Q. designed research; M.D.V., A.B., F.Z., H.E.S., and W.Q. performed research; M.D.V., A.B., F.Z., F.P., and W.Q. contributed new reagents/analytic tools; M.D.V., A.B., F.Z., A.S., G.C., H.E.S., and W.Q. analyzed data; and M.D.V., A.B., F.Z., A.S., G.C., H.E.S., and W.Q. wrote the paper.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission. M.P. is a guest editor invited by the Editorial Board.

  • *Recall that a sharing path is here defined as any path from the root to one of the leaves of the sharing tree. A homogeneous path is a sharing path for which the edge homogeneity of each edge is positive.

  • For details on the parameters of the fitted distributions used, see SI Appendix, section 3.2.

  • Note that the real-data values for the mean (and SD) of size and height on the troll posts are, respectively, 23.54 (122.32) and 1.78 (0.73).

  • §The best parameter combination is ϕ_HL = 0.56, r = 0.01, δ = 0.015. In this case we have a mean size equal to 23.42 (33.43) and a mean height of 1.28 (0.88), which is indeed a good approximation; see SI Appendix, section 3.2.

  • This article contains supporting information online at

Freely available online through the PNAS open access option.


    1. Brown J, Broderick AJ, Lee N (2007) Word of mouth communication within online communities: Conceptualizing the online social network. J Interact Market 21(3):2–20.
    2. Kahn R, Kellner D (2004) New media and internet activism: From the “battle of Seattle” to blogging. New Media Soc 6(1):87–95.
    3. Quattrociocchi W, Conte R, Lodi E (2011) Opinions manipulation: Media, power and gossip. Adv Complex Syst 14(4):567–586.
    4. Quattrociocchi W, Caldarelli G, Scala A (2014) Opinion dynamics on interacting networks: Media competition and social influence. Sci Rep 4:4938.
    5. Kumar R, Mahdian M, McGlohon M (2010) Dynamics of conversations. Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, New York), pp 553–562.
    6. Sunstein C, Vermeule A (2009) Conspiracy theories: Causes and cures. J Polit Philos 17(2):202–227.
    7. Kadlec C (2011) The goal is power: The global warming conspiracy. Forbes, July 25, 2011. Accessed August 21, 2015.
    8. Millman J (2014) The inevitable rise of Ebola conspiracy theories. The Washington Post, Oct. 13, 2014. Accessed August 31, 2015.
    9. Lamothe D (2015) Remember Jade Helm 15, the controversial military exercise? It’s over. The Washington Post, Sept. 14, 2015. Accessed September 20, 2015.
    10. Bessi A, et al. (2015) Science vs conspiracy: Collective narratives in the age of misinformation. PLoS One 10(2):e0118093.
    11. Mocanu D, Rossi L, Zhang Q, Karsai M, Quattrociocchi W (2015) Collective attention in the age of (mis)information. Comput Human Behav 51:1198–1204.
    12. Bessi A, Scala A, Rossi L, Zhang Q, Quattrociocchi W (2014) The economy of attention in the age of (mis)information. J Trust Manage 1(1):1–13.
    13. Furedi F (2006) Culture of Fear Revisited (Bloomsbury, London).
    14. Aiello LM, et al. (2012) Friendship prediction and homophily in social media. ACM Trans Web 6(2):9.
    15. Gu B, Konana P, Raghunathan R, Chen HM (2014) Research note: The allure of homophily in social media: Evidence from investor responses on virtual communities. Inf Syst Res 25(3):604–617.
    16. Bessi A, et al. (2015) Viral misinformation: The role of homophily and polarization. Proceedings of the 24th International Conference on World Wide Web Companion (International World Wide Web Conferences Steering Committee, Florence, Italy), pp 355–356.
    17. Bessi A, et al. (2015) Trend of narratives in the age of misinformation. PLoS One 10(8):e0134641.
    18. Zollo F, et al. (2015) Emotional dynamics in the age of misinformation. PLoS One 10(9):e0138740.
    19. Byford J (2011) Conspiracy Theories: A Critical Introduction (Palgrave Macmillan, London).
    20. Fine GA, Campion-Vincent V, Heath C, eds (2005) Rumor Mills: The Social Impact of Rumor and Legend (Aldine Transaction, New Brunswick, NJ), pp 103–122.
    21. Hogg MA, Blaylock DL (2011) Extremism and the Psychology of Uncertainty (John Wiley & Sons, Chichester, UK), Vol 8.
    22. Betsch C, Sachse K (2013) Debunking vaccination myths: Strong risk negations can increase perceived vaccination risks. Health Psychol 32(2):146–155.
    23. Howell L (2013) Digital wildfires in a hyperconnected world. WEF Report 2013. Accessed August 31, 2015.
    24. Qazvinian V, Rosengren E, Radev DR, Mei Q (2011) Rumor has it: Identifying misinformation in microblogs. Proceedings of the Conference on Empirical Methods in Natural Language Processing (Association for Computational Linguistics, Stroudsburg, PA), pp 1589–1599.
    25. Ciampaglia GL, et al. (2015) Computational fact checking from knowledge networks. arXiv:1501.03471.
    26. Resnick P, Carton S, Park S, Shen Y, Zeffer N (2014) RumorLens: A system for analyzing the impact of rumors and corrections in social media. Proceedings of the Computational Journalism Conference (ACM, New York).
    27. Gupta A, Kumaraguru P, Castillo C, Meier P (2014) TweetCred: Real-time credibility assessment of content on Twitter. Social Informatics (Springer, Berlin), pp 228–243.
    28. Al Mansour AA, Brankovic L, Iliopoulos CS (2014) A model for recalibrating credibility in different contexts and languages: A Twitter case study. Int J Digital Inf Wireless Commun 4(1):53–62.
    29. Ratkiewicz J, et al. (2011) Detecting and tracking political abuse in social media. Proceedings of the 5th International AAAI Conference on Weblogs and Social Media (AAAI, Palo Alto, CA).
    30. Dong XL, et al. (2015) Knowledge-based trust: Estimating the trustworthiness of web sources. Proc VLDB Endowment 8(9):938–949.
    31. Nyhan B, Reifler J, Richey S, Freed GL (2014) Effective messages in vaccine promotion: A randomized trial. Pediatrics 133(4):e835–e842.
    32. Zhu B, et al. (2010) Individual differences in false memory from misinformation: Personality characteristics and their interactions with cognitive abilities. Pers Individ Dif 48(8):889–894.
    33. Frenda SJ, Nichols RM, Loftus EF (2011) Current issues and advances in misinformation research. Curr Dir Psychol Sci 20(1):20–23.
    34. Kelly GR, Weeks BE (2013) The promise and peril of real-time corrections to political misperceptions. Proceedings of the 2013 Conference on Computer Supported Cooperative Work (ACM, New York), pp 1047–1058.
    35. Meade ML, Roediger HL 3rd (2002) Explorations in the social contagion of memory. Mem Cognit 30(7):995–1009.
    36. Koriat A, Goldsmith M, Pansky A (2000) Toward a psychology of memory accuracy. Annu Rev Psychol 51(1):481–537.
    37. Ayers MS, Reder LM (1998) A theoretical review of the misinformation effect: Predictions from an activation-based memory model. Psychon Bull Rev 5(1):1–21.
    38. Sunstein C (2001) Echo Chambers (Princeton Univ Press, Princeton, NJ).
    39. Kelly GR (2009) Echo chambers online?: Politically motivated selective exposure among internet news users. J Comput Mediat Commun 14(2):265–285.
    40. Facebook (2015) Using the Graph API. Accessed December 19, 2015.
    41. Watts DJ, Strogatz SH (1998) Collective dynamics of ‘small-world’ networks. Nature 393(6684):440–442.
    42. Leskovec J, Huttenlocher D, Kleinberg J (2010) Signed networks in social media. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM, New York), pp 1361–1370.


    W - Homeland Security report on 'Avalanche' cybercrime ring

    posted Dec 2, 2016, 3:01 PM by WECB640

    United States Computer Emergency Readiness Team (US-CERT)

    Alert (TA16-336A)

    Avalanche (crimeware-as-a-service infrastructure)

    Original release date: December 01, 2016 | Last revised: December 02, 2016

    Systems Affected

    Microsoft Windows


    “Avalanche” refers to a large global network hosting infrastructure used by cyber criminals to conduct phishing and malware distribution campaigns and money mule schemes. The United States Department of Homeland Security (DHS), in collaboration with the Federal Bureau of Investigation (FBI), is releasing this Technical Alert to provide further information about Avalanche.


    Cyber criminals utilized Avalanche botnet infrastructure to host and distribute a variety of malware variants to victims, including the targeting of over 40 major financial institutions. Victims may have had their sensitive personal information stolen (e.g., user account credentials). Victims’ compromised systems may also have been used to conduct other malicious activity, such as launching denial-of-service (DoS) attacks or distributing malware variants to other victims’ computers.

    In addition, Avalanche infrastructure was used to run money mule schemes where criminals recruited people to commit fraud involving transporting and laundering stolen money or merchandise.

    Avalanche used fast-flux DNS, a technique to hide the criminal servers, behind a constantly changing network of compromised systems acting as proxies.

    The following malware families were hosted on the infrastructure:

    • Windows-encryption Trojan horse (WVT) (aka Matsnu, Injector, Rannoh, Ransomlock.P)
    • URLzone (aka Bebloh)
    • Citadel
    • VM-ZeuS (aka KINS)
    • Bugat (aka Feodo, Geodo, Cridex, Dridex, Emotet)
    • newGOZ (aka GameOverZeuS)
    • Tinba (aka TinyBanker)
    • Nymaim/GozNym
    • Vawtrak (aka Neverquest)
    • Marcher
    • Pandabanker
    • Ranbyus
    • Smart App
    • TeslaCrypt
    • Trusteer App
    • Xswkit

    Avalanche was also used as a fast-flux botnet, providing communication infrastructure for other botnets, including the following:

    • TeslaCrypt
    • Nymaim
    • Corebot
    • GetTiny
    • Matsnu
    • Rovnix
    • Urlzone
    • QakBot (aka Qbot, PinkSlip Bot)


    A system infected with Avalanche-associated malware may be subject to malicious activity including the theft of user credentials and other sensitive data, such as banking and credit card information. Some of the malware had the capability to encrypt user files and demand a ransom be paid by the victim to regain access to those files. In addition, the malware may have allowed criminals unauthorized remote access to the infected computer. Infected systems could have been used to conduct distributed denial-of-service (DDoS) attacks.


    Users are advised to take the following actions to remediate malware infections associated with Avalanche:

    • Use and maintain anti-virus software – Anti-virus software recognizes and protects your computer against most known viruses. Even though parts of Avalanche are designed to evade detection, security companies are continuously updating their software to counter these advanced threats. Therefore, it is important to keep your anti-virus software up-to-date. If you suspect you may be a victim of Avalanche malware, update your anti-virus software definitions and run a full-system scan. (See Understanding Anti-Virus Software for more information.)
    • Avoid clicking links in email – Attackers have become very skilled at making phishing emails look legitimate. Users should verify that a link is legitimate by typing the address into a new browser window. (See Avoiding Social Engineering and Phishing Attacks for more information.)
    • Change your passwords – Your original passwords may have been compromised during the infection, so you should change them. (See Choosing and Protecting Passwords for more information.)
    • Keep your operating system and application software up-to-date – Install software patches so that attackers cannot take advantage of known problems or vulnerabilities. You should enable automatic updates of the operating system if this option is available. (See Understanding Patches for more information.)
    • Use anti-malware tools – Using a legitimate program that identifies and removes malware can help eliminate an infection. Users can consider employing a remediation tool. A non-exhaustive list of examples is provided below. The U.S. Government does not endorse or support any particular product or vendor.

    • ESET Online Scanner
    • McAfee Stinger
    • Microsoft Safety Scanner
    • Norton Power Eraser
    • Trend Micro HouseCall


    • December 1, 2016: Initial release
    • December 2, 2016: Added TrendMicro Scanner

    This product is provided subject to this Notification and this Privacy & Use policy.

    V - Internet Attack Spreads, Disrupting Major Websites

    posted Oct 21, 2016, 6:20 PM by WECB640

    A map of the areas experiencing problems as of Friday afternoon.

    SAN FRANCISCO — Major websites were inaccessible to people across wide swaths of the United States on Friday after a company that manages crucial parts of the internet’s infrastructure said it was under attack.

    Users reported sporadic problems reaching several websites, including Twitter, Netflix, Spotify, Airbnb, Reddit, Etsy, SoundCloud and The New York Times.

    The company, Dyn, whose servers monitor and reroute internet traffic, said it began experiencing what security experts called a distributed denial-of-service attack just after 7 a.m. Reports that many sites were inaccessible started on the East Coast, but spread westward in three waves as the day wore on and into the evening.

    And in a troubling development, the attack appears to have relied on hundreds of thousands of internet-connected devices like cameras, baby monitors and home routers that have been infected — without their owners’ knowledge — with software that allows hackers to command them to flood a target with overwhelming traffic.


    A spokeswoman said the Federal Bureau of Investigation and the Department of Homeland Security were looking into the incident and all potential causes, including criminal activity and a nation-state attack.

    Kyle York, Dyn’s chief strategist, said his company and others that host the core parts of the internet’s infrastructure were targets for a growing number of more powerful attacks.

    “The number and types of attacks, the duration of attacks and the complexity of these attacks are all on the rise,” Mr. York said.

    Security researchers have long warned that the increasing number of devices being hooked up to the internet, the so-called Internet of Things, would present an enormous security issue. And the assault on Friday, security researchers say, is only a glimpse of how those devices can be used for online attacks.

    Dyn, based in Manchester, N.H., said it had fended off the assault by 9:30 a.m. But by 11:52 a.m., Dyn said it was again under attack. After fending off the second wave of attacks, Dyn said at 5 p.m. that it was again facing a flood of traffic.

    A global event is affecting an upstream DNS provider. GitHub services may be intermittently available at this time.

    — GitHub Status (@githubstatus) Oct. 21, 2016

    A distributed denial-of-service attack, or DDoS, occurs when hackers flood the servers that run a target’s site with internet traffic until it stumbles or collapses under the load. Such attacks are common, but there is evidence that they are becoming more powerful, more sophisticated and increasingly aimed at core internet infrastructure providers.

    Going after companies like Dyn can cause far more damage than aiming at a single website.

    Dyn is one of many outfits that host the Domain Name System, or DNS, which functions as a switchboard for the internet. The DNS translates user-friendly web addresses into numerical addresses that allow computers to speak to one another. Without the DNS servers operated by internet service providers, the internet could not operate.

    In this case, the attack was aimed at the Dyn infrastructure that supports internet connections. While the attack did not affect the websites themselves, it blocked or slowed users trying to gain access to those sites.
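A minimal sketch can show why an attack on a DNS host blocks access to sites that are themselves perfectly healthy. Everything below is invented for illustration: the zone data, the `resolve` helper, and the `.example` names stand in for the many customer domains a single provider like Dyn answers lookups for.

```python
# Hypothetical zone data served by one DNS provider (documentation IPs).
PROVIDER_RECORDS = {
    "twitter.example": "192.0.2.10",
    "netflix.example": "192.0.2.11",
    "github.example":  "192.0.2.12",
}

def resolve(name, provider_up=True):
    """Turn a name into an address, the way a browser asks a resolver to."""
    if not provider_up:                      # DDoS has taken the provider down
        raise TimeoutError(f"lookup for {name} timed out")
    return PROVIDER_RECORDS[name]

ok = resolve("github.example")               # normal day: name -> address works

try:                                         # attack day: the site is fine, but
    resolve("github.example", provider_up=False)  # nobody can find its address
    outage = False
except TimeoutError:
    outage = True
```

Because every customer domain depends on the same answering servers, one successful flood produces simultaneous, apparently unrelated outages across the web.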

    Anyone else having a whole lot of trouble with sites loading properly this morning? Paypal is down, Twitter was down, Netflix half loading.

    — Emmy Caitlin (@emmycaitlin) Oct. 21, 2016

    Mr. York, the Dyn strategist, said in an interview during a lull in the attacks that the assaults on its servers were complex.

    “This was not your everyday DDoS attack,” Mr. York said. “The nature and source of the attack is still under investigation.”

    A notice from Dyn on its website about the outage.

    Later in the day, Dave Allen, the general counsel at Dyn, said tens of millions of internet addresses, or so-called I.P. addresses, were being used to send a fire hose of internet traffic at the company’s servers. He confirmed that a large portion of that traffic was coming from internet-connected devices that had been co-opted by a type of malware called Mirai.

    Dale Drew, chief security officer at Level 3, an internet service provider, found evidence that roughly 10 percent of all devices co-opted by Mirai were being used to attack Dyn’s servers. Just one week ago, Level 3 found that 493,000 devices had been infected with Mirai malware, nearly double the number infected last month.
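A back-of-the-envelope calculation shows why hijacked consumer gear is so effective. Each camera or router contributes only modest upstream bandwidth, but Level 3's count of roughly 493,000 infected devices adds up quickly. The 1 Mbps per-device figure below is an assumption for illustration only, not a measurement from the attack.

```python
# Aggregate flood size from many small contributors.
infected_devices = 493_000   # Level 3's count of Mirai-infected devices
per_device_mbps = 1.0        # ASSUMED modest upstream bandwidth per device

# Convert the combined megabits per second into gigabits per second.
total_gbps = infected_devices * per_device_mbps / 1000
```

Even under this conservative assumption the combined flood approaches half a terabit per second, far beyond what a typical site, or even a DNS provider, provisions for.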

    Mr. Allen added that Dyn was collaborating with law enforcement and other internet service providers to deal with the attacks.

    In a recent report, Verisign, a registrar for many internet sites that has a unique perspective into this type of attack activity, reported a 75 percent increase in such attacks from April through June of this year, compared with the same period last year.

    The attacks were not only more frequent, they were bigger and more sophisticated. The typical attack more than doubled in size. What is more, the attackers were simultaneously using different methods to attack the company’s servers, making them harder to stop.

    The most frequent targets were businesses that provide internet infrastructure services like Dyn.

    “DNS has often been neglected in terms of its security and availability,” Richard Meeus, vice president for technology at Nsfocus, a network security firm, wrote in an email. “It is treated as if it will always be there in the same way that water comes out of the tap.”

    Last month, Bruce Schneier, a security expert and blogger, wrote on the Lawfare blog that someone had been probing the defenses of companies that run crucial pieces of the internet.

    “These probes take the form of precisely calibrated attacks designed to determine exactly how well the companies can defend themselves, and what would be required to take them down,” Mr. Schneier wrote. “We don’t know who is doing this, but it feels like a large nation-state. China and Russia would be my first guesses.”

    It is too early to determine who was behind Friday’s attacks, but it is this type of attack that has election officials concerned. They are worried that an attack could keep citizens from submitting votes.

    Thirty-one states and the District of Columbia allow internet voting for overseas military and civilians. Alaska allows any Alaskan citizen to do so. Barbara Simons, the co-author of the book “Broken Ballots: Will Your Vote Count?” and a member of the board of advisers to the Election Assistance Commission, the federal body that oversees voting technology standards, said she had been losing sleep over just this prospect.

    “A DDoS attack could certainly impact these votes and make a big difference in swing states,” Dr. Simons said on Friday. “This is a strong argument for why we should not allow voters to send their voted ballots over the internet.”

    This month the director of national intelligence, James Clapper, and the Department of Homeland Security accused Russia of hacking the Democratic National Committee, apparently in an effort to affect the presidential election. There has been speculation about whether President Obama has ordered the National Security Agency to conduct a retaliatory attack and the potential backlash this might cause from Russia.

    Gillian M. Christensen, deputy press secretary for the Department of Homeland Security, said the agency was investigating “all potential causes” of the attack.

    Vice President Joseph R. Biden Jr. said on the NBC News program “Meet the Press” this month that the United States was prepared to respond to Russia’s election attacks in kind. “We’re sending a message,” Mr. Biden said. “We have the capacity to do it.”

    But technology providers in the United States could suffer blowback. As Dyn fell under recurring attacks on Friday, Mr. York, the chief strategist, said such assaults were the reason so many companies are pushing at least parts of their infrastructure to cloud computing networks, to decentralize their systems and make them harder to attack.

    “It’s a total wild, wild west out there,” Mr. York said.

    Erin McCann contributed reporting from New York.


    U - Yahoo says 500 million accounts stolen

    posted Sep 22, 2016, 5:12 PM by WECB640

    Yahoo (YHOO) confirmed on Thursday that data "associated with at least 500 million user accounts" was stolen in what may be one of the largest cybersecurity breaches ever.

    The company said it believes a "state-sponsored actor" was behind the data breach, meaning an individual acting on behalf of a government. The breach is said to have occurred in late 2014.

    "The account information may have included names, email addresses, telephone numbers, dates of birth, hashed passwords (the vast majority with bcrypt) and, in some cases, encrypted or unencrypted security questions and answers," Yahoo said in a statement.

    Yahoo urges users to change their password and security questions and to review their accounts for suspicious activity.

    The silver lining for users -- if there is one -- is that sensitive financial data like bank account numbers and credit card data are not believed to be included in the stolen information, according to Yahoo.


    Yahoo is working with law enforcement to learn more about the breach.

    "The FBI is aware of the intrusion and investigating the matter," an FBI spokesperson said. "We take these types of breaches very seriously and will determine how this occurred and who is responsible. We will continue to work with the private sector and share information so they can safeguard their systems against the actions of persistent cyber criminals."

    A large-scale data breach was first rumored in August when a hacker who goes by the name of "Peace" claimed to be selling data from 200 million Yahoo users online. The same hacker has previously claimed to sell stolen accounts from LinkedIn (LNKD) and MySpace.

    Yahoo originally said it was "aware of a claim" and was investigating the situation. Nearly two months later, it turns out the situation is even worse.

    "This is massive," said cybersecurity expert Per Thorsheim on the scale of the hack. "It will cause ripples online for years to come."

    The data breach comes at a sensitive time for Yahoo.

    Verizon (VZ) agreed to buy Yahoo's core properties for $4.83 billion in late July, just days before the hack was first reported. The deal is expected to close in the first quarter of 2017.

    Verizon says it only learned of the breach this week.

    "Within the last two days, we were notified of Yahoo's security incident," a spokesperson for Verizon said in a statement provided to CNNMoney.

    We understand Yahoo is conducting an active investigation of this matter, but we otherwise have limited information and understanding of the impact."

    The mega-breach could create a headache for both companies, including damaging press, scrutiny from regulators and a user exodus, just as they're working to close the deal and figure out the future of Yahoo.
