You'll See This Message When It Is Too Late
Josephine Wolff
Highlights & Annotations
public attention, we often miss what happens after the perpetrators have successfully gained access to their target systems or data: who sues whom, who ends up paying whom, who changes their security setup…
information system could also be likened to defending a large, thinly-populated territory like the nineteenth century Wild West: the men in black hats can strike anywhere, while the men in white hats have to defend everywhere.”1 This notion that defending all possible avenues of attack is more difficult than finding a single vulnerability in a computer system’s defenses resonates with the technical dimensions of defense. It makes intuitive sense given the diverse, ever-evolving set of possibilities for breaching technologies we rely on every day. However, in proposing their cyberattack “kill chain” model for dividing security breaches into a sequence of progressive stages, Eric Hutchins and his colleagues at Lockheed Martin, Michael Cloppert and Rohan Amin, contend just the opposite: “The adversary must progress successfully through each stage of the chain before it can achieve its desired objective; just one mitigation disrupts the chain and the adversary.” Because of this, they argue: “the defender can achieve an advantage over the aggressor.”2 How to reconcile these competing, contradictory viewpoints? How can it be true both that defenders are always at a disadvantage because they have to defend “everywhere” and that attackers are always at a disadvantage because they have to “progress successfully” through every stage of their planned attack to succeed? Which of these perspectives is applicable to any given incident—or stage of an incident—depends on a more nuanced assessment of the particular facts and structure of a given compromise.
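The logical difference between the two framings can be sketched in a few lines of code. This is an illustrative toy, not anything from the book: the stage names follow the Lockheed Martin kill chain, while the mitigation sets and attack vectors are hypothetical.

```python
# A toy model of the two competing framings of attacker/defender advantage.
# Stage names follow the Lockheed Martin kill chain; the mitigations and
# attack vectors below are hypothetical placeholders.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command-and-control", "actions-on-objectives",
]

def attack_succeeds(mitigated_stages: set) -> bool:
    """Kill-chain framing: the attack fails if ANY stage is disrupted."""
    return not any(stage in mitigated_stages for stage in KILL_CHAIN)

def perimeter_breached(defended_vectors: set, attempted_vectors: set) -> bool:
    """'Defend everywhere' framing: the defense fails if ANY attempted
    entry vector is left unguarded."""
    return any(vector not in defended_vectors for vector in attempted_vectors)

# One well-placed mitigation halts the entire chain...
assert attack_succeeds(mitigated_stages={"delivery"}) is False
# ...but one unguarded entry point is enough to get in.
assert perimeter_breached(defended_vectors={"phishing", "vpn"},
                          attempted_vectors={"phishing", "open-wifi"}) is True
```

The asymmetry cuts both ways: in the kill-chain view a single mitigation defeats the attacker, while in the "defend everywhere" view a single unguarded vector defeats the defender.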
In this book, by revisiting major cybersecurity incidents of the early 21st century, we can start to define the circumstances under which each of these two framings of attacker and defender advantage is most useful—which stages and components of cybersecurity incidents are most susceptible to a well-placed intervention by defenders, and which stages require those defenders to block off so many parallel options and attack pathways that the attackers unquestionably hold the upper hand. Or, put more simply: over the course of the lifecycle of a cybersecurity incident, at what points is it most easily and effectively disrupted or prevented and by whom? A central theme that emerges from this analysis is that the elements of these security incidents that are most susceptible to the kinds of interventions that cut off an entire stage of the attack and thereby halt the “kill chain” are often related to public policy and legal intervention. Meanwhile, the attack stages that, by contrast, most resemble the “Wild West” analogy, and therefore provide attackers with the greatest advantage, are those that relate to technical access to protected computer systems.
Identifying the full range of possible opportunities for defensive intervention requires mapping out the entire sequence of events for a given cybersecurity incident. A 2011 study conducted by Kirill Levchenko and several collaborators looked at the best way to reduce commercial email spam by tracing what the authors called the spam messages’ “click trajectory.” This trajectory represented the full lifecycle of the spammers’ operations beginning with sending spam emails, all the way through the messages being opened, the recipients clicking on the links in them, and the products advertised in them being purchased and shipped to customers.3 The researchers collected spam messages, visited the websites advertised in those messages, bought
the pharmaceuticals, replica luxury goods, and counterfeit software sold through those sites, and, at each stage of the “spam value chain,” analyzed the feasibility of trying to intervene to cut off the spammers’ business. In particular, they looked for what they called “bottlenecks,” or “opportunities for disrupting monetization at a stage where the fewest alternatives are available to spammers (and ideally for which switching cost is high as well).”4 The researchers ultimately determined that rather than trying to take down the bots being used to send spam emails in bulk, or the hosting infrastructure underlying the websites those messages linked to, the most effective defensive intervention would be to crack down on the handful of banks that provided merchant services for the large majority of these transactions. That would entail encouraging the major global payment networks, such as Visa and MasterCard, not to allow certain types of transactions with the small number of banks known to be supporting spammers’ enterprises. This stage of the spammers’ process—the stage at which they processed customer credit card transactions—satisfied the characteristics of a bottleneck because so few banks were doing business with spammers (95 percent of the transactions the researchers tracked were settled by the same three banks) and also because the switching cost associated with it was high for spammers since setting up an account with a new bank would take days, or even weeks.
The analysis of past security incidents in this book is related in spirit to this analysis of the spam value chain: it is an opportunity to revisit these incidents with an eye to mapping out their entire trajectories and locating those “rare asymmetries” in favor of the defenders—the opportunities for intervention that constrain attackers by offering them the fewest, and most costly, alternative paths to achieve their ultimate goals. The subsequent case studies are analyzed with an eye to trying to identify similar bottlenecks in the lifecycles of those incidents, or opportunities for especially effective defensive interventions that would require perpetrators to make expensive, slow adjustments to their original plans.
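The bottleneck criterion Levchenko and his collaborators applied, the stage offering attackers the fewest alternatives and, ideally, the highest switching cost, can be sketched as a simple scoring rule. All of the numbers below are invented placeholders (the study measured these quantities empirically), though the lone stage with few alternatives echoes its finding that three banks settled 95 percent of the tracked transactions.

```python
# Hypothetical stage data for illustration; the 2011 study measured the
# real alternative counts and switching costs across the spam value chain.
SPAM_VALUE_CHAIN = {
    "spam-sending botnets": {"alternatives": 1000, "switching_cost_days": 0.1},
    "domain registration":  {"alternatives": 100,  "switching_cost_days": 1},
    "web hosting":          {"alternatives": 50,   "switching_cost_days": 2},
    "merchant banking":     {"alternatives": 3,    "switching_cost_days": 14},
}

def bottleneck(chain: dict) -> str:
    """Pick the stage with the fewest alternatives available to the
    attacker, breaking ties in favor of the highest switching cost."""
    return min(chain, key=lambda stage: (chain[stage]["alternatives"],
                                         -chain[stage]["switching_cost_days"]))

assert bottleneck(SPAM_VALUE_CHAIN) == "merchant banking"
```

The case studies that follow apply the same scoring logic informally: look for the stage where the perpetrators have the fewest and slowest substitutes.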
This work, especially where it looks at financially motivated security incidents, owes much to the growing literature on the economics of information security. In a 2012 paper, Ross Anderson and several collaborators provide a thorough breakdown of the different types of costs incurred by cybercrimes and compare the current spending on preemptive defense measures to indirect losses and spending on enforcement.7 Based on their estimates,
they conclude: “we should perhaps spend less in anticipation of computer crime (on antivirus, firewalls etc.)[,] but we should certainly spend an awful lot more on catching and punishing the perpetrators.”8 This important insight that the technical defenses against cybercrimes are overemphasized and receive disproportionate resources as compared to the non-technical mechanisms of law enforcement and policymaking motivates much of this analysis.
The case studies analyzed here—which range from large-scale data breaches resulting in financial fraud to state-sponsored espionage to data dumps intended to publicly shame specific companies and victims—examine not just the technical controls that might (or might not) have effectively prevented the incidents from occurring. They also look closely at the range of non-technical defenses that might have been used to interfere with the perpetrators’ progress.
also co-authored with Rahul Telang, found that the passage of state data breach disclosure laws actually reduces the rate of identity theft by about 6.1 percent.
“The effectiveness of data breach disclosure laws relies on actions taken by both firms and consumers,” the authors point out, noting that in the aftermath of breaches many people affected ignore the letters notifying them of the breach or never bother to sign up for the credit monitoring services offered to them. “Firms can improve their controls; however, once notified, consumers themselves are expected to take responsibility to reduce their own risk of…
This highlights both the advantages and the drawbacks of each individual type of security policy—from policies that impose baseline protection measures to those that impose liability regimes to those that simply require victims to be notified of breaches of their personal data. Each of these forms of policy, individually, is inadequate and inefficient in some regards. This finding motivates the discussion of these case studies spanning both the preventive measures taken by targeted firms and the ex-post…
motivations behind nation-states that conduct cyberespionage for political and economic purposes builds on the impressive body of work by Ronald Deibert…
As a case in point, in July 2017, Russia passed a law banning Virtual Private Networks, often used to protect online users’ anonymity, and in June 2016, it passed the so-called Yarovaya law requiring telephone and Internet service providers to store user data for six months and assist intelligence agencies with decrypting that data. In November 2016, China passed a cybersecurity law requiring that information about Chinese citizens be stored on servers in China and not transferred abroad without permission—a measure that might hinder foreign governments from accessing the information but would almost certainly ensure that the Chinese government could access it as needed. The United States also has surveillance laws, but the focus of this book is on policies that govern defense against cybercrimes, not policies that regulate government access to data. While the U.S. government has not always pursued cybercrime policies focused on industry defense initiatives especially aggressively, it has done so to a greater extent than many other countries, making the U.S. a useful starting point for analysis.
There is widespread consensus among information security experts that information systems should be secured through a combination of multiple different controls, a principle sometimes referred to as “defense-in-depth.”18 But the guidance governing how best to conjoin and layer such controls is often ambiguous and has focused almost exclusively on the combinations of technical defense measures that can be implemented by individual…
Little attention has been paid to the range of technical and non-technical defenses that can be set up across different types of organizations with varying capabilities and motivations. Coordinating defenses across different stakeholders introduces significant new logistic challenges, but it also affords new insights into what it could mean to implement a comprehensive approach to computer security that incorporated every component of the network and addressed every stage of an attack.
Reconstructing the chain of events for individual cybersecurity incidents is easier for older incidents than for more recent ones. When breaches are first discovered and announced, often little is known about how they were carried out, or even what motivated them. Moreover, the involved parties may be wary of revealing any more details than absolutely necessary for fear of fueling negative publicity and lawsuits. It takes time for investigators to track down the root causes of major cybersecurity incidents—and it often takes even more time for the people involved in those investigations to be willing to talk publicly about their results, or for those findings to appear in court filings and public records. This is one advantage of revisiting older breaches: It is possible to reconstruct much more…
Most importantly, revisiting past cybersecurity incidents allows us to trace how all of the different parties involved in the incident—from the targeted organizations to software manufacturers, Internet service providers, banks, web hosts, government agencies, and individuals—divvy up the cleanup costs and the blame afterwards. Tracking who gets held accountable in the aftermath of these incidents allows us to reconceptualize who is capable of being a defender and how. All of the groups and people who play some role in enabling successful cybersecurity incidents—whether that role is opening a phishing email or writing vulnerable code or conveying malicious traffic between computers or settling financial transactions or setting regulatory security guidelines—are potential defenders, in that they are capable of exercising certain types of controls and restrictions over attackers that are not available to anyone else. Often, the fallout of these breaches profoundly shapes the interactions among…
When we talk about (and report on and litigate) successful security incidents, too often our inclination is to latch onto the first or the most easily understood point of access—the phishing email, the dictionary attack, the unprotected wireless network—and harp on the single line of defense that seems like it would have made all the difference—two-factor authentication, or rate limiting logins, or better encryption of network traffic. But that perspective oversimplifies the much more complicated narrative of the gradual, escalating capabilities acquired by perpetrators, as well as the much more limited and challenging environment in which individual defenders operate. The purpose of this book is to broaden and complicate that picture of who is—and should be—responsible for defending against cybersecurity incidents and to explore why answering that question has proved so difficult and requires more assertive policy-making.
What the relatively brief but rich history of cybersecurity incidents in the early 21st century teaches us is that, ultimately, these incidents are not failures of our technology but instead failures to craft clear and effective liability regimes, failures to assign responsibility and blame for breaches in ways that appreciate and take advantage of the technical complexity and interconnectedness of the Internet, and failures to accept that cybersecurity is as much a problem of hastily written, out-of-date laws and policies as it is of hastily written, out-of-date code.
Toey, and Scott found while war driving. WEP, or Wired Equivalent Privacy, was known to be vulnerable to attack and less secure than the newer WPA, or Wi-Fi Protected Access, encryption protocol, but TJX had not yet updated the wireless security at that particular Marshalls store at the time of the breach.b Undoubtedly, that was a mistake—a serious, important, costly mistake—but it was not the only mistake that enabled the breach and subsequent fraud, nor was it necessarily the critical mistake that, had it not been made, would have averted the breach entirely.
To blame an extended, international, multistage financial fraud operation on a single, poorly protected wireless network is to fundamentally misunderstand how many different steps are involved in carrying out what Gonzalez and his co-conspirators achieved and to vastly oversimplify the task of defending against such breaches. But it was no accident or mere misunderstanding that led to the formulation and dissemination of the enduring lesson of the TJX breach: that businesses should upgrade from WEP to WPA. Rather, it was the result of a concerted payment card industry effort to place blame for the incident squarely on the shoulders of a specific actor (TJX) due to a specific action (failing to update wireless encryption).
entity. A closer look at the full timeline of the TJX breach reveals that the episode in fact involved several different technical vulnerabilities and companies, but by singling out one encryption protocol and one organization as fully responsible for the incident the payment card industry was able to effectively shield itself from bearing any of the blame. It was a lesson that would continue to shape the industry for years, long after Gonzalez and several of his co-conspirators had been arrested and imprisoned. In the decade following the TJX breach, the different sectors involved in the payment card industry—from retailers to banks to payment networks—expended tremendous effort and resources in trying to shift liability for security breaches and fraud onto each other, even as they often resisted implementing the newest, most secure versions of payment technologies.
Resolving the question of who, or what, is to blame for Gonzalez’s success in perpetrating the TJX breach is not nearly so straightforward as simply pointing to the wireless network, or even pointing to TJX. These may have seemed the obvious culprits in the immediate aftermath of the breach’s discovery in 2007, but in the months and years that followed, as lawsuits were filed between the involved parties and more details of the incident came to light, it became increasingly clear that there had been several opportunities for interrupting Gonzalez’s scheme, both by technical means and through policy measures. Some of these were opportunities that TJX could or should have taken advantage of, but others were not. Indeed, some of the interventions that Gonzalez seemed to view as most threatening to his plans were well beyond the scope of TJX’s power, suggesting that the retailer may not even have been in the best strategic position to mitigate the large-scale fraud operation.
These include protecting store networks by eliminating wireless access, or restricting that access to known devices or devices with a shared key, or using stronger WPA encryption, as well as encrypting real-time transaction data, isolating card processing servers, and monitoring exfiltration of data from corporate and store servers.
In the aftermath of the breach, much was made of these missing defenses, and TJX faced enormous criticism for its inadequate security. Lawsuits and media reports alike charged that TJX could have prevented the breach had it only implemented better technical defenses.
The Federal Trade Commission (FTC) also filed a complaint alleging that TJX had failed to appropriately protect the stolen data because the company: (a) created an unnecessary risk to personal information by storing it on, and transmitting it between and within, in-store and corporate networks in clear text; (b) did not use readily available security measures to limit wireless access to its networks, thereby allowing an intruder to connect wirelessly to in-store networks without authorization; (c) did not require network administrators and other users to use strong passwords or to use different passwords to access different programs, computers, and networks; (d) failed to use readily available security measures to limit access among computers and the Internet, such as by using a firewall to isolate card authorization computers; and (e) failed to employ sufficient…
The FTC complaint does not claim that any one of these decisions, individually, would have constituted inadequate security; instead, it emphasizes that these five practices “taken together” were responsible for the allegations against TJX. The TJX breach is undoubtedly a case of failed security—the protective measures the company had in place were unable to prevent the thieves from stealing and selling millions of dollars’ worth of payment card information—but it is not a straightforward story of a company that should have been using WPA encryption or requiring stronger passwords or storing less data. In fact, it’s not clear that any of these measures would…
It’s not hard to go down the FTC’s list and imagine how Gonzalez and his friends might have bypassed the “reasonable and appropriate” defenses TJX is chastised for not implementing: they could have circumvented WPA encryption by guessing or stealing the password of a store employee; the user passwords they stole from the store’s network to access corporate servers would not have been any more difficult to decrypt and use if they were stronger; and much of the card data the team sold was accessed and stolen during current transactions, rather than decrypted from the company’s stored, older records. None of that means TJX couldn’t—or shouldn’t—have done more to defend its customers’ data, but it does suggest that the technical controls the FTC and others faulted TJX for failing to implement might well have left the perpetrators room to maneuver and substitute different attack vectors. Clearly, for instance, the encryption of payment card data was not an insurmountable obstacle for the thieves since they were already planning to decrypt and sell the stolen data, even before they realized that the card numbers were briefly available unencrypted. They didn’t necessarily need to be able to join an open wireless network or access clear text card numbers in order to achieve their goal; exporting large volumes of data was a more essential capability for the attackers, though it was largely overlooked in the ensuing legal and media reports that focused primarily on TJX’s failure to implement WPA encryption or encrypt data stored prior to 2004. For a man saving up for a yacht, however, the most essential element of the breach was the capability to turn those stolen card numbers into cash—a process TJX had no insight into, or control over,…
particularly worried about TJX interrupting his operation, but there were other parties whom he viewed with greater trepidation. In an online chat with Yastremskiy on March 2, 2006, Gonzalez wrote:

[Gonzalez] I hacked [major retailer] and i’m decrypting pins from their stores
[Gonzalez] visa knows [major retailer] is hacked
[Gonzalez] but they dont know exactly which stores are affected
[Gonzalez] so i decrypt one store and i give to you
[Gonzalez] visa then quickly finds this store and starts killing dumps
[Gonzalez] then i decrypt another one and do the same
[Gonzalez] but i start cashing with my guys
[Gonzalez] visa then finds THAT store and kills all dumps processed by that [major retailer] store
[Gonzalez] understand?
[Gonzalez] its a cycle
[Yastremskiy] yes
[Gonzalez…
Gonzalez knew that his real challenge was not evading TJX defenses, but rather evading the payment networks, like Visa, that had insight into payment card fraud patterns and could tie those cases back to individual retailers. Setting aside the question of whether TJX could have had stronger technical defenses in place, the company had no means of detecting or monitoring the financial fraud being inflicted on its customers and no visibility into the consequences of its decisions. TJX learned about the breach from a credit card…
It was a credit card company’s detection of patterns in fraudulent charges that led to the breach’s discovery. Regulations surrounding large international financial transactions forced the perpetrators to repatriate their profits in numerous, smaller increments through use of multiple cashers and couriers. Bank restrictions on maximum ATM withdrawals forced those cashers and couriers to carry suspiciously large numbers of cards (recall that Williams was arrested carrying eighty cards). Credit card expiration policies forced the thieves to discard the older, unencrypted data stored on TJX’s servers and instead spend time decrypting more recent data and finding ways to compromise real-time transactions. Banks, payment networks, and law enforcement officers all exercised considerable control over the extent to which credit card fraud could be carried out using the stolen data, as well as the ease with which Gonzalez and his co-conspirators could reap the profits of those sales. While the…
No one besides TJX could have stopped the thieves from accessing the payment card data, but lots of other defenders could—and did—play a role in limiting how much harm could be inflicted using that data.
TJX certainly played a large role in enabling the success of Gonzalez and his team, but the security controls it could have implemented—shared key authentication, WPA encryption, and data minimization, for instance—were not the lines of defense that Gonzalez was most concerned about. He could find other stores, other ways into the network, other decryption methods, and real-time data, but he couldn’t do anything about the banks that, in his words, “just said fuck waiting for the fraud to occur, lets just reissue EVERY one of our cardholders.” The effectiveness of the defensive measures available to these banks and credit card issuing companies depends upon both their broad visibility into incidents of financial fraud—that is, their ability to see patterns across large volumes of transactions—and the specificity of the threat they are responsible for defending against. Payment card fraud may begin with a poorly encrypted wireless network, a compromised point-of-sale terminal, or even a well-worded phishing email—and it is up to firms like TJX to protect against all possible access modes—but, ultimately, these schemes all take on a similar pattern in their later stages as the perpetrators sell their stolen information, relying on the card issuers and processing banks to ensure its value and their profits. In this regard, those card issuers have a significant advantage over TJX when it comes to identifying and stopping financial fraud: they know exactly what the criminals will do because there are a very limited number of ways to profit from stolen payment card information, even though there are many ways to steal it.
Furthermore, payment card fraud requires a particularly involved and specific sequence of events that need to be successfully undertaken by criminals in order to profit from their activities—a clear example of the kill chain model of cybersecurity incidents. Both of these features give defenders an advantage…
The TJX breach is primarily remembered as a devastating failure of computer security and justifiably so—the defenses in place did not prevent the loss of hundreds of millions of dollars. But it was also, in its way, an incredible success story about the identification, arrest, and imprisonment of an international ring of cyber criminals who were caught thanks to a series of constraints imposed on them by both technology and policy working in concert. The Marshalls wireless network forced the thieves to sit in a parking lot for long periods with laptops and a radio antenna, attracting the attention of the police; selling the stolen data required the involvement of Yastremskiy, who eventually led investigators to Gonzalez’s screen name; and the restrictions on international financial transactions meant Gonzalez had to employ cashers and couriers, one of whom would later reveal his identity to the Secret Service. That process worked, but it took years, and, despite the fact…
Microchip-enabled EMV cards (named for the payment processors who developed the technology standard: Europay, MasterCard, and Visa) directly address the root cause of breaches like the one that targeted TJX: millions of stored, reusable payment card numbers. The microchips, when inserted into payment terminals, generate a one-time code that is used to process just a single, specific transaction instead of the general card number. So if, later on, a database of those transaction records is breached, the information stored in it is useless to counterfeiters because each transaction code can be used only once (unlike card numbers, which are used again and again). Since these cards are designed to drive down large-scale fraud, and the costs of that fraud are shared among retailers, payment processors, and banks through a complicated process of interchange fees, implementing microchips would seem like an obvious point of agreement for all of the different parties involved in data breaches who were losing money on legal fees and fraud costs in their aftermath. And yet, rather than serving as a way for all these companies to join together to combat criminal activity, the implementation of EMV technology was rife with hostilities and finger-pointing, mirroring the aftermath of the TJX breach. Different firms involved in the transition to EMV cards turned out to be much more interested in how they could push costs onto each other than in how they could best fight fraud.
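The mechanism described above can be sketched in miniature. This is a hypothetical simplification, not the actual EMV protocol (real cards derive session keys and cryptograms according to the EMV specifications); it shows only the core idea that a code bound to a never-reused counter is worthless to a thief who copies it.

```python
import hmac
import hashlib

# Hypothetical stand-in for EMV's per-transaction codes: a shared per-card
# key and an application transaction counter replace the real key-derivation
# scheme defined by the EMV specifications.
CARD_KEY = b"per-card secret shared by chip and issuer"

def chip_cryptogram(counter: int, amount_cents: int) -> str:
    """What the chip computes for one specific transaction."""
    message = f"{counter}:{amount_cents}".encode()
    return hmac.new(CARD_KEY, message, hashlib.sha256).hexdigest()

class Issuer:
    def __init__(self):
        self.seen_counters = set()

    def authorize(self, counter: int, amount_cents: int, cryptogram: str) -> bool:
        if counter in self.seen_counters:
            return False  # a replayed code is useless to a counterfeiter
        if not hmac.compare_digest(chip_cryptogram(counter, amount_cents), cryptogram):
            return False  # a forged or mismatched code fails verification
        self.seen_counters.add(counter)
        return True

issuer = Issuer()
code = chip_cryptogram(counter=1, amount_cents=4999)
assert issuer.authorize(1, 4999, code) is True   # the live transaction clears
assert issuer.authorize(1, 4999, code) is False  # a stolen copy of it does not
```

A breached database of past transaction records therefore yields nothing reusable, unlike a database of magnetic-stripe card numbers.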
card issuer hadn’t bothered to provide its customer with a chip-enabled card, and the merchant therefore was forced to read the magnetic stripe instead, then it would be up to the issuer to cover the fraud costs. From the outset, this liability shift was not so much about reducing fraud as it was about making sure someone else had to pay for…
On August 13, 2012, an employee at the South Carolina Department of Revenue (SCDOR) received an email with a link embedded in the message. She clicked on the link and, in doing so, unknowingly downloaded malware onto her work computer in the state government. Two weeks later, someone used her username and password—presumably collected by means of that malware program—to log into her SCDOR work account remotely.
In the aftermath of the SCDOR breach’s discovery in October 2012, blame would be cast in several different directions by investigators, reporters, government officials, and victims arguing over what, specifically, had gone wrong and who was responsible for failing to stop the breach. In fighting about what specific technical measures should have been in place at the SCDOR and whose job it was to ensure that they were implemented, the South Carolina government and its critics quickly lost sight of the larger context and chain of events leading up to the breach, focusing almost exclusively on one or two technical fixes they believed would have made all the difference. They ignored myriad other opportunities for defensive intervention, as well as the range of potential stakeholders who could have played a role in preventing this particular breach.
When it comes to data breach postmortems, breached entities are often more eager to deflect blame than investigate root causes, and the state of South Carolina was no exception.
And yet, even after it became clear that this incident was far from unavoidable, the lawmakers investigating the breach quickly fell into another trap, one arguably even more insidious than denying all culpability—namely, pointing to one particular technical measure and insisting that it would have prevented everything that happened and would, therefore, be sufficient to rely on for security moving forward. To understand why both of these assertions—that there was nothing anyone in the South Carolina state government could have done to defend against the breach and that it could have been easily prevented by the implementation of one silver bullet security technology—are so profoundly misleading, it is helpful to walk through what actually happened in South Carolina at the end of the summer of 2012 and consider the many missed defensive opportunities.
Ref. 8968-Q
But even had all of these protections been in place, it would still have been possible for the intruder to acquire an employee’s login credentials by other means, including guessing common passwords, purchasing stolen credentials from other websites in hopes that employees had reused passwords at work, or even intercepting network traffic, as Gonzalez and his co-conspirators did outside the Miami Marshalls stores. In other words, the success of the phishing email was not central to the SCDOR intruder’s ultimate aim—it was just one of many possible tools open to him for stealing credentials. Blocking that step might have stopped him in his tracks—but it might also have simply forced him to shift course slightly.
Ref. 0CAD-R
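The defensive checks implied here can be sketched in a few lines of Python. This is a minimal illustration, not a prescription: the common-password list and the "other site" breach dump below are invented stand-ins, and real screening services use more careful protocols, but the idea is that a server can refuse credentials that are trivially guessable or already known to be exposed elsewhere.

```python
import hashlib

# Illustrative data: a short list of commonly guessed passwords, and a
# dump of password hashes stolen from some other, unrelated website.
COMMON_PASSWORDS = {"password", "123456", "letmein", "qwerty"}
OTHER_SITE_DUMP = {hashlib.sha256(p.encode()).hexdigest()
                   for p in ("correcthorse", "S3ptember2012!")}

def password_is_risky(candidate: str) -> bool:
    """Flag passwords that are trivially guessable or known to be reused."""
    if candidate.lower() in COMMON_PASSWORDS:
        return True  # easy target for online guessing
    digest = hashlib.sha256(candidate.encode()).hexdigest()
    if digest in OTHER_SITE_DUMP:
        return True  # same password already exposed in another breach
    return False

print(password_is_risky("letmein"))         # True: common password
print(password_is_risky("S3ptember2012!"))  # True: reused from a breached site
print(password_is_risky("x9$kLq!v2mR"))     # False: no known exposure
```

Even this screening, of course, only narrows the attacker's options for acquiring credentials; it does nothing about interception or malware-based theft.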
This initial intrusion after the successful phishing of employee credentials suggests yet another set of possible defensive measures. For instance, the SCDOR could have restricted the use of remote access capabilities so that its servers could not be easily accessed from outside the offices. Alternatively, it could have instituted more stringent authentication requirements, including two-factor authentication, which requires users to provide a second factor, such as a one-time code, in addition to their passwords.a Such measures might have proved more effective impediments to the intruders than trying to stop the phishing email from being delivered or opened, but they, too, would have provided no guarantee that the attackers would not find another means of access.
Ref. 7F8B-S
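The mechanics of that second factor can be sketched in a few lines of Python. This is a generic, simplified illustration (the username, password, and secret are invented, and a real deployment would store only salted password hashes): the server accepts a login only when both the password and a time-windowed one-time code check out, so a phished password alone no longer suffices.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time code: HMAC of the current 30-second counter."""
    counter = int(at) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical user record; the secret would be provisioned to the
# employee's phone or hardware token.
USERS = {"employee1": {"password": "hunter2", "secret": b"per-user-secret"}}

def login(username: str, password: str, code: str, now: float) -> bool:
    user = USERS.get(username)
    if user is None or password != user["password"]:
        return False
    # A stolen password alone is no longer enough: the second factor
    # must match the code generated for the current time window.
    return hmac.compare_digest(code, totp(user["secret"], now))

now = time.time()
code = totp(USERS["employee1"]["secret"], now)
print(login("employee1", "hunter2", code, now))  # True: both factors match
```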
especially given the nature of the theft. Tax records, whether or not they are encrypted when stored on state servers, have to be accessible in their decrypted, clear text to some state employees and auditors. So at least some number of employees must be granted access to the data in its unencrypted form, and the typical means for that access is for them to enter their credentials to decrypt the data. In other words, an intruder who took the phishing approach used in the SCDOR case and accessed stored data by means of stolen credentials might well have been able to use those same stolen credentials to decrypt the data had it in fact been encrypted.
Ref. 0E81-T
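This point can be made concrete with a short sketch. Assuming, purely for illustration, a scheme in which the data-encryption key is derived from an employee's login password (the password, salt, and PBKDF2 parameters below are invented), an intruder holding a phished password derives exactly the same key as the legitimate user:

```python
import hashlib

def key_from_credentials(password: str, salt: bytes) -> bytes:
    """Derive a data-encryption key from a login password (PBKDF2-HMAC)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = b"stored-alongside-the-records"

# The legitimate employee derives the key to read the tax records...
employee_key = key_from_credentials("S3ptember2012!", salt)

# ...and an intruder holding the same phished password derives
# exactly the same key. The encryption is only as strong as the
# credential that unlocks it.
intruder_key = key_from_credentials("S3ptember2012!", salt)

print(employee_key == intruder_key)  # True
```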
This inclination to pinpoint a particular missing line of defense as the one crucial element that would have prevented a breach is not unique to the aftermath of the SCDOR breach. Media coverage of a 2014 breach of JPMorgan Chase followed a similar narrative, with investigators and reporters focusing on a network server at the bank that had not been upgraded to require two-factor authentication.21 And failing to implement multi-factor authentication is not the only thing organizations are taken to task for in the wake of breaches that make use of insecure credentials. When several celebrities had naked photos stolen from their Apple iCloud accounts in 2014, critics blamed Apple’s failure to rate-limit unsuccessful login attempts in order to prevent adversaries from guessing passwords by brute force.22 These criticisms are not necessarily wrong—it may well have been the case that Apple and JPMorgan and the SCDOR could have interrupted their breaches by implementing some of these relatively straightforward safeguards. But, importantly, none of these individual defenses would necessarily have prevented the breaches that occurred—and blaming the incidents on their absence betrays a deep misunderstanding of the many different pathways into computer systems that are open to intruders. The tenor of this type
Ref. F18E-U
which people lay blame for a breach on a particular missing technical defense, often implies that these stronger protections would have automatically prevented the attacks, rather than simply rerouting the attackers through different pathways. This single point of failure fallacy assumes that, of the myriad different ways in which an attack like the one perpetrated against South Carolina might have been defended against—phishing protections, remote access restrictions, limits on data exfiltration—it was the absence of multi-factor authentication (or encryption) that most clearly indicated negligence and inadequate security. This tendency to single out institutions’ earliest failures in the sequence of attacks, whether those include adequately protecting authentication credentials (as in the case of the SCDOR) or encrypting wireless networks (as in the case of TJX), is a recurring theme of these postmortems. The strongest focus often falls on early-stage preventative measures, such as multi-factor authentication, rather than later-stage harm mitigation and monitoring efforts, such as restricting outbound traffic. What is most dangerous about this line of thinking is its implication that organizations that do encrypt their sensitive information, or do use multi-factor authentication, are safe from intrusions—or, at the very least, have done their due diligence when it comes to security. The question of what constitutes cybersecurity due diligence (and who gets to define it) is a difficult one. For the purposes of assigning liability, it’s important to be able to distinguish between targets who were breached because they were negligent in protecting their computer systems, and those who were breached because they were up against incredibly sophisticated, well-resourced adversaries who would ultimately have compromised their systems regardless of how many safeguards were in place. 
The former group would, ideally, bear more responsibility for what happened and face greater penalties than the latter, but it’s far from straightforward to figure out where to draw that line. Should a state department of revenue that complied with IRS requirements for data storage be considered negligent? If so, what exactly would it have had to do to avoid that designation?
Ref. 35A8-V
But the implicit assumption in taking the SCDOR to task specifically for failing to encrypt its data or use stronger authentication methods—or even in taking the IRS to task for failing to mandate encryption—is that there is some commonly accepted set of “best practices” for security that will prevent most breaches and that everyone should know to implement. Company privacy and security policies frequently offer deliberately vague descriptions of security practices that similarly subscribe to this idea, reassuring customers that their data will be protected through the use of “reasonable safeguards” or “appropriate measures,” as if these were anything more than entirely subjective, malleable designations. While there are some accepted best practices in this space, there is also considerable disagreement over which of those tools are actually effective. The recommended list of essential security controls varies significantly over time and depends on who is asked to compile it. That lack of consensus, coupled with relatively little empirical data on the impact and effectiveness of different security controls, has made it all too easy to point to any individual missing security control—encryption, for instance, or multi-factor authentication—as the crucial thing that would have prevented a breach, even in the absence of any clear evidence that it would necessarily have done so. At the same time, these subjective, ever-shifting expectations for what organizations ought to be doing to protect their data have contributed to the uncertainty and ambiguity of the lists of security standards put out by government agencies like the IRS, as well as industry consortiums.
Ref. BB13-W
When it comes to hardware and software manufacturers, the U.S. Computer Fraud and Abuse Act grants immunity to those companies so that they cannot be sued when vulnerabilities in their products are discovered and exploited. The reasoning behind this policy is, essentially, that it would be impossible for those companies to find and fix all of the vulnerabilities in their products—since there’s no such thing as perfect security. In granting them immunity, Congress reasoned that if the threat of being sued for every vulnerability or security flaw hung over them, then no one would ever manufacture any software or hardware. But for the much larger number of…
Ref. 28DD-X
These lists range from standards developed by private companies and consortiums, like the Payment Card Industry Data Security Standard that applied to TJX since it accepted credit card payments, to standards developed by government entities like the IRS and the National Institute of Standards and Technology, to those developed by international…
Ref. 9611-Y
So, when Haley faulted the “archaic” IRS security standards for the SCDOR breach, she was not just trying to deflect blame for the incident onto a different organization, she was also invoking the broader questions of how much security is enough, and who gets to decide. Without answers to those questions it is almost…
Ref. B87B-Z
Merely accusing all breached organizations of having failed in their duty to protect data is both unfair and unproductive. It loses sight of the fact that some skilled adversaries may be able to outwit even quite sophisticated security setups and also encourages companies to hide their breaches, or never start monitoring for them in the first place, for fear of being required to report them and suffer the consequences. And yet, providing meaningful,…
Ref. 3073-A
In the United States, the task of determining when a company has failed in its duty to secure customer data falls largely to the Federal Trade Commission (FTC), the government agency charged with holding businesses accountable for engaging in “unfair or deceptive” practices (and the same agency that…
Ref. 6CAC-B
On top of that, there is still relatively little empirical evidence demonstrating the effectiveness of these different types of security controls when it comes to preventing breaches—many security firms publish data on how many connection attempts or probes their products block or detect, but those numbers are not always reliable indicators of the likelihood of actual security breaches occurring.
Ref. D9A3-C
Perhaps most challenging of all for people developing security standards is the fact that different organizations face different threats, store different kinds of data, and may have very different priorities and approaches to securing their computer systems, so it doesn’t necessarily make sense to impose a one-size-fits-all set of requirements. There are even security benefits to a diversity of approaches—if everyone used exactly the same security tools, then an adversary who found vulnerabilities in one organization’s defenses would likely be able to exploit those same weaknesses in many other companies’ networks as well. Blaming ambiguous, insufficient, and
Ref. 079B-D
Even as they aim to provide organizations with some flexible guidance about how to protect data and networks, security standards and the groups issuing them often end up providing cover for breach victims—as well as other standards-setting organizations. These standards can quickly become a vehicle for breached parties, like the SCDOR, to blame someone else for a missing security control and further focus attention on the role of specific, early-stage preventive security controls rather than later-stage mitigation efforts targeted at preventing criminals from successfully monetizing the data they have stolen.
Ref. 2932-E
no time pressure and had several different possible paths to financial gain, ranging from filing fraudulent tax returns to opening new accounts and loans in the victims’ names to initiating direct transfers from the victims’ bank accounts using their routing information. Tax records are not the only type of data that offer criminals this range of profitable activities—for instance, health records, which often contain similar information (Social Security numbers, addresses, billing information), can provide comparable opportunities for financial fraud, with the added possibility of health insurance fraud.
Ref. ADE2-F
cancellations and withdrawal limits. Like the TJX breach, the SCDOR breach involved several stages—from the initial phishing email through the exfiltration of the stolen data to external servers—and there were potential defensive interventions that might have been effective at each stage of the intrusion. Blame for the breach was quickly confined to one or two of these missing interventions—namely, encryption and two-factor authentication—by media reports and state lawmakers, while Haley faulted the IRS data protection standards for the absence of these defenses, and the IRS, in turn, invoked the NIST security standards, which did include encryption. This circular deflection of responsibility to different government and industry guidelines highlights the challenges of trying to assemble a comprehensive set of security standards. Those challenges create a fertile environment for the targets of security breaches to shift blame onto the organizations that they feel have failed to tell them how to protect themselves. The SCDOR breach demonstrated how easily and quickly a breach could come to be blamed on a few specific technical controls and, simultaneously, how incredibly difficult it is to assign financial liability to any of the involved parties in the absence of robust, specific standards or concrete, immediate harm.
Ref. 6EE8-G
GOZ could even access bank accounts protected with two-factor authentication that required users to enter not just a password but also another one-time code delivered by text message or physical token. (This was the same technology that South Carolina was faulted for not purchasing prior to its breach.) Using a man-in-the-middle attacka to intercept messages between the victim and their banking site, GOZ would present its victims with a fake login window that looked identical to their bank’s real login webpages. People trying to log in with two-factor authentication would receive their second authentication code over their phones or by token and then enter it directly into the fake login field. The GOZ malware would capture that authentication code and immediately send it back to the servers controlled by the GOZ operators using the instant messaging service Jabber, and the GOZ operators could then combine it with stolen passwords to access the targeted accounts protected by two-factor authentication.8
Ref. E313-H
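The relay trick can be illustrated abstractly. The sketch below uses a generic time-windowed code in the style of TOTP, not the specific SMS and token schemes GOZ defeated, and all names and secrets are invented. It shows why real-time interception works: the bank’s server verifies only that the submitted code matches the current time window, not who submitted it, so a code relayed within that window is accepted.

```python
import hashlib
import hmac
import struct

def one_time_code(secret: bytes, at: float, step: int = 30) -> str:
    """Generic time-windowed code, in the style of TOTP."""
    mac = hmac.new(secret, struct.pack(">Q", int(at) // step),
                   hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    num = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(num % 1_000_000).zfill(6)

SECRET = b"token-seed-for-one-customer"

def bank_accepts(code: str, at: float) -> bool:
    # The server checks the code against the clock, not the submitter.
    return hmac.compare_digest(code, one_time_code(SECRET, at))

t = 1_000_000.0
victim_code = one_time_code(SECRET, t)        # victim reads this off a token
relayed_at = t + 5                            # attacker relays it 5 s later
print(bank_accepts(victim_code, relayed_at))  # True: same 30-second window
# Once the window passes, the captured code stops validating—which is
# exactly why GOZ relayed the codes back to its operators in real time.
```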
that Bogachev and his associates would be unable to reestablish control over the infected machines through a new server using the domain names generated each week by the DGA. The FBI worked with a number of security researchers to reverse engineer the DGA and figure out how it generated the list of domain names it provided to the infected computers every week. That way, the FBI would know in advance which thousand domains would be selected each week. Then, right before the takedown, they acquired a temporary restraining order that required domain registries in the U.S. to redirect any attempts to contact those thousand domains to a substitute, government-run server. Since the domains generated by the DGA with the .ru top-level domain were not controlled by registries in the U.S., but rather by companies in Russia, the order also required U.S. service providers to block any connection requests to the .ru domains generated by the DGA.
Ref. 9701-I
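Bogachev’s actual algorithm is not reproduced here, but a toy DGA makes the investigators’ approach concrete: because the domain list is derived deterministically from a seed and the date, anyone who has reverse engineered the algorithm can compute the same list in advance. (The seed, hashing scheme, and domain format below are invented for illustration.)

```python
import hashlib

def weekly_domains(seed: bytes, year: int, week: int, count: int = 1000):
    """Deterministically derive a week's candidate rendezvous domains.

    Any party who knows the algorithm and seed—the malware on an
    infected machine, or an investigator who has reverse engineered
    it—computes exactly the same list.
    """
    domains = []
    for i in range(count):
        material = b"%s|%d|%d|%d" % (seed, year, week, i)
        digest = hashlib.sha256(material).hexdigest()
        # Map the hash to a plausible-looking hostname.
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        tld = (".com", ".net", ".org", ".ru")[int(digest[12], 16) % 4]
        domains.append(name + tld)
    return domains

# Bot and investigator independently derive identical lists, so the
# domains can be registered or sinkholed before the botnet calls out.
bot_view = weekly_domains(b"toy-seed", 2014, 22)
fbi_view = weekly_domains(b"toy-seed", 2014, 22)
print(bot_view[:3])
print(bot_view == fbi_view)  # True
```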
From a criminal’s perspective, the most difficult, or riskiest, stages of cybercrimes have typically been the ones that come after the perpetrator has already successfully stolen data from a protected computer. Finding a way into a computer system to steal data is comparatively easy. Finding a way to monetize that data—making sure that credit card companies don’t cancel all the cards whose numbers you’ve stolen before you have a chance to sell them, or identifying buyers willing to pay a good price, or hiding those profits from the police—can be much harder. Financial intermediaries are easy targets for law enforcement because their records often show a clear money trail between the victims and the perpetrators. The system of money mules utilized by Bogachev was slightly less risky than the cashers employed by Gonzalez because it did not require transporting any cash back into the United States, nor did it rely on the sale of stolen information through a centralized and well-known black market dealer like Yastremskiy who was likely to attract the attention of the police.
Ref. A417-J