Four people standing in a semi-circle, looking down with grave expressions. Two men are dressed in plain, navy blue suits. The remaining two are well dressed: a woman and a man, clearly business professionals, probably executive management.
“I’ve never seen anything like this in all my years as an investigator,” says one of the men in a cheap suit. The other man remains impassive and grim behind his aviator sunglasses.
The woman lowers her forehead to the palm of her hand and shakes her head. “You hear about this once in a while in the news, but you never think it’ll happen to you.”
Her colleague nods his agreement, reflecting on the situation. “A healthy regimen, regular testing, and you fall into the trap of believing it will be old age that delivers the final blow. But then a simple infection takes down the strongest.”
Pan out, revealing the computer screen they’re all gazing at. A stock ticker shows steady growth for most of the diagram on the screen, followed by a steep fall at the end.
The investigator reaches out and turns off the monitor. “Staring at it won’t bring it back, folks. You did all you could: formal processes for system hardening and patching, penetration testing, application firewalls–you followed the defense in depth playbook to a ‘T’. But even Achilles had a weakness. You’ve been compromised.”
The second investigator whips off his sunglasses. “Damned hackers and their advanced persistent threats,” he sneers.
Well, maybe it’s not quite that dramatic, at least not the denouement of a successful attack; yet it can be just as devastating to your business. Many executives are more concerned about negative publicity than monetary fines when it comes to computer security. The spate of information security compromises in the last couple of years earned 2011 the title of “Year of the Security Breach” from IBM’s X-Force in its Trend and Risk Report. Coupled with mandatory reporting requirements for private records exposure, such as the new rules proposed by EU Justice Commissioner Viviane Reding and HITECH’s requirement in the US, organizations are feeling pressure from both their attackers and legislative bodies.
The fictional scene above is taken from a real-life case in which 35 million customer records were stolen from a service provider. Ironically, the provider had near-perfect security controls. So good, in fact, that the attackers ended up compromising a third party that supplied utility software, and trojaned its product. The primary target didn’t detect the malware because it was written specifically for this attack, so no anti-malware solution had a signature for it. Adding insult to injury, the target’s own patch management process proved an effective mechanism for thoroughly distributing the malware throughout the environment.
It would seem that resistance is futile: no matter what you do, a persistent and creative attacker will find a way to compromise your systems. That doesn’t mean all is lost: compromise is not the problem; data theft, including surveillance, and data destruction are. Just as with biological infection, the introduction of a foreign organism isn’t the problem; we live with plenty of parasitic entities that cause no consequential damage. Most of your relatives’ computers are bot-infected, but until they receive a directive from their command-and-control server, they’re relatively harmless.
That’s not to say you should live with a freeloading piece of malware and wait for it to go rogue. But when your defenses fail both to keep badness out and to contain it, your next line of defense is the ability to detect it. The purpose of security intelligence is to identify anomalous behavior in your environment early enough to stem the potential damage.
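As a minimal, hypothetical illustration of that idea (the event counts and threshold below are invented for this sketch), anomaly detection can start as simply as baselining normal activity and flagging statistical outliers:

```python
from statistics import mean, stdev

def is_anomalous(today, history, threshold=3.0):
    """Flag today's event count if it deviates from the historical
    baseline by more than `threshold` standard deviations (a z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# A host that normally generates ~100 outbound DNS queries a day
history = [98, 102, 95, 110, 101, 99, 104, 97, 103, 100]
print(is_anomalous(105, history))   # within normal variation
print(is_anomalous(5000, history))  # a sudden spike worth investigating
```

Real security intelligence platforms correlate far richer context than a single counter, but the principle is the same: behavior can only be called anomalous against a baseline of normal.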
And that’s what I’ll be talking about at RSA Europe 2012, at the Hilton London Metropole: “Staying Out of the Headlines with Security Intelligence”. The presentation is Thursday, October 11, 14:40–15:30 (session ID: DAS309), and I hope to see you there.
If you’re not going to be at RSA Europe, check out my latest webcast with SCMagazine UK.
In Star Trek I never saw Dr McCoy texting Spock or playing Angry Birds on the medical tricorder, which may be why I never saw him swearing at it over having to type in an eight-digit passcode to satisfy an MDM policy. Bones would just wave it over someone and poof!—instantaneous and accurate diagnostic results. No mobile malware to slow it down or exfiltrate ePHI.
Today’s tricorder technology, tablet computers and smart phones, is helping transform the health care industry by providing anywhere, anytime access to electronic medical and health records (EMR/EHR). Even now, changing dentists often means having paper records copied and snail-mailed from one practice to another. And because x-rays were difficult to copy, the gaining practice had to take a whole new set of films. Today x-rays can be produced digitally, copied at will with no loss of fidelity, and transferred as soon as they’re taken—no manual processing needed—and delivered to the doctor on a tablet.
Technology is enabling not only interconnected health data, but a connected health experience for the patient, caregiver, payers, and pharmacies. Immediate results mean less anxiety for patients, and quicker diagnoses and treatment plans. Connected health also means caregivers can monitor patient outcomes even after they’ve left the clinic or been discharged from the hospital, allowing all parties to interact toward a more timely recovery and avoid mistakes.
But this utility is encumbered with a heavy security burden. In fact, the U.S. Department of Homeland Security classifies the health care system as national critical infrastructure. In addition to the same threats that all organizations face, health care organizations have to protect patient data, and there’s cause for concern just from the headlines in the last year or so:
- 34,000 patient files were compromised when a contractor’s laptop was stolen from his car
- A hacker in Eastern Europe broke into a state-owned computer in Utah and stole 800,000 records–more than a quarter of the state’s population
- Backup tapes were stolen from a health insurer for the military. 5 million patient records were compromised, the biggest health data exposure to date
- A hospital insider accessed patient records over a period of 17 months and sold them
The health care industry is a lot like the security industry: no one wants to have to call on either, and we often wait until it’s too late to invest in both health care and information security. Health care is currently focused on early detection, and while we profess to have the same goal in information security, it’s clear that we’re not doing a great job: the majority of system compromises go unnoticed for months, according to the 2011 Verizon Breach Report. A once-a-year check-up is too infrequent and gross a test to catch all but the most obvious ailments, just as manual log analysis is ineffective at early detection of attacks.
We have an advantage in information security over health care, though: the capability to perform continuous and non-invasive monitoring. Medical diagnostics are getting less traumatic with transcutaneous blood-gas monitors, dielectric and near-infrared spectroscopy for blood glucose monitoring, ultrasound for cardiac and fetal development monitoring, but we’re a long way from the medical tricorder.
But predictive analytics can be applied to determine whether consumers are making smart food choices. Health providers and payers could collaborate to offer discounts to patients who consistently eat healthily, using supermarket loyalty cards to track food purchases, and WiFi-connected heart rate monitors to establish a pattern of exercise and record vital statistics frequently. These are only a couple of possibilities: there’s a practically endless supply of ideas that could fuel a whole industry, contributing to not only a healthy population, but a healthy economy.
IBM’s own Watson is in the process of retraining from being a Jeopardy champion to a medical diagnostician. Watson is able to ingest patient history, compare diagnostics with other patients with similar symptoms and backgrounds, assimilate new research from unstructured sources such as medical journals, and arrive at a more accurate diagnosis more quickly than most doctors.
All of this should percolate into a health dashboard, available to patients, caregivers, payers, and goods manufacturers, with different levels of detail based on role and associated need to know. The system has to be transparent and intuitive, and security needs to be baked in, not added on.
Healthcare has more complex requirements than many other industries. Healthcare organizations are not only concerned with common threats like script kiddies, malware, and hacktivists—especially given the political climate in the US around the health care reform law; they also have to protect electronic protected health information (ePHI) against exposure. This requires vigilance against deliberate records fraud as well as accidental leakage of personal information. For example, a clinician may access the records of a celebrity admitted to the hospital to sell the diagnosis to the media, or look up a neighbor’s health history out of curiosity or for ammunition in a feud. Identifying this broad set of threats requires the total visibility and sophisticated analytics found only in Security Intelligence.
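As a hedged sketch of how such insider misuse might surface in monitoring (the clinicians, patients, and data structures below are purely illustrative), one simple analytic is to flag record accesses with no documented treatment relationship:

```python
def flag_suspicious_access(access_log, treatment_map):
    """Return accesses where the clinician has no documented treatment
    relationship with the patient -- candidates for review, not proof
    of wrongdoing."""
    return [
        (clinician, patient)
        for clinician, patient in access_log
        if patient not in treatment_map.get(clinician, set())
    ]

# Illustrative data: which patients each clinician legitimately treats
treatment_map = {
    "dr_a": {"patient_1", "patient_2"},
    "dr_b": {"patient_3"},
}
access_log = [
    ("dr_a", "patient_1"),  # legitimate: dr_a treats patient_1
    ("dr_b", "patient_2"),  # no treatment relationship -- flag it
]
print(flag_suspicious_access(access_log, treatment_map))
# [('dr_b', 'patient_2')]
```

A production system would layer in context such as emergency overrides, on-call rotations, and record sensitivity, but the core correlation of access events against a relationship baseline is the same.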
It’s appropriate that the same type of analytics that can be used to monitor health choices and diagnose medical conditions can also detect exposure of ePHI and medical fraud. They both involve consuming enormous amounts of wildly diverse data, interpreting it in the context of the problem at hand, and correlating seemingly unrelated information to yield an accurate and actionable conclusion. Said otherwise, they both involve the application of intelligence, which will transform the healthcare industry just as it has for security.
To learn more about the transformation of health care with security intelligence, read Chris’s article at SecurityWeek.
The CIA has The Farm, a secret facility somewhere in Virginia, where it trains agents in wiretapping, interrogation, and handling human “assets”. Similarly, the GTRA (Government Technology Research Alliance) convenes in remote Bedford Springs, Pennsylvania, roughly halfway between DC and Pittsburgh, in a hotel that looks like The Overlook from The Shining. Instead of how to poison an enemy operative, though, the federal delegates discuss cyber-security and collaboration between the government and industry.
I spent Sunday through Tuesday a couple of weeks ago exchanging ideas with the best and brightest in the public sector at roundtable meetings, on a panel entitled “How to Drive Efficiency and Improve Security”, and mingling between sessions and at the Havana Nights after-hours soiree. Top-of-mind concerns echo those in the private sector, including secure mobile device and cloud strategies, and doing more with less. Federal agencies are also concerned with Continuous Monitoring, an initiative I’ve written about in the past here. While the private sector doesn’t have to comply with a government regulation mandating yet another set of security controls, the end of the government’s fiscal year is fast approaching, and security managers are looking for answers on how to meet the compliance deadline.
According to NIST SP 800-137, “Information security continuous monitoring (ISCM) is defined as maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.” This is close to the definition of Security Intelligence, which provides actionable and comprehensive insight for managing risks and threats, from protection and detection through remediation. Core to continuous monitoring are centralized event management, situational awareness—aka, context—and analytics, to reduce the onslaught of data into discrete, manageable, and actionable items.
Many of the GTRA delegates are trying to reconcile the ambiguity in the continuous monitoring guidance and the confusing array of solutions offered by the security technology industry. Within SP 800-137, the terms “continuous” and “ongoing” are not prescriptive; instead, they are defined to “mean that security controls and organizational risks are assessed and analyzed at a frequency sufficient to support risk-based security decisions to adequately protect organization information. Data collection, no matter how frequent, is performed at discrete intervals.” Once organizations come to grips with what the terms mean to them, what needs to be monitored? Just logs from security technology like firewalls and IPSes? How about network activity? And where do they get data about external threats to add situational awareness?
The advice that I give is that it all starts with a strategy. Don’t build your security posture around the 800-137 controls; map them to your mission objectives and the security initiatives that support them. A posture built this way will often achieve full coverage of the controls through incremental security goals. With this roadmap in hand, you can start planning and organizing activities. Remember, an effective security program is constantly evolving, so the end state is never final; you don’t have to get it perfect the first go-round. But if you don’t take the first step, it’s guaranteed that you won’t succeed in complying with the Continuous Monitoring mandate.
The same is true in the private sector, whether you’re subject to government regulations like SOX or contractual obligations like PCI DSS. In many cases organizations are subject to multiple compliance mandates, and many of them have overlapping controls. Map them to each other and the union of all controls should map to organizational goals and security initiatives. As you meet the controls that intersect, you’ll quickly start to fulfill the obligations of many compliance mandates at the same time.
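The mapping exercise can be sketched with simple set operations; the control names and mandate contents below are invented for illustration, not drawn from the actual standards:

```python
# Hypothetical control identifiers for two overlapping mandates
sox = {"access_review", "log_retention", "change_control",
       "segregation_of_duties"}
pci_dss = {"access_review", "log_retention", "encryption_at_rest",
           "network_segmentation"}

# Controls satisfying both mandates at once -- implement these first
overlap = sox & pci_dss

# The union is what must map back to organizational goals and initiatives
all_controls = sox | pci_dss

print(sorted(overlap))    # ['access_review', 'log_retention']
print(len(all_controls))  # 6
```

Implementing the intersection first yields the quickest multi-mandate payoff, while the union is the full scope that must trace back to organizational goals and security initiatives.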
Even with a solid plan, government agencies are struggling with how to become or stay secure in an increasingly complex threat landscape with shrinking budgets and resources. The panel was asked how private industry is helping to stretch federal budgets while at the same time improving security. My view, particularly after talking with security managers, CISOs, and CIOs in government agencies, is that the complexity of existing security solutions, comprising dozens of technologies from as many vendors, makes them expensive to purchase and maintain, ineffective at stopping determined attackers, and a source of confusion about how to achieve compliance with continuous monitoring. The answer is to evaluate the existing profusion of security technology, eliminate ineffective products, and consolidate where possible. The key to making these decisions is to monitor and measure, and the solutions that provide that capability will also give agencies the visibility to fulfill a large part of the Continuous Monitoring obligation.
Government decision makers recognize this and asked during the executive meetings whether SIEM can replace some of the existing security technology. There seems to be some confusion as to what SIEM is and what it can do, as many of the roundtable attendees were there to get an orientation on the capabilities of QRadar and Security Intelligence. Some agencies don’t have SIEM at all, some have basic log management solutions, and others have first-generation SIEMs that simply have not lived up to the promises made at purchase. The results were positive: Q1 Labs/IBM received the “Best Continuous Monitoring Round Table” award. It’s gratifying to be validated by the members of GTRA, some of the most strategic and advanced leaders in the federal government.
In the final analysis, the agreement about how the public and private sectors can collaborate to improve efficiency and security is to let the government work on integrating agencies and let industry work on integrating technology. Because there is a wide range of requirements in both the private and public sectors, the solutions must be flexible enough to adapt to diverse processes. Q1 Labs has been in the business of continuous monitoring for almost a decade–long before the government initiative. And now, with the entire IBM Security Systems portfolio, we have the most comprehensive security offering, integrated to reduce the total cost of ownership.
We look forward to our continued relationship with GTRA and evolving our security solutions to meet the needs of both the private and public sector, combining the research and development resources of IBM and the feedback of the entire GTRA Council.
This week the security blogs have been abuzz about Flame, the newly discovered malware that appears to be geographically targeted at Iran, Lebanon, Syria, Sudan, and other countries in the Middle East and North Africa. Security analysts are infatuated by the “massive, highly sophisticated piece of malware”, and are likening it to Stuxnet and DuQu.
I’m not so impressed: I believe we’re seeing the beginning of a long line of copycats, and Flame is a clunky primate of the next stage in the evolution of advanced malware; it’s just another generation in the APT ontogeny.
In fact, IBM’s X-Force analysis concludes that, “At this time, Flame appears to be limited to a very small geography, primarily certain countries in the Middle East, and does not appear to autopropagate. This malware appears to be highly targeted and designed to infect a minimal number of specifically targeted individuals. Consequently, the immediate threat from this malware, in the general network population, remains very low despite its high profile in the press.”
Stuxnet was a new breed of threat, created as a cyber weapon to serve the goals of a government or alliance. It was sophisticated and keenly targeted. DuQu was a cousin with a slightly different purpose–to gather intelligence–although still targeted at industrial control systems. Flame does not appear to share the same code base and is not targeted at any particular industry. You can read all about its capability to capture screenshots, turn on the microphone or video camera on computers and laptops, and other features elsewhere; there’s no need to clutter up the internet with redundant descriptions.
Flame may not share the same ancestry, but it’s just another piece of malware with a primary purpose of cyber espionage. It also looks like it’s extendable via plug-ins. When you gather all the modules that comprise Flame, it encompasses 20 megabytes. Stuxnet, by comparison, was half a megabyte, one-fortieth the size. In many cases software goes through an optimization stage in its development, particularly for drivers and other code with strict performance, size, or specialized runtime requirements–like stealth. Stuxnet and DuQu were compact and efficient; Flame, though it has a broader purpose, most likely has not been subjected to a rigorous optimization effort. This suggests a general-purpose application rather than a targeted weapon. So maybe the cyber criminals have taken a lesson from Stuxnet, and Flame is intended for black-market hire like most botnets, with plug-ins tailored for all sorts of nefarious objectives.
Flame was discovered almost two years after Stuxnet, but there’s speculation that they may have been under development in parallel. Files with the same names as those found in Flame were discovered on machines as early as December 2007 and April 2008. There are many possible relationships between Stuxnet and Flame that begin before the public became aware of either. There are underground forums for malware developers, and it’s entirely possible that the architects of both Stuxnet and Flame frequented those venues, where the ideas for both were sparked, perhaps in tandem, or they may have collaborated on challenges as anonymous peers, with one effort predating the other.
Regardless of their possibly divergent parentage–a government project and a framework for cyber crime–their development timeline, and their narrow or broad purposes, Flame is not a game-changer. We learned the lessons from Stuxnet and they apply directly to Flame. It’s a good reminder that DuQu wasn’t the last of its kind, but now it’s time to carry on, albeit with a newly heightened sense of awareness, and get back to business.
Alan Paller of the SANS Institute had a few interesting things to say at the ISSA-LA Security Summit IV, but two struck me as incredibly salient. The first is that CEOs actually do understand the importance of information security. I’ve heard security experts–smart and well-respected ones–utter that executive management doesn’t “grok” security. That’s true, but they don’t need to grok it; that’s the responsibility of us who inhabit the world of zero-days and hacktivists and APTs. CEOs need us to analyze and summarize our knowledge and present it to them in a business context. The problem isn’t just that we in security generally don’t speak the language of the boardroom; we simply aren’t wired the same. Security practitioners are a risk-averse group, by and large; CEOs are risk managers.
Which makes sense: CEOs are responsible for growing the business and there’s no reward without risk—hopefully well-calculated risk. We don’t want our executives pumping tokens into slot machines in Vegas hoping to hit it big. On the other hand, we don’t want them stuffing the cash from revenues into their mattresses. So when they decide to invest in new market opportunities or augment the current business model using technology, they want to be on the safe side of the risk threshold—but just barely.
But security folks’ impulse is to grab the business stakeholders by the shirt collars and drag them away from that scary precipice. We’re much like lawyers in that way. Their job is to minimize liability, a form of risk, optimally to eliminate it with the fabled iron-clad contract. Of course with lawyers it’s as much a negotiation tactic as dogma; each party stands on opposite sides of an issue with backs to their own walls, fully knowing they’ll both end up somewhere in the middle.
But security is not at odds with the business; it’s not a negotiation between the two parties. Our job is to determine appropriate responses and come to the table with the best, most informed decision possible with the given data. We need to find a happy middle between a purist security stance that discourages new initiatives (e.g., cloud, BYOD, partner portals, etc.), and a Wild West approach where the business does whatever it wants without addressing risk — and present that to executive management. They need to trust that we understand the business and are helping them to make the right risk management decision. Remember, “defend” is not the only response to a threat; other mitigating controls include transferring risk and accepting it.
Alan also said that CEOs want to know “how much is enough.” This is the heart of the matter. Finding the center of gravity that lets the business grow and thrive is the key to transforming the perception of information security from a cabal of naysayers to trusted risk analysts and business enablers.