The CIA has The Farm, a secret facility somewhere in Virginia, where it trains agents in wiretapping, interrogation, and handling human “assets”. Similarly, the GTRA (Government Technology Research Alliance) convenes in remote Bedford Springs, Pennsylvania, roughly halfway between DC and Pittsburgh, in a hotel that looks like The Overlook from The Shining. Instead of learning how to poison an enemy operative, though, the federal delegates discuss cyber-security and collaboration between government and industry.
I spent Sunday through Tuesday a couple of weeks ago exchanging ideas with the best and brightest in the public sector at roundtable meetings, on a panel entitled “How to Drive Efficiency and Improve Security”, and mingling between sessions and at the Havana Nights after-hours soiree. Top-of-mind concerns echo those in the private sector, including secure mobile device and cloud strategies and doing more with less. Federal agencies are also concerned with Continuous Monitoring, an initiative I’ve written about in the past here. While the private sector doesn’t have to comply with a government regulation mandating yet another set of security controls, the end of the government’s fiscal year is fast approaching, and security managers are looking for answers on how to meet the compliance deadline.
According to NIST SP 800-137, “Information security continuous monitoring (ISCM) is defined as maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.” This is close to the definition of Security Intelligence, which provides actionable and comprehensive insight for managing risks and threats, from protection and detection through remediation. Core to continuous monitoring are centralized event management, situational awareness (that is, context), and analytics that reduce the onslaught of data into a discrete, manageable set of actionable items.
Many of the GTRA delegates are trying to reconcile the ambiguity in the continuous monitoring guidance and the confusing array of solutions offered by the security technology industry. Within SP 800-137, the terms “continuous” and “ongoing” are not prescriptive; instead, they are defined to “mean that security controls and organizational risks are assessed and analyzed at a frequency sufficient to support risk-based security decisions to adequately protect organization information. Data collection, no matter how frequent, is performed at discrete intervals.” Once organizations come to grips with what the terms mean to them, what needs to be monitored? Just logs from security technology like firewalls and IPSes? How about network activity? And where do they get data about external threats to add situational awareness?
The advice I give is that it all starts with a strategy. Don’t build your security posture around the 800-137 controls; instead, map the controls to your mission objectives and the security initiatives that support them. A strong mapping will typically yield complete coverage of the controls along with incremental security goals. With this roadmap in hand, you can start planning and organizing activities, and get started. Remember, an effective security program is constantly evolving, so the end state is never final; you don’t have to get it perfect the first go-round. But if you don’t take the first step, it’s guaranteed that you won’t succeed in complying with the Continuous Monitoring mandate.
The same is true in the private sector, whether you’re subject to government regulations like SOX or contractual obligations like PCI DSS. In many cases organizations are subject to multiple compliance mandates, and many of them have overlapping controls. Map them to each other and the union of all controls should map to organizational goals and security initiatives. As you meet the controls that intersect, you’ll quickly start to fulfill the obligations of many compliance mandates at the same time.
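The mapping exercise described above can be sketched in a few lines. This is a minimal illustration, not a real control catalog: the mandate names are from the text, but the control identifiers are invented placeholders.

```python
# Sketch: find the overlap among compliance mandates so that satisfying
# the shared controls fulfills several obligations at once.
# Control names below are illustrative placeholders, not taken from the
# actual SOX, PCI DSS, or FISMA control catalogs.
from functools import reduce

mandates = {
    "SOX":     {"access-control", "audit-logging", "change-management"},
    "PCI-DSS": {"access-control", "audit-logging", "encryption", "vuln-scanning"},
    "FISMA":   {"access-control", "audit-logging", "continuous-monitoring"},
}

# Controls required by every mandate: satisfy these first for maximum leverage.
common = reduce(set.intersection, mandates.values())

# The union is the full control set that must map to organizational goals.
all_controls = reduce(set.union, mandates.values())

print(sorted(common))     # controls shared by all three mandates
print(len(all_controls))  # total distinct controls to plan for
```

Even this toy version shows the payoff: two of the six distinct controls satisfy all three mandates simultaneously.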
Even with a solid plan, government agencies are struggling with how to become or stay secure in an increasingly complex threat landscape, with less budget and fewer resources. The panel was asked how private industry is helping to stretch federal budgets while at the same time improving security. My view, particularly after talking with security managers, CISOs, and CIOs in government agencies, is that the complexity of existing security deployments, comprising dozens of technologies from as many vendors, makes them expensive to purchase and maintain, ineffective at stopping determined attackers, and confusing as a means to achieve compliance with continuous monitoring. The answer is to evaluate the existing profusion of security technology, eliminate ineffective products, and consolidate where possible. The key to making these decisions is to monitor and measure, and the solutions that provide that capability will also give agencies the visibility to fulfill a large part of the Continuous Monitoring obligation.
Government decision makers recognize this and asked during the executive meetings whether SIEM can replace some of the existing security technology. There seems to be some confusion as to what SIEM is and what it can do, as many of the roundtable attendees were there to get an orientation on the capabilities of QRadar and Security Intelligence. Some agencies don’t have SIEM at all, some have basic log management solutions, and others have first-generation SIEMs that simply have not lived up to the promises made at purchase. The results were positive, the proof being that Q1 Labs/IBM received the “Best Continuous Monitoring Round Table” award. It’s gratifying to be validated by the members of GTRA, some of the most strategic and advanced leaders in the federal government.
In the final analysis, the agreement about how the public and private sectors can collaborate to improve efficiency and security is to let the government work on integrating agencies and let industry work on integrating technology. Because there is a wide range of requirements in both the private and public sectors, the solutions must be flexible enough to adapt to diverse processes. Q1 Labs has been in the business of continuous monitoring for almost a decade, long before the government initiative. And now, with the entire IBM Security Systems portfolio, we have the most comprehensive security offering, integrated to reduce the total cost of ownership.
We look forward to our continued relationship with GTRA and evolving our security solutions to meet the needs of both the private and public sector, combining the research and development resources of IBM and the feedback of the entire GTRA Council.
Rich Mogull of Securosis recently wrote a blog entry called “Can You Stop a Targeted Attack?” that nicely complements a Dark Reading article and accompanying report by his colleague, Adrian Lane, entitled “15 Ways to Get More Value from Security Log and Event Data.”
After (justifiably) lamenting that many “vendors have been APT-washing their stuff trying to convince anyone who would sit still that their run-of-the-mill IPS or endpoint protection product” could stop APT attacks “with fairy dust and assorted other black magic,” Rich goes on to ask some interesting questions.
- How many of the adversaries facing organizations today are advanced or persistent? Probably very few, since most of them are “today’s version of script kiddies trying to smash and grab their way out of the despondency of their existence” by stealing your organization’s customer details and payment card information. (I would add that it’s not just script kiddies but also organized gangs of cyber-criminals, operating out of eastern Europe and other exotic locations, preying on both large and small businesses who don’t have even the most basic security controls.)
- Are existing controls such as perimeter defenses sufficient? No (but existing controls still have a role to play).
- Do targeted attacks exist? Absolutely (the Aurora attack on Google being just one example).
- Are new technologies emerging to help prevent targeted attacks? Yes — Rich writes that “lots of vendors are learning and evolving their offerings to factor in this new class of attacker.”
- How can next-generation SIEM and security intelligence help? Rich doesn’t use these specific terms in his blog but writes that “Regardless of what happens on the prevention side, you still need to monitor the hell out of your stuff … it’s career-limiting to plan on stopping [targeted attacks]” so you should still invest in “monitoring, forensics, and response – even in the presence of new and innovative protections.” He mentions Global Payments as an example of an organization that discovered they had been breached by monitoring their egress traffic and “seeing stuff they didn’t like leaving their network” (one of the capabilities provided by QRadar); and yes, they didn’t stop the breach “but it’s a hell of a lot better to catch it yourself than to hear from your payment processor or the FBI that you have a ‘problem’”. Gartner analyst Mark Nicolett made a similar observation in “Using SIEM for Targeted Attack Detection” [complimentary download] when he wrote that “Organizations are failing at early breach detection, with more than 85% of breaches undetected by the breached organization.”
In Adrian’s Dark Reading article, he writes that “we are drowning in [security] data but are thirsty for actionable information.” And in the full report from Dark Reading’s Security Monitoring Tech Center, he writes that by deploying SIEM with “automation and resources, along with a healthy dose of human intervention and insight, organizations can make their data work for them, instead of the other way around.”
Adrian also writes that SIEM “technologies are being used not just to analyze data after the fact, but also to perform real-time detection quickly followed by meaningful forensic examination of events.”
By the way — does this sound like Big Data? Of course it does — but we’re talking about purpose-built Big Data analytics that were designed specifically for security — not just a generic Big Data repository with a bunch of scripting tools. QRadar has always been built on a Big Data architecture — distributed, parallel, elastic and indexed — but it’s the applications built on top of this architecture that help you find the proverbial needle in the haystack via automated intelligence.
One of the ways that the QRadar Security Intelligence Platform helps you increase the signal-to-noise ratio is via its embedded expert security knowledge, based on nearly 10 years of real-world experience, including: hundreds of pre-configured correlation rules; 1,500+ security/compliance reports; built-in support for 400+ data sources, including parsing and normalization; and native support for the collection of network flow traffic (via deep packet inspection), which can then be used for behavioral analysis and anomaly detection in combination with information from log sources.
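The two ideas in that list, parsing/normalization of diverse log sources and correlation rules that fire on patterns across events, can be illustrated with a toy example. This is a minimal sketch of the general technique, not QRadar’s actual schema, rule language, or thresholds, all of which are invented here.

```python
# Sketch of the normalization + correlation pattern a SIEM applies.
# Vendor field names, the common schema, and the threshold are illustrative.
from collections import defaultdict

def normalize(raw):
    """Map vendor-specific log fields onto a common event schema."""
    if raw.get("vendor") == "os":
        return {"src": raw["ip"],
                "event": "login_fail" if raw["status"] == "failure" else "login_ok"}
    if raw.get("vendor") == "fw":
        return {"src": raw["source_ip"], "event": raw["action"]}
    return None  # unknown source: would normally go to an "unparsed" bucket

def correlate(events, threshold=3):
    """Minimal correlation rule: flag sources with repeated failed logins."""
    fails = defaultdict(int)
    offenders = set()
    for e in events:
        if e and e["event"] == "login_fail":
            fails[e["src"]] += 1
            if fails[e["src"]] >= threshold:
                offenders.add(e["src"])
    return offenders

raw_logs = [
    {"vendor": "os", "ip": "10.0.0.5", "status": "failure"},
    {"vendor": "os", "ip": "10.0.0.5", "status": "failure"},
    {"vendor": "fw", "source_ip": "10.0.0.9", "action": "deny"},
    {"vendor": "os", "ip": "10.0.0.5", "status": "failure"},
]
print(correlate(normalize(r) for r in raw_logs))
```

The point of the sketch is the division of labor: once heterogeneous sources are normalized into one schema, a single rule can correlate across all of them, which is what makes hundreds of pre-built rules practical.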
As Adrian Lane writes in the Dark Reading report, “Enterprises are swimming in the sea of data generated by networks, servers, personal computing devices and applications … Just as the bad guys adjust their attacks to take advantage of new vulnerabilities or to tune malware to evade detection, security professionals must continue to adapt. Sitting still means failure. Ultimately, these log files are your view into what’s going on, and it’s your job to figure out what’s important and how to get that information with as little work as possible.”
And hopefully we can help make your job easier, unlike first-generation SIEMs that are complex and require armies of people (in-house staff and/or contractors) to deploy and operate. Gartner says that QRadar is “relatively straightforward to deploy and maintain across a wide range of deployment scales” while Jerry Walters, Director of Information Security at Ohio Health, says in his YouTube interview that “QRadar gives us the visibility to find the virtual needle in the haystack when it comes to discovering what happened and when, and to proactively prevent things that are potentially going to be problems.”
 Critical Capabilities for Security Information and Event Management, Gartner, 21 May 2012
Government agencies, like their private sector brethren, are knee-deep in IT security challenges, threats, and regulations. While that’s not much of a shock, this might be – according to the Government Accountability Office, the number of reported security incidents increased by over 650 percent during fiscal years 2006–2010. At the same time, government agencies have widespread deficiencies in security controls, leading to vulnerabilities, undetected breaches, and insider fraud.
To help meet these challenges, the federal government is implementing a risk-based IT security strategy based on deploying enterprise continuous monitoring solutions. These solutions will continually assess the actual security state of agencies’ IT networks and systems, while providing scoring information that managers can use to prioritize actions needed to reduce risk and improve their security grades. Continuous monitoring will enable agencies to determine their own security health and compare it to other agencies. Scoring will also allow the different lines of business within an agency to more effectively work together, while enabling agencies to gain the same operating efficiencies from IT investments that Fortune 500 companies have realized.
Recently, along with our friends at 1105 Media and partner Accuvant, we discussed the importance of continuous monitoring and related steps agencies should take while approaching it. Security intelligence plays a critical role in achieving continuous monitoring because of its ability to centralize information into a single console from various data sources.
Most importantly, we talked about how many government agencies are successfully addressing previously disparate functions — including SIEM, risk management, log management, and network behavior analytics — into a total security intelligence solution that fits the constrained budgets and resources of government agencies. The QRadar Security Intelligence Platform enables our customers to leverage existing assets, stabilize budgets, and easily comply with new mandates while maintaining a proactive stance on risk management and security.
If you missed the webinar, or just want to revisit it, watch the whole thing HERE. For a deeper look at how security intelligence helps federal agencies adopt a continuous monitoring security program without requiring additional resources, download this white paper.
Last week I participated in a panel on Continuous Monitoring at FOSE. Joining me were Mark Crouter from MITRE as the moderator, John “Rick” Walsh, chief of technology and business processes in the Cybersecurity Directorate of the Army’s Office of the CIO, and Angela Orebaugh, Fellow and Senior Associate at Booz Allen Hamilton. Auspicious company indeed.
For those not tuned into the federal government’s cybersecurity initiatives, the concept of continuous monitoring evolved from the previous approach in FISMA (the Federal Information Security Management Act), which mandated annual reviews of federal agencies’ security programs. After a few years of implementation it was widely recognized that the reviews generated rooms full of paper that was obsolete as soon as it was printed, yet didn’t raise information security programs to an acceptable level of effectiveness. Between 2006 and 2010, the number of security incidents rose by over 650%. The resulting strategy is embodied in FISMA 2012 (2.0), which is aimed at continuous monitoring of security controls, determining gaps between current and accepted security baselines, and quantifying risk.
Rick has been facing the challenges of implementing continuous monitoring within the government, and his experience has been that the different business processes, missions, and systems create obstacles, but once these are overcome, the solution yields financial and process efficiencies as well as improved security. One of the biggest challenges is enumerating the assets, but once done, the inventory is sure to reveal duplicated systems and opportunities to consolidate systems and software licensing.
Angela framed the conversation in her intro, which was appropriate since she co-authored NIST Special Publication 800-137, Information Security Continuous Monitoring for Federal Information Systems and Organizations. She has also been involved with the Security Content Automation Protocols (SCAP, pronounced ess-cap) project, which provides a set of standards for describing vulnerabilities (CVE, common vulnerabilities & exposures), systems (CPE, common platform enumeration), and configuration standards (CCE, common configuration enumeration), as well as a scoring system (CVSS), a test definition language (XCCDF), and a vulnerability definition language (OVAL). Angela advocated use of SCAP as a foundation for continuous monitoring.
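Of the SCAP components Angela listed, CVSS is the easiest to show concretely: its base metrics travel as a compact vector string. Below is a minimal sketch of parsing a CVSS v2 base vector into its named metrics; it deliberately stops short of computing the numeric score, and the example vector is just an illustration.

```python
# Sketch: parsing a CVSS v2 base vector, one of the SCAP building blocks.
# Metric abbreviations (AV, AC, Au, C, I, A) come from the CVSS v2 spec.
def parse_cvss_v2(vector):
    """Split a vector like 'AV:N/AC:L/Au:N/C:P/I:P/A:P' into a metric dict."""
    return dict(part.split(":") for part in vector.split("/"))

metrics = parse_cvss_v2("AV:N/AC:L/Au:N/C:P/I:P/A:P")
print(metrics["AV"])  # 'N' means the vulnerability is exploitable over the network
```

Because every SCAP-aware scanner emits vulnerabilities in this common vocabulary (CVE IDs, CPE platform names, CVSS vectors), a continuous monitoring solution can aggregate and score findings from many tools without per-vendor translation.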
Questions from the audience mainly focused on how to implement continuous monitoring, including getting buy-in from senior management and budgeting. The key is to show short-term results that are meaningful to business stakeholders. While continuous monitoring is in the process of being mandated, the danger is treating it as a checklist and doing the bare minimum to comply; whereas, when done right, continuous monitoring can be the cornerstone of real security improvements: interrupting the kill chain through early attack detection, providing total visibility that extends to troubleshooting operational problems, and giving management a security dashboard with both technical and business gauges. The State Department was one of the first successful adopters of continuous monitoring and was able not only to reduce its high-risk vulnerabilities by 90%, but also to slash the cost of certification and accreditation by 62%.
One of the more amorphous questions was how continuous is continuous? Does data need to be analyzed in real-time or near real-time? Does this apply to all systems? The answer is that it depends on each individual agency’s goals and the telemetry that can be collected from the systems. Organizations don’t want to have to retool systems to provide events as they occur, unless the systems are critical enough to warrant that cost and effort and there is no other way to gain the needed visibility. The panel all agreed that some systems only need to report into a central monitoring solution on an occasional basis (vulnerability scanners, for example), while network monitoring should report in near real-time, which means in one-minute intervals for most systems that create NetFlow records. Ultimately, there is no one-size-fits-all answer.
My overall impression from the panel is that continuous monitoring in the federal sector is what we call Security Intelligence in private industry, and both need to be defined and implemented per the enterprise’s or agency’s specific needs. The primary difference is that continuous monitoring is focused on metrics: quantifying the delta between the expected and measured states of assets and classifying those differences as vulnerabilities. The scorecard approach provides a common baseline for different organizations to compare themselves against each other, and for management to better understand their organizational security posture at any given moment and compare it against past performance.
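That expected-versus-measured delta is simple to express in code. The sketch below is a toy illustration of the scorecard idea; the baseline settings and the naive percentage score are invented for the example, not any agency’s actual metric.

```python
# Sketch: quantify the delta between an asset's expected (baseline) state
# and its measured state, then roll it up into a naive compliance score.
# Baseline keys and the scoring formula are illustrative only.
def config_drift(expected, measured):
    """Return settings whose measured value differs from the baseline."""
    return {key: (expected[key], measured.get(key))
            for key in expected if measured.get(key) != expected[key]}

baseline = {"ssh_root_login": "disabled", "patch_level": "2012-05", "av_enabled": True}
observed = {"ssh_root_login": "enabled",  "patch_level": "2012-05", "av_enabled": True}

drift = config_drift(baseline, observed)
score = 100 * (1 - len(drift) / len(baseline))  # percent of checks passing
print(drift)
print(round(score, 2))
```

Run per asset and aggregated per line of business, this kind of delta is exactly what turns raw configuration data into the comparable scorecards the federal approach emphasizes.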
I was asked at the GTRA conference how the public and private sectors differ. My view is that the government does more up-front analysis and planning, while the private sector sees a need and builds a solution. Between well-considered frameworks, like FISMA 2.0, and tools like QRadar and OpenPages, the federal government and industry have an opportunity to collaborate on a complete Security Intelligence solution incorporating continuous monitoring and meaningful security scorecards and dashboards.
Click here to learn how Security Intelligence can help Federal organizations address continuous monitoring requirements. Find out how QRadar Risk Manager addresses the need for configuration auditing, and assessing the risk of configuration changes, across multi-vendor network environments (switches, routers, firewalls and IDS/IPS).
According to a recent report in the Wall Street Journal, a group of hackers in China broke into the U.S. Chamber of Commerce’s network around November 2009 and were not discovered until more than a year later.
The hackers likely used a spearphishing attack to install spyware on end-user machines. The spyware was used to steal employee administrative credentials, which were then used to install about a half dozen back doors that communicated with computers in China every week or two.
The hackers stole sensitive Chamber data such as trade-policy documents, meeting notes, trip reports and schedules, and emails containing the names of companies and individuals in contact with the Chamber. They even used their own search tools to locate documents containing keywords related to financial and budget information, and stole all emails from four targeted employees – who worked on Asia policy – for approximately six weeks during one portion of the attack.
And here’s an interesting twist — a thermostat at a Chamber town house on Capitol Hill was communicating with an Internet address in China, and a printer spontaneously started printing pages with Chinese characters.
The Chamber represents the interests of U.S. companies in Washington and its members include most of the nation’s largest corporations. As a result of this incident, the organization’s COO concluded that “It’s nearly impossible to keep people out. The best thing you can do is have something that tells you when they get in. It’s the new normal. I expect this to continue for the foreseeable future. I expect to be surprised again.”
So how can next-generation SIEM and Security Intelligence help?
First, we should acknowledge that even strict adherence to some compliance mandates, such as PCI-DSS and HIPAA/HITECH, won’t usually protect intellectual property (IP) such as strategic plans, product designs and proprietary algorithms. Of course, broader compliance frameworks such as ISO 27001/27002, and NIST 800-53 – as well as recent SEC guidance regarding cybersecurity risks and disclosure – will definitely help tighten controls and improve the overall security posture of your infrastructure by requiring centralized monitoring and other best practices, along with helping to address minimum “standards of due care” expectations of your board of directors, customers and shareholders.
Next-generation SIEM can certainly help in reducing the cost and effort of compliance – by centralizing and automating compliance reporting and efficiently addressing log retention requirements – but it also provides significant added value by helping to proactively detect attacks such as this one.
Second, the fact that the hackers were in the network for more than a year before being detected is not unusual. According to the 2011 Data Breach Investigations Report, more than 60% of breaches remain undiscovered for a period of months or longer (versus days or weeks). And according to Kim Peretti, former senior counsel at the U.S. Department of Justice, “Our most formidable challenge is getting companies to detect they have been compromised.”
Why? Because most organizations still rely on basic server and device logs which are widely dispersed across their infrastructures – combined with manual, after-the-fact log analysis – making it virtually impossible to detect any intruder alarms because the information simply gets lost in the noise.
Continuous real-time monitoring of all network and system activity – combined with real-time event correlation and automated behavior profiling – can help by rapidly identifying anomalous or out-of-policy events such as:
- A server (or thermostat) communicating with an IP address in China.
- An unusual Windows service starting up, such as a backdoor or spyware program.
- A spike in network traffic and/or data server activity, such as a high volume of downloads from a SharePoint server during off-hours.
- A high number of failed logins to critical servers, which can indicate a brute-force password attack.
- A configuration change, such as an unauthorized port being enabled.
- An inappropriate use of protocols and applications, such as sensitive data being exfiltrated via P2P or social media applications; in this case, detection requires application-aware (Layer 7) monitoring with flow analysis and deep examination of packet content.
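One of the rules in the list above, flagging a spike in off-hours download activity, can be sketched in a few lines. This is a toy illustration of the behavioral-profiling idea: the off-hours window, the threshold, and the event data are all invented for the example.

```python
# Sketch: flag sources whose off-hours download volume exceeds a threshold.
# The 22:00-06:00 window and 500 MB threshold are illustrative assumptions.
from datetime import datetime

def off_hours(ts):
    """Treat 22:00 through 06:00 as off-hours for this example."""
    return ts.hour >= 22 or ts.hour < 6

def detect_offhours_spike(transfers, threshold_mb=500):
    """Sum per-source off-hours download volume; flag sources over threshold."""
    totals = {}
    for src, ts, size_mb in transfers:
        if off_hours(ts):
            totals[src] = totals.get(src, 0) + size_mb
    return {src for src, total in totals.items() if total > threshold_mb}

events = [
    ("10.1.1.7", datetime(2012, 6, 1, 23, 15), 300),
    ("10.1.1.7", datetime(2012, 6, 2, 2, 40), 350),
    ("10.1.1.8", datetime(2012, 6, 1, 14, 0), 900),  # business hours: ignored
]
print(detect_offhours_spike(events))
```

In a real deployment the baseline would be learned from each source’s historical behavior rather than hard-coded, which is what lets anomaly detection catch activity that no static signature describes.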
More information on how organizations can leverage a unified architecture to reduce risk with continuous, real-time monitoring, can be found in this white paper, “Countering Advanced Threats.”