Posts Tagged ‘Cyber Security’
As the news broke that the final Star Wars trilogy was going to be made, I was excited and intrigued about the plot. However, one question I keep asking myself is, “How different would the story have been if the Deathstar had been more secure?”
For most Star Wars fans, the moment when the Rebel Alliance flew in en masse to destroy the Deathstar was one of great intrigue. With such immense power and protection around the entire perimeter of the battlestation, how could it ever be penetrated?
Of course the hero, Luke Skywalker, comes to save the day by finding a small gap and, undetected, he flies through to the center of the Deathstar, destroying it and escaping without a single scratch.
When comparing this scenario to what we see every day in the news regarding cyber attacks, it is very similar, right down to the part where organizations react to the breach far too late. It is of the utmost importance for organizations to be able to see and react instantly when a security breach is happening, no matter how small. As the Deathstar shows, it takes only one opening for an attacker to slip in and cause a tremendous amount of damage. We need look no further than the news, where an attacker described how he stole a database of 150,000 contacts using a SQL injection, without any reaction from the victim.
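As an aside, the SQL injection technique mentioned above follows a well-known pattern. The sketch below is purely illustrative (not related to the actual breach): it uses Python's built-in sqlite3 module and a hypothetical contacts table to show how concatenating user input into a query opens exactly this kind of gap, and how a parameterized query closes it:

```python
import sqlite3

# Classic SQL-injection pattern (illustrative only): user input
# concatenated straight into the query string lets an attacker
# widen the WHERE clause to match the whole table.
def find_contact_unsafe(cur, name):
    return cur.execute(
        "SELECT * FROM contacts WHERE name = '" + name + "'").fetchall()

def find_contact_safe(cur, name):
    # Parameterized query: the input is bound as data, never parsed as SQL.
    return cur.execute(
        "SELECT * FROM contacts WHERE name = ?", (name,)).fetchall()
```

With a payload like `x' OR '1'='1`, the unsafe version returns every row in the table, while the safe version treats the payload as a literal name and matches nothing.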
Having a thorough Security Intelligence strategy in place, with a next generation SIEM as the centerpiece, is vital for an organization. With real-time normalization and correlation across your network, any abnormal behavior will be flagged and your security team notified immediately, with details of the where, when, how, what and why of the attack.
It is just my opinion, but if the Deathstar had had an anomaly detection system to highlight immediately when enemies were inside its network, Darth Vader would have had a much easier life… “May the Force be with you.”
To learn more about securing your own “Deathstar,” watch this Dark Reading webcast featuring end user Richard Webster, Senior Manager of Security at Sanofi, and Michael Applebaum, Director of Product Marketing at Q1 Labs, an IBM Company. In it, they discuss real-world lessons about applying Security Intelligence and next-generation SIEM for threat protection.
With the release of QRadar Security Intelligence Platform 7.1, we’re excited to share with you a host of new advances to our family of Security Intelligence products – including QRadar SIEM, QRadar Log Manager and QRadar Risk Manager. These innovations are making it easier for users to leverage cloud investments, simplify management, collect and manage data more flexibly, and replicate or extend QRadar deployments. As a result, QRadar users will receive even greater insight and visibility, further reduce manual work and gain higher system performance. Let’s dive in!
Leverage Cloud Investments
We know many of you have built significant private and public cloud infrastructures and are looking for new virtual workloads to deploy in the cloud. With QRadar 7.1 you now have an additional type of appliance – the Event Collector – that you can deploy virtually, providing more ways to use your cloud environment to gain richer security intelligence.
Event collectors – which come in both virtual and hardware appliance form – provide continuous event logging capabilities, even when network connectivity is unreliable. They collect event logs and forward them to an event processor or all-in-one appliance for correlation, analysis and long-term storage. If network connectivity is lost, they can queue events in a storage buffer and then forward them upon re-connecting. (We call this “store and forward.”) In addition to serving locations with intermittent network connections (like naval vessels), event collectors are well-suited for collecting logs in distributed locations with low to moderate event volumes, such as retail stores and satellite offices. A large retailer, for example, might have hundreds of stores in which they want to collect event data, but the data generated in each location is modest enough that event processors (with terabytes of storage per appliance) aren’t required.
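The store-and-forward pattern described above can be sketched in a few lines. This is an illustrative model only, not QRadar's implementation; the class, its callbacks, and the in-memory buffer are all assumptions made for the example:

```python
from collections import deque

class StoreAndForwardCollector:
    """Illustrative event collector: buffers events locally while the
    upstream event processor is unreachable, then flushes the backlog
    (in order) once connectivity returns."""

    def __init__(self, forward_fn, is_connected_fn):
        self.forward = forward_fn            # sends one event upstream
        self.is_connected = is_connected_fn  # probes network connectivity
        self.buffer = deque()                # the local "store" for outages

    def collect(self, event):
        if self.is_connected():
            self.flush()                     # drain any backlog first
            self.forward(event)
        else:
            self.buffer.append(event)        # store for later forwarding

    def flush(self):
        # Forward queued events oldest-first while the link stays up.
        while self.buffer and self.is_connected():
            self.forward(self.buffer.popleft())
```

The key design point is that flushing the backlog before forwarding new events preserves event ordering for downstream correlation.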
With this release, you now have access to a full complement of virtual appliances – console & all-in-one, event processor, flow processor, VFlow collector, and event collector – to best utilize your current and future cloud infrastructures. Even better, appliances can be mixed and matched among virtual appliance, hardware appliance and traditional software form factors, to meet your specific needs.
Simplify Management – Especially for Big Data
As we and others like Scott Crawford and Jon Oltsik have written, information security is truly a big data analytics challenge today. With its heritage in network flow collection and anomaly detection, QRadar has been collecting and correlating massive data sets in real-time since before big data became a white-hot phenomenon. Critical infrastructure and tier-one telecommunications providers, banks, and energy and utility companies are using QRadar to correlate as many as one million events per second (EPS) in real-time, thanks to QRadar’s purpose-built, embedded Ariel database. But with such massive data volumes come management challenges.
In response, we developed new Index Management capabilities in QRadar 7.1 that provide more refined data management and ultimately better performance. As the volume of stored data explodes, challenges inherent in querying big data become more pronounced – and so do the benefits of optimizing indexes for the queries most often run. QRadar’s default search indexes have always followed the 80/20 rule, providing out-of-the-box indexing for the most commonly used properties. Now we’re taking indexing a step further, enabling deep customization and tuning.
With QRadar 7.1, users have granular control over the creation of search indexes that enable speedy querying. While the fixed database indexing configuration that QRadar has historically provided works well for most scenarios, some clients would benefit from additional or different indexes. That’s why we added the ability to customize the indexing scheme for the event and flow database – so users can drop existing indexes to free up system resources or create new indexes to optimize the system for their specific needs.
QRadar also provides invaluable visibility into the use of indexes – with statistical reporting on the frequency of searches involving each property, how often each property’s index is used, and the size of each index – to help inform indexing decisions. This enables more efficient storage utilization and superior search performance.
Do you suspect one property is getting searched a lot? Get the data.
Do you wonder how big an index has grown? Find out.
Want to start indexing a custom property and see how often that index is used? No problem.
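The kind of indexing decision this visibility supports can be sketched as a simple heuristic. The statistics, property names, and threshold below are invented for illustration; QRadar's actual reporting and tuning criteria will differ:

```python
# Hypothetical index-usage statistics per searchable property:
# how often it is searched, how large its index has grown (MB),
# and whether it is currently indexed. All values are made up.
stats = {
    "source_ip":  {"searches": 1200, "index_mb": 300, "indexed": True},
    "username":   {"searches": 950,  "index_mb": 0,   "indexed": False},
    "event_name": {"searches": 15,   "index_mb": 450, "indexed": True},
}

def indexing_recommendations(stats, min_searches=100):
    """Suggest indexes to add (hot but unindexed properties) and
    indexes to drop (rarely searched, yet consuming resources)."""
    add = [p for p, s in stats.items()
           if not s["indexed"] and s["searches"] >= min_searches]
    drop = [p for p, s in stats.items()
            if s["indexed"] and s["searches"] < min_searches]
    return add, drop
```

On the sample data, the heuristic would suggest indexing `username` (searched often, unindexed) and dropping the `event_name` index (large, rarely used).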
Another new capability that simplifies management is QRadar Risk Manager’s Enhanced Policy Monitoring. Risk Manager excels at monitoring network configurations and system vulnerabilities for potential security and compliance violations, and has always alerted when a policy is violated. Now it takes monitoring a step further with the ability to automatically notify when a policy is passed, providing positive evidence of compliance with external regulations and internal corporate policies. For example, you might want a positive notification when the percent of regulatory assets with Internet exposure vulnerabilities is within policy, or when the percent of regulatory assets with client side vulnerabilities that have communicated with the Internet is within policy. Now you can gain affirmative proof of such compliance.
Collect and Manage Data More Flexibly
QRadar 7.1 also offers new capabilities for collecting and managing data with greater flexibility. These include WinCollect – a versatile and scalable new QRadar capability for Windows event collection. WinCollect provides a superior and agentless means for collecting events from large numbers of systems. Installed on a Windows server of the customer’s choice, WinCollect can use the Windows Event Log API to pull events from target systems and then forward them to QRadar, or use Windows event forwarding and allow target systems to automatically push events to it and then forward them to QRadar. WinCollect complements existing collection mechanisms, including Q1 Labs’ own ALE solution, third-party approaches, and native Windows Server capabilities. In a subsequent blog post, we’ll explain the advantages of each approach and the value of having a broad set of choices.
Event collectors (described earlier) also help simplify data collection and management, in addition to leveraging cloud infrastructure and enabling event collection under unreliable connectivity. To begin with, their ability to “store and forward” data not only applies when a network connection is lost; it can also be used proactively for policy-based event forwarding. In some cases, a remote location might have reliable but limited network bandwidth, and you might want to limit the collector’s use of bandwidth to specific (less busy) times. With QRadar 7.1, you can limit forwarding by bandwidth utilization (e.g., never consume >1MB/second), and/or set an hourly, daily or weekly forwarding schedule. In addition, event collectors can filter event data before it is forwarded for correlation, reporting and long-term storage.
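A policy-based forwarding check of this kind might be modeled roughly as follows. The function, the 1 MB/second cap, and the overnight schedule window are illustrative assumptions, not QRadar's actual configuration options:

```python
from datetime import datetime

def may_forward(now, bytes_sent_this_second,
                max_bytes_per_sec=1_000_000, allowed_hours=range(0, 6)):
    """Policy gate evaluated before forwarding a batch of events:
    forward only during the scheduled quiet window and only while
    under the per-second bandwidth cap. Thresholds are illustrative."""
    in_window = now.hour in allowed_hours        # e.g., midnight to 6 a.m.
    under_cap = bytes_sent_this_second < max_bytes_per_sec
    return in_window and under_cap
```

A collector loop would call a check like this before each send, queueing events locally whenever the policy says no, exactly as in the store-and-forward case.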
Additionally, we have released more than a dozen new product integrations (device support modules) that enable users to normalize and analyze even more types of security telemetry. These include IBM Security zSecure Audit, which allows sending z/OS, RACF, ACF2, Top Secret, DB2, and CICS events from the System Management Facilities (SMF) log to QRadar (in addition to the native z/OS logs that QRadar already collects). We have also completed integrations with many third-party products, such as Verdasys Digital Guardian, AppSecInc DbProtect and Trend Micro Deep Discovery.
Build Extended Solutions and Replicate Existing Deployments
Lastly, we are enabling clients to build extended security intelligence solutions and replicate existing deployments. With Security Intelligence Content Importing/Exporting, you can export correlation rules, building blocks, reference sets, report templates, dashboard widgets and more from a QRadar system to an external device, and subsequently import them into another QRadar system. This enables quick deployment of a new QRadar system based on an existing system or template, as well as sharing of security intelligence content across systems.
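Conceptually, the export/import round trip looks something like the sketch below. The JSON layout and field names are hypothetical, chosen only to show portable content moving from one system to another, and bear no relation to QRadar's actual export format:

```python
import json

def export_content(rules, report_templates, path):
    """Illustrative export of security-intelligence content to a
    portable file. The bundle structure here is a made-up example."""
    bundle = {"rules": rules, "report_templates": report_templates}
    with open(path, "w") as f:
        json.dump(bundle, f)

def import_content(path):
    """Load an exported bundle so another system can adopt it."""
    with open(path) as f:
        return json.load(f)
```

The value is in the round trip: content authored and tested on one deployment can be rehydrated unchanged on another.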
We see this being used in several ways:
- Enabling clients to copy custom-built security intelligence content from one deployment to another (across business units or geographies)
- Enabling clients to copy content from a development or test environment to a production system
- Enabling solution providers and system integrators to build unique Security Intelligence intellectual property that they can distribute to their customers.
While QRadar already delivers thousands of rules, report templates, dashboard widgets and saved searches out-of-the-box, many business partners have additional expertise to offer to clients, and have been eagerly awaiting this capability.
To Learn More
With this hefty release completed, we’re gearing up to bring some fantastic new innovations to market in 2013. In the meantime, please try QRadar 7.1 for yourself and let us know what you think. We also encourage you to learn about the other IBM Security product releases just announced, which include capabilities for securing big data environments (including IBM InfoSphere BigInsights and Cloudera), risk-based access control for mobile users in BYOD environments, and privileged identity management.
To read more about using SIEM for targeted attack detection (APTs), you can also download this Gartner report. Or see how organizations are using network flow analytics for better threat detection and network visibility with this Q1 Labs paper. Best wishes in your security journey!
Late last month, the IBM X-Force team released their mid-year trend and risk report. This 100-plus-page document includes research on the latest attack trends, risks and threats, and is full of tips on how to avoid them and keep your organization safe. For those of you who haven’t had time to read the full report, one highlight I’d like to point out is the contribution from Michael Applebaum, product marketing director at Q1 Labs. He contributed a terrific write-up on using security intelligence for advanced threat protection, which also includes a list of best practices for anomaly detection. If you have a few minutes, check out this post Michael wrote for the IBM Software blog, and if you are interested in reading the full IBM X-Force report, download it here.
Excerpt from the IBM Software Blog:
Not every security breach is the result of an advanced persistent threat (APT). In fact, only a small fraction probably are.
But the industry is buzzing about APTs today because the business impact of an APT can be massive. Victims of these attacks are keenly targeted, and a successful breach can expose customer data, financial data, intellectual property and other information assets. Recovering from this kind of attack can be a costly and long-term challenge, since trust takes years to build but moments to destroy. Regaining the confidence of customers and other stakeholders is inevitably the most difficult part of recovering. Perhaps surprisingly, APT targets aren’t always Fortune 500 corporations and government agencies. It was reported that one long-running APT compromised real estate firms, construction companies and even a national Olympic committee. The lesson is that any organization with information of value to others is a potential target...
Read his full post for answers to questions like “Do I really need to worry about an APT attack?” and “How does security intelligence work?”.
The CIA has The Farm, a secret facility somewhere in Virginia, where it trains agents in wiretapping, interrogation, and handling human “assets”. Similarly, the GTRA (Government Technology Research Alliance) convenes in remote Bedford Springs, Pennsylvania, roughly halfway between DC and Pittsburgh, in a hotel that looks like The Overlook from The Shining. Instead of how to poison an enemy operative, though, the federal delegates discuss cyber-security and collaboration between the government and industry.
I spent Sunday through Tuesday a couple of weeks ago exchanging ideas with the best and brightest in the public sector at roundtable meetings, on a panel entitled “How to Drive Efficiency and Improve Security,” and mingling between sessions and at the Havana Nights after-hours soiree. Top-of-mind concerns echo those in the private sector, including secure mobile device and cloud strategies, and doing more with less. Federal agencies are also concerned with Continuous Monitoring, an initiative I’ve written about in the past here. While the private sector doesn’t have to comply with a government regulation mandating yet another set of security controls, the end of the government’s fiscal year is fast approaching, and federal security managers are looking for answers on how to meet their compliance deadlines.
According to NIST SP 800-137, “Information security continuous monitoring (ISCM) is defined as maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.” This is close to the definition of Security Intelligence, which provides actionable and comprehensive insight for managing risks and threats, from protection and detection through remediation. Core to continuous monitoring is centralized event management, situational awareness (that is, context) and analytics, to reduce the onslaught of data into discrete, manageable, and actionable insights.
Many of the GTRA delegates are trying to reconcile the ambiguity in the continuous monitoring guidance and the confusing array of solutions offered by the security technology industry. Within SP 800-137, the terms “continuous” and “ongoing” are not prescriptive; instead, they are defined to “mean that security controls and organizational risks are assessed and analyzed at a frequency sufficient to support risk-based security decisions to adequately protect organization information. Data collection, no matter how frequent, is performed at discrete intervals.” Once organizations come to grips with what the terms mean to them, what needs to be monitored? Just logs from security technology like firewalls and IPSes? How about network activity? And where do they get data about external threats to add situational awareness?
The advice that I give is that it all starts with a strategy. Don’t build your security posture around the 800-137 controls; map them to your mission objectives and the security initiatives that support them. A strong roadmap will typically reach total coverage through incremental security goals. With this roadmap in hand, you can start planning and organizing activities, and get started. Remember, an effective security program is constantly evolving, so the end state is not final; you don’t have to get it perfect the first go-round. And if you don’t take the first step, it’s guaranteed that you won’t succeed in complying with the Continuous Monitoring mandate.
The same is true in the private sector, whether you’re subject to government regulations like SOX or contractual obligations like PCI DSS. In many cases organizations are subject to multiple compliance mandates, and many of them have overlapping controls. Map them to each other and the union of all controls should map to organizational goals and security initiatives. As you meet the controls that intersect, you’ll quickly start to fulfill the obligations of many compliance mandates at the same time.
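The control-mapping exercise above is essentially set arithmetic: the controls shared by multiple mandates pay off first, and the union defines the full program. A toy sketch, with made-up control names standing in for SOX and PCI DSS requirements:

```python
# Hypothetical control sets for two compliance mandates; the real
# mappings are far larger, but the arithmetic is the same.
sox_controls = {"access-review", "log-retention", "change-control"}
pci_controls = {"log-retention", "encryption", "access-review"}

# Controls that intersect: satisfying these fulfills both mandates at once.
shared = sox_controls & pci_controls

# The union of all controls is what the overall security program maps to.
full_program = sox_controls | pci_controls
```

Prioritizing `shared` first is exactly the "meet the controls that intersect" advice: each control implemented there checks a box under every mandate that requires it.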
Even with a solid plan, government agencies are struggling with how to become or stay secure in an increasingly complex threat landscape, with less budget and fewer resources. The panel was asked how private industry is helping to stretch federal budgets while at the same time improving security. My view, particularly after talking with security managers, CISOs, and CIOs in government agencies, is that existing security deployments, comprising dozens of technologies from as many vendors, are expensive to purchase and maintain, ineffective at stopping determined attackers, and a source of confusion about how to achieve compliance with continuous monitoring. The answer is to evaluate the existing profusion of security technology, eliminate ineffective products, and consolidate where possible. The key to making these decisions is to monitor and measure, and the solutions that provide that capability will also give agencies the visibility to fulfill a large part of the Continuous Monitoring obligation.
Government decision makers recognize this and asked during the executive meetings whether SIEM can replace some of their existing security technology. There seems to be some confusion as to what SIEM is and what it can do, as many of the roundtable attendees were there to get an orientation on the capabilities of QRadar and Security Intelligence. Some agencies don’t have SIEM at all, some have basic log management solutions, and others have first-generation SIEMs that simply have not lived up to the promises made at purchase. The results were positive, the proof being that Q1 Labs/IBM received the “Best Continuous Monitoring Round Table” award. It’s gratifying to be validated by the members of GTRA, some of the most strategic and advanced leaders in the federal government.
In the final analysis, the agreement about how the public and private sectors can collaborate to improve efficiency and security is to let the government work on integrating agencies and let industry work on integrating technology. Because there is a wide range of requirements in both the private and public sectors, the solutions must be flexible enough to adapt to diverse processes. Q1 Labs has been in the business of continuous monitoring for almost a decade–long before the government initiative. And now, with the entire IBM Security Systems portfolio, we have the most comprehensive security offering, integrated to reduce the total cost of ownership.
We look forward to our continued relationship with GTRA and evolving our security solutions to meet the needs of both the private and public sector, combining the research and development resources of IBM and the feedback of the entire GTRA Council.
Alan Paller of the SANS Institute had a few interesting things to say at the ISSA-LA Security Summit IV, but two struck me as incredibly salient. The first is that CEOs actually do understand the importance of information security. I’ve heard security experts, smart and well-respected ones, claim that executive management doesn’t “grok” security. That’s true, but they don’t need to grok it; that’s the responsibility of those of us who inhabit the world of zero-days and hacktivists and APTs. CEOs need us to analyze and summarize our knowledge and present it to them in a business context. The problem isn’t just that we in security generally don’t speak the language of the boardroom; we simply aren’t wired the same. Security practitioners are a risk-averse group, by and large; CEOs are risk managers.
Which makes sense: CEOs are responsible for growing the business and there’s no reward without risk—hopefully well-calculated risk. We don’t want our executives pumping tokens into slot machines in Vegas hoping to hit it big. On the other hand, we don’t want them stuffing the cash from revenues into their mattresses. So when they decide to invest in new market opportunities or augment the current business model using technology, they want to be on the safe side of the risk threshold—but just barely.
But security folks’ impulse is to grab the business stakeholders by the shirt collars and drag them away from that scary precipice. We’re much like lawyers in that way. Their job is to minimize liability, a form of risk, optimally to eliminate it with the fabled iron-clad contract. Of course with lawyers it’s as much a negotiation tactic as dogma; each party stands on opposite sides of an issue with backs to their own walls, fully knowing they’ll both end up somewhere in the middle.
But security is not at odds with the business; it’s not a negotiation between the two parties. Our job is to determine appropriate responses and come to the table with the best, most informed decision possible given the data. We need to find a happy middle between a purist security stance that discourages new initiatives (e.g., cloud, BYOD, partner portals) and a Wild West approach where the business does whatever it wants without addressing risk, and present that to executive management. They need to trust that we understand the business and are helping them to make the right risk management decision. Remember, “defend” is not the only response to a threat; other mitigating controls include transferring risk and accepting it.
Alan also said that CEOs want to know “how much is enough.” This is the heart of the matter. Finding the center of gravity that lets the business grow and thrive is the key to transforming the perception of information security from a cabal of naysayers to trusted risk analysts and business enablers.