In Gartner’s latest report, “Tools for Network-Aware Firewall Policy Assessment and Operational Support,” the analyst firm discussed the value that solutions like QRadar Risk Manager provide to network and security operations, including network/security policy assessment, risk, and compliance. It covers, in great depth, the value these solutions offer network and security teams, and it provides an overview of the features of QRadar Risk Manager and competing products.
So what’s so interesting about this report? Q1 Labs is the only SIEM vendor included, which is significant: it suggests we are perceived as leading the charge in this space amongst our SIEM competitors.
The report is also important because it provides great insight into how products like QRadar Risk Manager help network and security teams improve their overall security posture through better configuration and vulnerability visibility.
A few snippets worth highlighting:
- “Only Q1 Labs incorporates routing flow data in its analysis of rule efficacy and network behavior over time.” – This is an important differentiator amongst vendors in the field: tools that rely on configuration data alone will often miss situations where a configuration is thought to be adequate but still allows potentially risky network traffic to propagate.
- “Tools that perform firewall policy assessment and related operational support functions, within the context of the network’s connectivity and security zones, provide substantial benefits to security operations” – This is directly aligned with QRadar Risk Manager’s capabilities and core to its ability to assess and monitor configuration and compliance policies.
- “Q1 Labs […] integrate[s] with a range of third party vulnerability scanning products and leverage[s] knowledge of topology and reachability to prioritize specific systems and vulnerabilities for remediation” – This capability is one of the key value propositions of the QRadar Risk Manager solution. It minimizes the false positives common amongst vulnerability scanners and helps security teams focus on the highest-risk vulnerabilities, those that can be easily exposed because of the way the network is configured (a brief sketch of the idea follows this list).
- “Tools that enable rule changes for access requests to be simulated before implementation contribute to fewer service availability outages” – QRadar Risk Manager’s ability to simulate network changes prior to implementation can greatly reduce the risk those changes pose during operational deployment.
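To make the reachability point a little more concrete, here is a minimal Python sketch of the general idea: weighting a scanner finding by whether topology analysis says the host is actually reachable from an untrusted zone. The data model and the weights are my own illustrative assumptions, not QRadar Risk Manager’s actual algorithm.

```python
# Hypothetical sketch: prioritizing vulnerabilities by network reachability.
# Data model and weighting are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Vulnerability:
    host: str
    cve: str
    cvss: float                     # scanner severity score (0-10)
    reachable_from_untrusted: bool  # derived from topology/firewall analysis

def risk_score(v: Vulnerability) -> float:
    """Weight scanner severity by whether the host is actually reachable
    from an untrusted zone; unreachable findings are de-prioritized."""
    weight = 1.0 if v.reachable_from_untrusted else 0.2
    return v.cvss * weight

findings = [
    Vulnerability("web-01", "CVE-2010-0001", 9.8, True),
    Vulnerability("db-07", "CVE-2010-0002", 9.8, False),   # blocked by firewall
    Vulnerability("app-03", "CVE-2010-0003", 6.5, True),
]

# Highest effective risk first: the reachable 9.8 outranks the unreachable one.
for v in sorted(findings, key=risk_score, reverse=True):
    print(f"{v.host} {v.cve}: effective risk {risk_score(v):.1f}")
```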
In my opinion, Gartner has done the industry a great service in publishing this report; it is one of the few reports I’ve seen that helps organizations understand the value of products like QRadar Risk Manager. I spend the majority of my days working with organizations that are trying to be as proactive as possible in getting ahead of the network “threat” curve. Numerous industry reports have found that organizations are successfully breached primarily because they are (1) poorly configured and/or (2) have not adequately addressed exposed vulnerabilities. Helping in these two areas is exactly where solutions like QRadar Risk Manager are focused.
The term Advanced Persistent Threat (APT) has seen increasing usage in information security circles, and for good reason. It refers to a much more sophisticated, determined, and patient type of opponent in the game of information security than what we’ve become accustomed to. An appropriate (and perhaps accurate) metaphor is that these are the script kiddies who spent their time defacing websites in the late ’90s and who, having now grown up, are interested in employing their skills for nothing but financial gain.
APTs represent a unique type of challenge and distinguish themselves from garden-variety security threats in a number of ways:
- A high degree of sophistication. The responsible parties behind APTs are generally organized crime and state-sponsored cyber-warfare groups. These groups are well-funded, highly organized, and tend to have significant resources at their disposal.
- Deliberate and targeted. Rather than engaging in indiscriminate drive-by shooting tactics, casting wide nets and trawling the Internet for vulnerable systems, these groups tend to only pursue carefully selected targets.
- A high degree of patience. Subtle, persistent, and inconspicuous is the name of the game – no banging on the front door or tromping noisily through your network. The aim is to silently infiltrate, sometimes maintaining a long-term presence within the target, and ultimately carry out the objective of the attack.
With APTs becoming more prevalent, there seems to be a growing consensus that traditional security tools and techniques fall short of addressing the problem. The general message here isn’t terribly new: reliance on checkbox-style compliance with industry security and control standards, and deployment of perimeter defenses and signature-based threat detection, may enable an organization to detect and deflect the bulk of “dumb” activity. But managing the risk posed by APTs requires a correspondingly more sophisticated approach, in conjunction with a more sophisticated set of tools.
Rather than employing an approach of “let’s look for all individual known bad things”, a more suitable tactic might be “let’s look for things that don’t jibe with our operational profile”. This clearly speaks to an anomaly detection capability at both the application and network level. But beyond that, there is a need for a higher degree of overall security intelligence.
Security intelligence can be a somewhat difficult concept to pin down, but it might be best described as the ability to derive actionable information from the sum of ALL security data available to an organization, placed in the context of relative importance, rather than the narrow, compartmentalized view currently employed by individual, siloed components of the traditional security toolset.
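To illustrate the “things that don’t jibe with our operational profile” idea, here is a minimal sketch of baseline-driven anomaly detection over flow volumes. The thresholds and data are illustrative assumptions, not any particular product’s detection logic.

```python
# Minimal, hypothetical sketch of baseline-driven anomaly detection over
# network flow volumes. Thresholds and data are illustrative assumptions.

from statistics import mean, stdev

# Hourly outbound byte counts per host, learned from historical flow data.
baseline = {
    "10.0.0.5": [1200, 1100, 1300, 1250, 1150],
    "10.0.0.9": [400, 420, 390, 410, 405],
}

def is_anomalous(host: str, observed: float, sigmas: float = 3.0) -> bool:
    """Flag activity deviating more than `sigmas` standard deviations
    from the host's learned profile."""
    history = baseline[host]
    mu, sd = mean(history), stdev(history)
    return abs(observed - mu) > sigmas * sd

# A quiet host suddenly pushing 50x its normal volume stands out, even if
# no signature matches the traffic.
print(is_anomalous("10.0.0.9", 20000))  # True
print(is_anomalous("10.0.0.5", 1280))   # False
```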
So I recently traded my Blackberry for a slick new Droid-based smartphone. All in all, I must say I’m pretty pleased. What I really love about my phone is my Slingplayer app – providing full cable TV right to my phone… very cool! My random security thought of the day: could the Slingbox architecture become a model for better securing data on a smartphone?
As the saying goes, “everything old is new again”. There is no doubt in my mind, with the continued adoption of virtual technology, that corporations around the world will again turn to a centralized model of control, but perhaps this time with a new twist: virtualized desktops (AKA a private cloud). I’m actually looking forward to the day when I have an uber-powerful virtual PC within my company’s “cloud” that hosts all my applications: a virtual PC that I can access no matter where I am and no matter what device I’m on, including my latest smartphone. Interestingly enough, that seems to be what I can do now with my Slingbox and Slingplayer. While I’m not saying my Slingbox is a 100% secure application, it does seem like it could become secure quite easily. How easily? Well, if the connection between my phone and that magic little Slingbox sitting next to my cable box were fully encrypted point-to-point, it would be a pretty secure application. This architecture feels eerily similar to having a centralized virtual PC with a fully encrypted point-to-point channel. I can see the network and security teams salivating over this one for many reasons, the most compelling of which is a regained level of control over the security of enterprise data.
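For the curious, the point-to-point pattern itself is simple. Here is a minimal sketch of a client opening an encrypted channel to a hypothetical virtual desktop gateway; the host name is a placeholder of my own, and this shows the general pattern rather than Slingbox’s (or any vendor’s) actual protocol.

```python
# Hypothetical sketch: a client opening a fully encrypted point-to-point
# channel to a centralized virtual desktop gateway. The host name is an
# illustrative placeholder.

import socket
import ssl

context = ssl.create_default_context()  # verifies the server certificate

# "vdi-gateway.example.com" stands in for the corporate gateway fronting
# the virtual PC; all application traffic rides inside this tunnel.
with socket.create_connection(("vdi-gateway.example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="vdi-gateway.example.com") as tls:
        print("negotiated:", tls.version(), tls.cipher()[0])
```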
There’s a whole lot to like about this centralized client “cloud” architecture. First, it will greatly minimize the sprawl of confidential data, thereby reducing the risk of a breach. Second, it will dramatically improve the ability to securely store, back up, and restore data. And finally, it will give organizations the ability to better monitor access to enterprise systems and data. For more information on this last benefit, see my previous blog post on tricks and tools for better securing a virtual infrastructure.
It’s a reality that smartphones and other mobile internet gadgetry are here to stay. Organizations need to be quick to adapt to the mobile world, and there’s little doubt in my mind that a centralized client-side “cloud” is one technology that will become widely adopted. The security benefits are too big to ignore. Perhaps it’s a big leap to think the architecture of my beloved Slingbox will solve all the corporate information security woes, but it feels like it’s heading in the right direction.
I just finished reading an article today titled “The Cloud’s Impact on Security”. I found the article enjoyable because it provides an insightful and succinct explanation of the often vague concept of “the cloud”. It also highlights numerous security challenges facing organizations that are shifting to emerging cloud services and technologies. What I found lacking in the article was any guidance or suggestions for addressing the security challenges posed. With that omission in mind, I thought it might be helpful to blog about a few proven security best practices and technologies that can easily be applied to cloud deployments to improve the overall security of data residing in the cloud.
One important concept presented in the aforementioned article is that a “cloud” is typically built from a wide range of solutions, including “hosted services using shared, co-located or multi-tenant resources” – a public cloud. In addition, the article mentions that “vendors are using the word [cloud] when speaking about using internal IT resources in highly virtualized, dynamic pools” – a private cloud. This distinction is important to understand because each deployment model introduces different security challenges.
When finding a solution, security best practices are fundamental, and technology is your friend. In a brief blog entry such as this, it’s hard to present solutions to every security concern raised by cloud deployments. However, a few of the key challenges presented in the above-mentioned article are discussed below.
Meeting Emerging Regulatory Challenges
There is no doubt that new regulations will continue to emerge and existing regulations will evolve to better ensure the protection of data in the cloud. In fact, just last week Twitter settled charges with the Federal Trade Commission around information security, requiring Twitter to establish an external security audit program. This suggests to me that in the future we will see expanding governance over the protection of personal information in the cloud. Now I will share some wisdom on ways to solve emerging compliance challenges in the cloud. The wisdom is… I don’t think the solution changes much from traditional networks: just continue to implement strong security best practices. Indeed, if Twitter had implemented controls around password enforcement, the use of default administrative passwords, and managing access to information, among other key security best practices, it probably would not have had an issue with the FTC. So pick a best practice – there are plenty out there to use as a starting point (e.g., COBIT, ISO, etc.) – and mature your security practices to better protect your network and its invaluable data.
Securing Virtualization of IT Infrastructure
This challenge is not just daunting; it can be a major headache, depending on how far the data has moved toward 3rd-party management and away from central control. My words of wisdom on this initiative: organizations must leverage technology and proven security best practices to their best advantage and ensure that 3rd-party providers are contractually required to support those technologies and controls (see the next section on the latter topic). So how can companies leverage technologies and best practices in the private cloud? It’s actually not that hard. A virtualized server still acts as a server and can have all the same protections as a physical server (e.g., anti-virus, host-based intrusion detection, proper identity and access management, and the like). When implemented, these layered security technologies will greatly improve the security posture of data stored on those servers. In addition, the logs and events from those virtualized servers can be collected, correlated, and analyzed to detect security exploits and policy violations, as sketched below. The best practices for managing the security of virtualized servers should really be no different from managing the security of physical servers: ensure the infrastructure supports the best practice of collecting, analyzing, and correlating security events – something provided by a proper security information and event management (SIEM) solution.
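To illustrate what “collect, correlate, and analyze” means in practice, here is a minimal sketch of a correlation rule over events gathered from virtualized servers. The event format and the threshold are illustrative assumptions, not a specific product’s rule language.

```python
# Minimal, hypothetical sketch of SIEM-style correlation across events
# collected from virtualized servers.

from collections import Counter

events = [
    {"host": "vm-web-01", "type": "auth_failure", "user": "admin"},
    {"host": "vm-web-01", "type": "auth_failure", "user": "admin"},
    {"host": "vm-web-01", "type": "auth_failure", "user": "admin"},
    {"host": "vm-web-01", "type": "auth_success", "user": "admin"},
    {"host": "vm-db-02",  "type": "auth_failure", "user": "root"},
]

THRESHOLD = 3  # repeated failures followed by a success suggests brute force

failures = Counter(
    (e["host"], e["user"]) for e in events if e["type"] == "auth_failure"
)
successes = {
    (e["host"], e["user"]) for e in events if e["type"] == "auth_success"
}

for key, count in failures.items():
    if count >= THRESHOLD and key in successes:
        host, user = key
        print(f"ALERT: possible brute-force success for {user} on {host}")
```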
With that understood, be aware that there are potential blind spots in a cloud where virtual technology must evolve to better support the security of virtual systems. For example, vendors that market virtual technologies should provide sufficient event logging to allow those who manage the cloud infrastructure to assess relevant activity within the virtual machine, as well as between the virtual machine and the physical network infrastructure. As another example, a VM hypervisor should be able to report when VMs are created, taken down, modified, and so on (a brief sketch of consuming such events follows). Another blind spot to note is the ability to monitor network activity amongst virtual hosts on the same hypervisor. In this area, the vendors that market virtual technologies must allow the virtual network to be tapped and monitored. As an example, VMware provides a virtual tap that enables other products to monitor network traffic within the virtual infrastructure. Q1 Labs has a unique product in this area, called the vFlow Virtual Activity Monitor, which allows network activity monitoring within a VMware virtual machine.
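As a rough illustration of what consuming hypervisor lifecycle events might look like, here is a minimal sketch that tracks VM creation and teardown from a hypothetical JSON event feed; real hypervisors expose this through their own APIs and log formats.

```python
# Hypothetical sketch: consuming hypervisor lifecycle events to track when
# VMs are created, modified, or removed. The JSON feed is an illustrative
# assumption, not a real hypervisor's API.

import json

# Example lines as they might arrive from a hypervisor's event log.
event_log = [
    '{"time": "2010-07-01T10:02:11Z", "event": "vm_created", "vm": "vm-web-03"}',
    '{"time": "2010-07-01T10:05:42Z", "event": "vm_modified", "vm": "vm-db-02"}',
    '{"time": "2010-07-01T10:09:05Z", "event": "vm_destroyed", "vm": "vm-web-03"}',
]

INTERESTING = {"vm_created", "vm_modified", "vm_destroyed"}

inventory = set()
for line in event_log:
    e = json.loads(line)
    if e["event"] not in INTERESTING:
        continue
    if e["event"] == "vm_created":
        inventory.add(e["vm"])
    elif e["event"] == "vm_destroyed":
        inventory.discard(e["vm"])
    # Forwarding each event to the SIEM closes the virtual-layer blind spot.
    print(f'{e["time"]} {e["event"]} {e["vm"]} -> forward to SIEM')

print("current VM inventory:", sorted(inventory))
```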
Securing Externally Hosted Services
To me, this one seems like a no-brainer. If your 3rd-party cloud providers can’t ensure the level of access needed to properly protect the hosted data, then look for another provider or keep the services in house. If you do go with a 3rd party, make sure the contract for services includes proper access for monitoring the environment. This comes back to what the author of the aforementioned article means when he says, “The harder things are to manage, the harder they are to secure”. If your 3rd party does not provide management access, you’ll never be able to ensure the integrity of the data on the managed systems.
To summarize, a few key thoughts when looking to secure information in the cloud:
- Implement well-accepted security best practices, which also define your expectations for 3rd-party providers
- Leverage appropriate technologies to your advantage (Log Management, SIEM, virtual network activity monitoring, etc.)
- Ensure the ability to monitor and correlate information in the cloud; this includes making sure 3rd party providers offer access to required management and security functions
High Availability (HA) SIEM and Log Management functionality addresses the demand for scalable solutions that enable network and security teams to process, correlate and store more logs, events and network activities without interruption from network failures, system failures or scheduled downtime. However, not all HA solutions are the same, and when selecting a high availability option it is important that you consider a few critical components:
- Full disk synchronization without costly manually-implemented software and storage solutions
- Automated fail-over between the primary and the high availability appliance in the event of primary appliance or network failures
- Automated connectivity tests to all appliances within a distributed deployment, including network devices such as switches and routers, to determine when or whether a fail-over should occur (see the sketch after this list)
- Elimination of third-party fault management products – HA should be integrated into your SIEM and Log Management solution
- Built-in disk synchronization that replicates all data, such as configuration, logs, flows, and reports, in real time from the primary appliance to the secondary high availability appliance
- No database clustering required – complex database clustering and third-party fail-over management products are expensive to deploy and maintain; HA should utilize a streamlined architecture that can significantly reduce or eliminate additional setup and professional services costs
- Automatic synchronization between the primary and additional HA appliances
- Flexible deployment options that allow organizations to add additional high availability appliances on an as-needed basis
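To make the connectivity-test bullet above concrete, here is a minimal sketch of the kind of probing that can inform a fail-over decision. The hosts, ports, and decision rule are illustrative assumptions of mine, not a specific HA implementation.

```python
# Minimal, hypothetical sketch of the connectivity checks behind automated
# fail-over. Hosts, ports, and the decision rule are illustrative.

import socket

PRIMARY = ("qradar-primary.example.com", 443)
PEERS = [
    ("core-switch-1.example.com", 22),
    ("edge-router-1.example.com", 22),
]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """A simple TCP connectivity probe."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def should_fail_over() -> bool:
    """Fail over only when the primary is down but the network path is not;
    if the peers are also unreachable, the problem is likely the network,
    and failing over would not help."""
    if reachable(*PRIMARY):
        return False
    return any(reachable(h, p) for h, p in PEERS)

if should_fail_over():
    print("primary unreachable, network alive: promote HA appliance")
else:
    print("no fail-over needed")
```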
Remember, not all HA solutions are the same, and your ability to deploy and scale your SIEM and Log Management solution – as well as protect your network – can depend on having the right platform for your organization.