Posts Tagged ‘correlation’
You know that QRadar SIEM excels at collecting, correlating and reporting on unusual activity, but have you ever wondered how it performs user activity monitoring? Or what value this would have for your organization?
In this new 8-minute YouTube demo, we look at how the integration of identity and access management data enables real-time user activity monitoring. We show how QRadar can identify risky or abnormal activity of user groups such as employees with privileged access, contractors, or terminated employees.
What value would user activity monitoring provide? You might care about a number of use cases:
- A terminated employee taking action on your network (if terminated, how is he or she still on your network?)
- A privileged employee accessing databases she doesn’t usually access (is she performing malicious activity? Was her account compromised by an attacker? Or did her responsibilities just change?)
- An employee from one geography, who does not travel for business, performing activity in a different geography (was his account taken over?)
- A contractor accessing a database or application he doesn’t require for his job (can he be trusted? Do his actions require closer monitoring?)
- And many more examples specific to your business.
Without a SIEM solution that can correlate identity and access management data with network activity in real time, most organizations would miss these risks. But QRadar provides the visibility to know whenever a user performs risky or abnormal activity. Whether you want to be alerted to security and risk incidents in real time or view automated reports periodically, QRadar makes it easy to take a proactive stance toward user risks and improve your security posture.
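The logic behind use cases like these is straightforward once identity data sits alongside network events. Here is a hypothetical sketch of that kind of correlation check; QRadar's actual rules are configured in its rule engine, and all the names and fields below are illustrative assumptions, not product APIs:

```python
# Identity data, e.g. exported from an IAM or HR system (hypothetical feed)
IDENTITY = {
    "jsmith": {"status": "terminated", "group": "engineering", "home_geo": "US"},
    "adoe":   {"status": "active", "group": "dba", "home_geo": "US"},
}

def check_event(event):
    """Return an alert reason if the event contradicts the identity data."""
    user = IDENTITY.get(event["user"])
    if user is None:
        return "activity by unknown user"
    if user["status"] == "terminated":
        return "terminated employee active on the network"
    if event["geo"] != user["home_geo"]:
        return "activity from unexpected geography"
    return None  # nothing abnormal about this event

# A VPN login by a terminated employee trips the first check
alert = check_event({"user": "jsmith", "geo": "US", "action": "vpn_login"})
```

The point of the sketch is simply that neither data source alone raises an alarm: the login event looks normal without the identity feed, and the identity feed is inert without the event stream.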
For more information, visit the Q1 Labs Resource Center today.
What mysteries lie unsolved in the mounds of unstructured data in our world? What value is there in standardizing data, as the World Health Organization is attempting to do with medical service codes?
In his latest contribution to Security Week, Chris Poulin asks these questions and delves into the value normalization could bring to data, especially in a security context. Imagine if event data followed a standard classification system instead of being a mish-mash of vendor-specific formats invented by individual software developers. Could event data then be used to your advantage more easily?
“There are already taxonomies for classifying vulnerabilities in the form of the Common Vulnerabilities and Exposures (CVE) database and Open Source Vulnerability Database (OSVDB), but not so with events. Every vendor creates their own log formats, and many vendors have many formats, perhaps from acquiring multiple software applications or simply not having a development standard. In many cases the software developers just make up their own events, following neither a prescribed format for the fields nor the text within the fields. This makes parsing and categorizing events from a wide range of vendors difficult, and yet it’s a critical undertaking: normalization is the foundation of cross-system data mining and correlation.
There are a couple of main strategies for dealing with the lack of event standardization:
• Store it, perhaps making a best effort to parse the data into common, or normalized, fields, and wrap a flexible search engine around it;
• Invest significant effort into parsing and normalizing the data.
The first is the simpler of the two but is largely relegated to post-event analysis; the latter requires more effort but lends itself to real-time correlation and early threat detection. The difference is log management vs. SIEM.”
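As a rough illustration of the second strategy, per-vendor parsers can map raw events onto one set of common fields, so that the same underlying event looks identical downstream no matter which product reported it. Both "vendor" formats and the normalized field names below are invented for the example:

```python
import re

def parse_vendor_a(line):
    """Vendor A (invented): syslog-style 'DROP src=... dst=... proto=... dport=...'."""
    m = re.search(r"(DROP|ACCEPT) src=(\S+) dst=(\S+) proto=(\S+) dport=(\d+)", line)
    if not m:
        return None
    action, src, dst, proto, dport = m.groups()
    return {"action": {"DROP": "deny", "ACCEPT": "allow"}[action],
            "src": src, "dst": dst, "proto": proto.lower(), "dport": int(dport)}

def parse_vendor_b(line):
    """Vendor B (invented): CSV 'timestamp,vendor,action,src,dst,proto,dport'."""
    parts = line.strip().split(",")
    if len(parts) != 7:
        return None
    _, _, action, src, dst, proto, dport = parts
    return {"action": action, "src": src, "dst": dst,
            "proto": proto.lower(), "dport": int(dport)}

def normalize(line):
    """Try each vendor parser until one yields normalized fields."""
    for parser in (parse_vendor_a, parse_vendor_b):
        event = parser(line)
        if event:
            return event
    return None  # unparsed: fall back to store-and-search

# Two raw formats, one normalized event
a = normalize("Mar 01 10:15:02 fw01 DROP src=10.0.0.5 dst=192.168.1.9 proto=TCP dport=445")
b = normalize("2012-03-01T10:15:03Z,vendorB,deny,10.0.0.5,192.168.1.9,tcp,445")
```

Once `a` and `b` come out as the same dictionary, cross-system correlation rules can be written once against the common fields instead of once per vendor format — which is exactly why Poulin calls normalization the foundation of SIEM.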
Click here to read the full article, “Working toward a Unified Security Model.” To learn more about the difference between log management and SIEM, and to gain an understanding of what a next generation security intelligence solution can bring to your organization, read this whitepaper, “The IT Executive Guide to Security Intelligence: Transitioning from SIEM to Total Security Intelligence.”
Welcome to the final part of our “customer use perspective” series, where one of our biggest retail customers talks about using network flow data to add a whole new dimension to their security posture. When we talk about network flow, it’s not limited to the typical formats – i.e. NetFlow, J-Flow and sFlow. While standard network flow is useful for establishing a general understanding of network conversations, it doesn’t provide deep visibility into network activity beyond basic network characteristics such as IP address and protocol transport.
To help fill this gap, there is QRadar QFlow, which provides Layer 7 visibility (application layer) and stateful classification of applications and protocols such as voice over IP (VoIP), social media, ERP, database, and thousands of other protocols and applications. While this information is powerful on its own, it becomes extremely useful when correlated with network and security events as part of a SIEM and Log Management solution.
Watch the clip to hear how our customer is using QRadar QFlow in their environment:
What can you do with QRadar QFlow?
- Detect zero-day threats through traffic profiling
- Comply with policy and regulatory mandates via deep analysis of application data and protocols
- Monitor social media traffic
- Perform advanced incident analysis by correlating flow and event data
- Continuously profile assets
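Traffic profiling of the kind listed above boils down to comparing current activity against a learned baseline. A minimal sketch of the idea — the statistics, thresholds, and data shapes are illustrative assumptions, not QFlow's actual algorithm:

```python
from statistics import mean, stdev

def build_baseline(history):
    """Summarize per-application byte counts from prior observation windows."""
    return {app: (mean(vals), stdev(vals)) for app, vals in history.items()}

def anomalies(baseline, window, threshold=3.0):
    """Flag apps whose traffic deviates more than `threshold` std devs from baseline."""
    flagged = []
    for app, bytes_seen in window.items():
        if app not in baseline:
            flagged.append((app, "new application on the network"))
            continue
        mu, sigma = baseline[app]
        if sigma and abs(bytes_seen - mu) / sigma > threshold:
            flagged.append((app, "traffic volume anomaly"))
    return flagged

# Baseline from four earlier windows, then one current window to score
baseline = build_baseline({"http": [100, 110, 90, 105], "dns": [10, 12, 11, 9]})
flagged = anomalies(baseline, {"http": 5000, "dns": 10, "irc": 50})
```

A never-before-seen protocol (here, `irc`) is interesting precisely because no signature exists for it yet — which is how profiling can surface zero-day activity that signature-based tools miss.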
Learn more about QRadar QFlow and be sure to listen to the full webcast to hear more about how our customer is utilizing the QRadar Security Intelligence Platform to help meet compliance regulations, centralize logs, correlate network events, and detect anomalies that other solutions might miss.
Welcome to the fourth installment of our latest “customer use perspective” series, featuring a large Q1 Labs customer who is a well known luxury brand in the retail industry. If you missed the first three, you can find them all here.
In this part of the series, our customer covers a few tips, tricks, and best practices when rolling out QRadar.
Below are a few of the high-level topics addressed by our customer, and a synopsis of their thoughts on each.
After you install the appliances, progress through the interactive startup menu, and set up IP addresses, DNS entries, and so on, have your network hierarchy ready to go before roll-out for a quicker deployment.
Specific to reporting, there are a number of preset templates. However, it’s simple to create a report on any type of data you want to focus on.
Tech support will help you tweak and tune your installation, whether by phone or via a secure tunnel. Our customer greatly appreciated the secure tunneling, which got their requests completed as quickly as possible.
The last part of this series will wrap up with a focus on network flow, which can vastly improve your ability to detect anomalies. Until then, watch the first three videos in the series and check out the full on-demand webinar.
I was explaining our correlation and analytics engine the other day and it reminded me that much of the data analysis we perform is modeled on the judicial system. In fact, we originally called our correlation capability the Judicial Systems Logic, and still today we call the analysis process that runs within our product “The Magistrate”. Now in the early days, certain analysts opined that this analogy was a little contrived, so over time we dropped it… shame on us; I still think it makes all sorts of sense, particularly as more and more people realize the need for greater security intelligence in their operations.
When customers feed application, network, identity, vulnerability and security data into QRadar, the Magistrate weighs all the different evidence from the various product witnesses. Each witness and its associated evidence are judged according to credibility, severity and relevance, and all of these weights factor into the creation and observation of an offense. In this virtual courthouse, an offense is an attack against a network or infrastructure, and each offense has a different magnitude. The magnitude, represented on a scale of 0-10, is the result of combining the three different measurements as they apply to monitored information.
- Credibility — Credibility indicates the integrity or validity of evidence as determined by the credibility rating from devices reporting the individual security events. The credibility can increase as multiple sources report the same event.
- Severity — Severity indicates the amount of threat an attacker poses in relation to how prepared the target is for the attack
- Relevance — Relevance determines the significance of an event or offense in terms of how the target asset has been valued within the network
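How the three measurements combine into a single 0-10 magnitude isn't spelled out here, but a simple weighted average gives the flavor of the verdict; the function name, weights, and rounding below are assumptions for illustration, not QRadar's actual scoring formula:

```python
def magnitude(credibility, severity, relevance, weights=(1.0, 1.0, 1.0)):
    """Combine the three 0-10 measurements into one 0-10 offense magnitude.

    Illustrative weighted average only; the product's real weighting is
    more involved and tuned per deployment.
    """
    scores = (credibility, severity, relevance)
    for s in scores:
        if not 0 <= s <= 10:
            raise ValueError("each measurement is on a 0-10 scale")
    total = sum(w * s for w, s in zip(weights, scores))
    return round(total / sum(weights), 1)

# Highly credible, highly relevant evidence of a moderate threat
verdict = magnitude(credibility=8, severity=6, relevance=10)
```

The `weights` parameter is where a deployment could express that, say, relevance to its own valued assets should count double when handing down a verdict.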
Our product, deployed at 1,800 customers worldwide, is ultimately helping to deliver judgements on activity surfaced from those customer environments. The judgements may be driven mostly by the out-of-the-box content Q1 Labs delivers, or through customized rules (or rulings) from the customer and its security partners.
You tell me: isn’t the judicial system analogy easier to explain to your CEO than “statistical, anomaly, rules-based, flux-capacitor-driven correlation”?
(btw, we do all of that too… well, not the flux capacitor bit!)