The Right Ingredients For Staying Ahead of The Bad Guys


One of the common threads you hear about in major data breaches these days is that the victim’s security team had alerts or events that should have clued them in to the fact that an attack was underway. In today’s complex security infrastructures it’s not unusual for security operators and analysts to receive tens of thousands of alerts per day! Security monitoring and incident response need to move beyond a basic rules-driven, eyes-on-glass SIEM capability to a big data and data science solution. I frequently speak with customers about how IT Security needs to handle far more information than current SIEM tools can support, and one question that always comes up is “what information needs to be collected and why?” So here we go.

To start with, you still need to collect all of those alerts and events from your existing security tools. Maintaining eyes-on-glass analysis of each individual alert from every tool isn’t feasible, but a security analytics tool can correlate those events into groups of related activities, helping an analyst understand the potential impact of a whole sequence instead of having to slice and dice the events manually.
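To make that concrete, here’s a minimal sketch (in Python, with an invented alert schema; the field names and time window are assumptions, not any particular SIEM’s format) of the kind of grouping an analytics tool would automate: clustering alerts that touch the same host and arrive close together in time, so the analyst sees one related sequence instead of a pile of individual events.

```python
# Minimal sketch: cluster raw alerts into related-activity groups.
# Alerts on the same host that arrive within WINDOW of the previous
# alert are treated as one sequence. Field names are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)

def correlate(alerts):
    """Return lists of alerts that involve the same host, gap <= WINDOW."""
    by_host = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        by_host[alert["host"]].append(alert)

    groups = []
    for host_alerts in by_host.values():
        current = [host_alerts[0]]
        for alert in host_alerts[1:]:
            if alert["timestamp"] - current[-1]["timestamp"] <= WINDOW:
                current.append(alert)
            else:
                groups.append(current)
                current = [alert]
        groups.append(current)
    return groups

alerts = [
    {"host": "web01", "timestamp": datetime(2014, 7, 1, 9, 0), "signature": "port scan"},
    {"host": "web01", "timestamp": datetime(2014, 7, 1, 9, 10), "signature": "brute-force login"},
    {"host": "db02", "timestamp": datetime(2014, 7, 1, 13, 5), "signature": "sql injection attempt"},
]
for group in correlate(alerts):
    print([a["signature"] for a in group])
```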

The second type of information is infrastructure context: what’s in the environment, how it’s configured, how it’s all related and what its impact is. The analytics system needs to understand which applications are running on which servers, connected to which networks and which storage. With access to these relationships, the analytics tool can identify the broad-based impact of an attack on a file server by understanding all of the applications that access that file server, and weight the alert accordingly. Which brings up another critical point: assets need to be classified based on their potential impact to the organization (aka security classification). If the tool identifies suspicious sequences of activity on both a SharePoint site used to exchange recipes and an Oracle database containing credit card numbers, but doesn’t understand the relative value of each impacted asset, it can only present both alerts as being of equal impact and let the operator decide which one to handle first. So a consolidated, accurate, up-to-date and classified system-of-record view of your environment is critical.
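As a rough illustration of that weighting (the asset names, classifications, multipliers and dependency counts below are invented for the example, not a product feature), the same suspicious activity should score far higher on the cardholder database than on the recipe site:

```python
# Minimal sketch of impact weighting: identical alert scores are
# re-weighted by the asset's security classification and by how many
# applications depend on it. All values here are illustrative assumptions.
ASSET_CLASSIFICATION = {
    "sharepoint-recipes": {"classification": "low", "dependents": 3},
    "oracle-cardholder-db": {"classification": "restricted", "dependents": 42},
}

CLASSIFICATION_WEIGHT = {"low": 1, "internal": 2, "confidential": 4, "restricted": 8}

def prioritize(alert_score, asset_id):
    asset = ASSET_CLASSIFICATION.get(asset_id, {"classification": "internal", "dependents": 1})
    weight = CLASSIFICATION_WEIGHT[asset["classification"]]
    # More dependent applications means a broader blast radius.
    return alert_score * weight * (1 + asset["dependents"] / 10)

print(prioritize(50, "sharepoint-recipes"))    # low-value asset
print(prioritize(50, "oracle-cardholder-db"))  # high-value asset scores far higher
```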

Event logs from all of those infrastructure components are the third type of information; not just security events but ‘normal’ activity events as well. This means all possible event logs from operating systems, databases, applications, storage arrays, etc. Given that targeted attacks today can almost always succeed in getting into your infrastructure, these logs can help the analytics tool identify suspicious activity occurring inside your infrastructure, even when the individual events don’t fall into the traditional bucket of security events. Here’s an example: a storage administrator makes an unscheduled snapshot of a LUN containing a database with sensitive data on a storage array, then mounts it on an unsecured server and proceeds to dump the contents of the LUN onto a USB device. The storage array logs show that someone made an unauthorized complete copy of all of your sensitive data, but if you weren’t collecting and analyzing the logs from that storage array you would never know it happened.
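Here’s a sketch of what hunting for that snapshot-mount-copy sequence in the array’s audit log could look like. The action names and log fields are hypothetical, since every array has its own audit schema; the point is that the pattern only emerges when those ‘normal’ logs are collected and analyzed:

```python
# Minimal sketch: scan storage-array audit logs for the
# snapshot -> mount -> bulk-copy sequence described above.
# Action names and log fields are hypothetical assumptions.
SUSPICIOUS_SEQUENCE = ["snapshot_create", "snapshot_mount", "bulk_read"]

def find_suspicious_sequences(log_entries):
    """log_entries: iterable of dicts with 'user', 'action', 'lun', 'timestamp'."""
    progress = {}   # (user, lun) -> index of the next expected action
    hits = []
    for entry in log_entries:
        key = (entry["user"], entry["lun"])
        expected = progress.get(key, 0)
        if entry["action"] == SUSPICIOUS_SEQUENCE[expected]:
            progress[key] = expected + 1
            if progress[key] == len(SUSPICIOUS_SEQUENCE):
                hits.append(key)      # full sequence observed for this user/LUN
                progress[key] = 0
    return hits

logs = [
    {"user": "storage_admin", "lun": "LUN-17", "action": "snapshot_create", "timestamp": "02:10"},
    {"user": "storage_admin", "lun": "LUN-17", "action": "snapshot_mount", "timestamp": "02:14"},
    {"user": "storage_admin", "lun": "LUN-17", "action": "bulk_read", "timestamp": "02:20"},
]
print(find_suspicious_sequences(logs))   # [('storage_admin', 'LUN-17')]
```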

The fourth type of information a security analytics tool needs is threat intelligence: what the bad guys are doing in the world outside of your environment. A comprehensive threat intelligence feed into the security analytics tool will allow it to identify attempted communications with known command-and-control systems or drop sites, new attack tools and techniques, recently identified zero-day vulnerabilities, compromised identities and a host of other potentially relevant information. A subscription-based feed is a great way to get this.
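As a simple illustration of how that feed gets used (the indicator values and connection records below are made up, and a real deployment would consume a subscription feed rather than a hard-coded list), the analytics tool is essentially matching observed activity against known-bad indicators:

```python
# Minimal sketch: flag outbound connections whose destination appears in a
# threat intelligence indicator set. Indicators and connection records are
# illustrative; a real deployment would ingest a subscription feed.
KNOWN_BAD = {
    "c2": {"203.0.113.45", "198.51.100.7"},       # command-and-control hosts
    "dropsite": {"files.example-badhost.test"},   # exfiltration drop sites
}

def match_indicators(connections):
    alerts = []
    for conn in connections:
        for category, indicators in KNOWN_BAD.items():
            if conn["dest"] in indicators:
                alerts.append((conn["src"], conn["dest"], category))
    return alerts

connections = [
    {"src": "10.1.4.22", "dest": "203.0.113.45"},
    {"src": "10.1.4.23", "dest": "93.184.216.34"},
]
print(match_indicators(connections))   # [('10.1.4.22', '203.0.113.45', 'c2')]
```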

The final type of information an analytics tool needs is network packets. Being able to identify a sequence of events that points to an infected server is only the first step; the analyst then needs to determine when the infection occurred and go back and replay the network session that initiated it to identify exactly what happened. Think in terms of a crime investigation: with a lot of effort and time the CSIs may be able to partially piece together what occurred based on individual clues, but being able to view a detailed replay of the network activities that led up to the infection is like having a complete video recording of the crime as it happened. Again, the goal is to provide the analyst and incident responder with complete information when the alert is raised instead of having to spend hours manually digging for individual bits.
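For illustration, here’s a sketch of the ‘go back and find the session’ step, assuming the capture platform exposes session records with source, destination, start time and a pointer to the stored packets (that record shape is an assumption, not any specific product’s API):

```python
# Minimal sketch: once the infection timestamp is known, pull the captured
# sessions that bracket it so the analyst can replay them.
from datetime import datetime, timedelta

def sessions_around(sessions, host, infection_time, margin=timedelta(minutes=10)):
    """Return captured sessions touching `host` within `margin` of the infection."""
    start, end = infection_time - margin, infection_time + margin
    return [
        s for s in sessions
        if host in (s["src"], s["dest"]) and start <= s["start_time"] <= end
    ]

sessions = [
    {"src": "10.1.4.22", "dest": "203.0.113.45", "start_time": datetime(2014, 7, 1, 9, 2), "pcap": "cap_0001.pcap"},
    {"src": "10.1.4.22", "dest": "10.1.9.9", "start_time": datetime(2014, 7, 1, 14, 0), "pcap": "cap_0412.pcap"},
]
hits = sessions_around(sessions, "10.1.4.22", datetime(2014, 7, 1, 9, 5))
print([s["pcap"] for s in hits])   # ['cap_0001.pcap']
```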

The volume of information and the amount of effort necessary to quickly identify and respond to security incidents in today’s environment are huge, which is why big data and data science-based tools are absolutely critical to staying ahead of the bad guys.

 

John McDonald
John McDonald is a Senior Architect in EMC's Trust Solutions Group, where he is responsible for developing and communicating trust-based solutions that encompass all of EMC's, RSA's and VMware's products. He has over 30 years of experience in the IT industry in general and IT Security in particular, and has worked extensively as a consultant, developer and evangelist across all industries and virtually all major areas of IT and security technology. He has spoken at dozens of industry and vendor IT and Security events, and has written over 20 whitepapers for EMC and RSA. John is also a CISSP and has held certifications in several other areas, including disaster recovery, Microsoft technology and project management.

IT’s New Dirty Little Secret

A colleague of mine recently came across an article I wrote when I was doing some consulting work for a data protection company nearly 10 years ago.

While it feels more than a lifetime ago that I wrote that piece, as I read through it, it struck me just how little some things have changed. It’s as if time has stood still… creating a pocket of inertia.

In fact, with only a few product/technology updates, a new title and a July 2014 time stamp, the piece could run today, likely without even an eyebrow raised. Heck, I’d go so far as to wager that if the article were to run, more than a few would chime in on the tape versus disk theme that runs through it.

Heidi Biggar

Marketing and IT Consultant, Data Protection and Availability Division at EMC Corporation
I’m often asked how a political science major at Tufts wound up in the IT world, covering backup, storage, virtualization and cloud of all things. Truth is, it’s really a love for learning, a need to understand the “bigger picture” and a desire to share that view with others that’s steered my path over the past 20 years, from campaign manager to editor, analyst and marketer. After hours, you’ll find me hanging with family, running 10ks through Peachtree City’s 90 miles of cart paths, watching football or reading. I’m a New England transplant enjoying life in the South. In my previous life, I also blogged for ComputerWorld, Enterprise Strategy Group and Hitachi Data Systems, but The Backup Window is my baby. It's been great watching it evolve.

VPLEX and RecoverPoint Integration with XtremIO: A Customer Case Study

It’s no secret that one of my favorite products within EMC is VPLEX. VPLEX is a product that means different things to different people. Some look at it as a data migration solution, while others see it in its true flash and glory: a distributed cache that virtualizes your underlying storage and provides an active-active site topology. For example, you can have Site A in NY distributed to Site B in a metro location (up to 10ms latency for VMware HA and vMotion) and have simultaneous read/write access to the same data across the two data centers.

Ashish Palekar
I have been around the storage market throughout my career. It started with learning Fibre Channel and iSCSI from the ground up and contributing to the standards body. My next gig was building processors at Trebia Networks to mediate conversion between FC and iSCSI. From there, I joined EMC. At EMC, I have focused on storage virtualization in various roles: developer, dev manager and product manager. My primary passion these days is helping our customers, partners and engineers understand why the products we are building are game changing. Equally importantly, I am focused on helping us build a business around our products and staying ahead of the market. And that means looking a little beyond the next curve and dreaming about all that can be. And you know what, it is a lot of fun!

The DPArmy live from “Redefine Possible” in London

So the Doctor landed at the Old Billingsgate Market in London on Tuesday to help EMC reveal a number of product announcements. For most, the main news of the day was probably the announcement that the existing high-end storage array VMAX would be replaced with VMAX3. The updated array also includes an updated and rebranded version of the operating system, called HYPERMAX OS.

Mark Galpin

EMEA Product Marketing, Data Protection and Availability Division
As a product marketing lead based in Guildford, Surrey, I'm often seen presenting to EMC’s partners and end users at various events across Europe. I have over 20 years’ experience in the storage market, largely gained in the financial and legal sectors, including PaineWebber, part of UBS, and Clifford Chance, the international legal practice, where I was the storage manager for a number of years. But I've also had product marketing stints at Quantum and previously at EMC. I'm married with two children and live in Guildford, Surrey.

Optimized Isilon Backup and Recovery … It’s a Snap!


EMC Avamar NDMP Acceleration plus Isilon Fast Incremental

EMC’s new Fast Incremental technology revolutionizes backup and recovery for Isilon Scale-Out NAS. This new feature, coupled with the EMC Avamar NDMP client 7.1, performs backups up to 3x faster.


Phil George

Sr. Product Marketing Manager, Data Protection and Availability Division at EMC
Working with customers and partners (like VMware) to develop leading backup solutions makes every day very interesting; helping them optimize their backup architectures for virtualized environments is what really energizes me. Over the past 25 years, I’ve held senior engineering, marketing and sales roles within the technical software industry. This gives me a good vantage point to recognize technical challenges, see emerging trends and propose new solutions. I hold a BSEE from Cornell University and a Masters in Computer Engineering from Boston University. I currently reside with my wife and two children in Massachusetts.