EMC NetWorker: Setting Boundaries

Deanna Hoover


I spent most of my career (25+ years) as a systems administrator with responsibilities for storage architecture. But after many years of supporting production environments and becoming burned out by the 7x24 on-call schedule, I made the move into presales and then technical marketing. Life is good. I am able to leverage my customer and sales experience by helping my team understand the customer’s perspective and challenges. If you have questions, ask them here or on the EMC NetWorker Online Community. I'd love to chat! My life away from work consists of playing in the great outdoors - I am an adrenaline junkie, triathlete, mountaineer and techno-girl.

By Deanna Hoover, Sr. Technical Marketing Manager, EMC Backup Recovery Systems

Why should you honor boundaries?

Last year I was skiing with a group that had lost a 13-year-old friend in a ski accident. The 13-year-old was skiing out of bounds with an adult when they set off an avalanche. Yes, the adult encouraged the 13-year-old to take the risk of skiing the untracked powder that was marked “out of bounds”. Basically, the adult was willing to risk the life of a child simply to reach his own personal goal. In the end, the adult survived, and the 13-year-old was not so lucky.

It is very difficult for a ski resort to guarantee that skiers are fully isolated from out-of-bounds areas that are dangerous and/or prone to avalanches.

Prior to NetWorker 8.0, customers faced a challenge similar to the ski resorts’ when it came to defining boundaries for NetWorker data and resources.

NetWorker 8.0 is a great fit for any customer or service provider that needs to isolate data and resources. The new feature is referred to as the multi-tenancy facility.

How will a customer or service provider benefit by using the multi-tenancy facility?  Let me start by giving you an example: A NetWorker global administrator defines logical data zones for each tenant. Each logical data zone can then be assigned a tenant administrator.  A tenant administrator is responsible for the NetWorker configuration and resources within their appropriate logical data zone.  No other tenant or user has the ability to access data or resources outside of their assigned logical data zone.
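
To make the boundary idea concrete, here is a minimal sketch of how that rule could be modeled. This is not NetWorker code; the class and method names are hypothetical, and it only illustrates the principle that a tenant administrator can touch resources in their own logical data zone and nothing else.

```python
# Hypothetical sketch of tenant-scoped access control; not NetWorker code.
class LogicalDataZone:
    def __init__(self, name):
        self.name = name
        self.resources = {}              # resource name -> configuration

class TenantAdmin:
    def __init__(self, user, zone):
        self.user = user
        self.zone = zone                 # the single zone this admin may manage

    def configure(self, resource, settings):
        # Changes are confined to the admin's own logical data zone.
        self.zone.resources[resource] = settings

    def read(self, zone, resource):
        # Any attempt to reach another tenant's zone is refused outright.
        if zone is not self.zone:
            raise PermissionError(f"{self.user} cannot access zone {zone.name}")
        return zone.resources[resource]

# Two tenants, each invisible to the other.
zone_a, zone_b = LogicalDataZone("tenant-a"), LogicalDataZone("tenant-b")
admin_a = TenantAdmin("alice", zone_a)
admin_a.configure("backup-policy", {"schedule": "daily"})

try:
    admin_a.read(zone_b, "backup-policy")
except PermissionError as err:
    print(err)                           # alice cannot access zone tenant-b
```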

It is important to note that the multi-tenancy facility also takes advantage of the broader enhancements in NetWorker 8.0 – to name a few:

  • An architecture designed for greater performance, efficiency and scale
  • Deeper integration with EMC Data Domain — reducing backup times by 50% or more
  • Expanded support for Microsoft Applications
  • Optimized user experience and wizard-driven management

In summary, a few of the benefits you will notice with the NetWorker 8.0 multi-tenancy facility include:

  • Isolation of end users and their data
  • Autonomy for the end user/administrator
  • Simplified chargeback for ASPs
  • Support for the new authentication and authorization enhancements in NetWorker 8.0

You can learn more about the NetWorker 8.0 multi-tenancy facility by watching the overview video.


Details on the NetWorker 8.0 launch can be found on the EMC Community.

Can You See Me Now?

Tom Giuliano


Marketer and EMC Data Protection Advisor Expert
I love to listen to customers discuss their data protection challenges, their experiences and their needs, and I’ve had a lot of opportunity to do it. For the past 15 years, I’ve brought network and storage products to market through roles in sales, product management and marketing. When I’m not driving go-to-market initiatives, identifying unique and creative methods to build product awareness or launching products, you’ll likely find me cycling, skiing, boating or running. And, who knows, maybe you’ll hear some of my more interesting experiences in one of my posts from time to time.

By Tom Giuliano, Senior Product Marketing Manager, EMC Backup Recovery Systems Division

Imagine you’re on a sailboat out in the middle of the ocean.  I’m talking way, way off shore.  You can’t see land.  You’re alone but that’s OK because you have tons of boating experience – you love adventure and are up to the challenge!  You’re confident you’ve seen it all and believe you could handle any problems that may come up. 

It’s a beautiful day – the sun is out, the wind is constant – you’re just sailing along smoothly.  You’ve charted a course across the ocean and as an experienced sailor you went prepared; you have a good pair of binoculars to enhance your vision, GPS navigation to help guide your course, and lifejackets, just in case.  Periodic checks on the navigation screen show that you’re constantly on course.  And with those binoculars you can see quite a bit further than you could with your normal eyesight.  The only limitation is the horizon and the surface of the ocean.  Other than that, if it’s out there you think you’d see it coming.  Life is good.  I wish I were there with you.

Of course, all things must and do change.  First, it starts to get overcast, then cloudy, then rain…then the sun sets.  The winds build quickly and so do the waves.  The boat feels like a cork being bobbed all over the ocean.  You tighten the straps on your lifejacket.  You have to take the sails down so you don’t capsize.  The navigation system starts chirping as you go off course.  Recalculating.  And you accidentally drop your binoculars overboard (darn, they were expensive!).  You no longer know what’s out there.  Good feeling gone.

Well, now that you’re floating out in the middle of the violent sea, you have plenty of time to think about your day job as a backup administrator. Much like on your most recent adventure, you’re proactive about safety, since your job is all about protecting critical business data. You think you’ve taken the right precautions, just like in your sailing: your backup applications tell you they’re operating as you expect, and you’ve set up protection policies and configurations so backups should occur on time. You believe what each application is telling you, just like your binoculars and GPS did…until you encountered something unexpected. Something you didn’t see coming.

The unknown and unexpected will get you every time. The unexpected audit. The unexpected failed backup. The unknown configuration change. You can plan and plan and plan and plan. You can even have a contingency plan. But you can’t be everywhere and see everything all the time. It’s simply not possible. There aren’t enough hours in the day to manually check ALL components of your backup environment. You need an automated solution to keep an eye on it all and to report success or failure when the business needs to know. And, should something fail, you need to know what went wrong, when, where and how, so you can correct the issue before there is business impact.

I’m sure we can all appreciate the kinds of environments IT professionals are expected to manage on a day-in, day-out basis. Data protection environments can be very complex, using a mixture of technologies across backup, replication, straightforward file copies, etc., to manage application data right through to the archive device. And if you don’t have a tool like EMC Data Protection Advisor in your environment, what you are confronted with is a very rich set of element manager outputs. That means you will have data on each of these devices and how it is performing, but most critically you are the person responsible for pulling it all together and making sense of it across the entire path. Data Protection Advisor makes your life much easier.

Data protection management is about unifying the views from all the element managers and automating much of that manual collation of data and processes. It’s all about enhanced visibility. Those binoculars you had while sailing along smoothly on the nice sunny day were great for seeing out to the horizon, but they were of no value for showing what was going on under the surface. What about the GPS system on the sailboat, you ask? Much like in a car, it probably did a good job of identifying a path to get from point A to point B, but if you go off course it has to recalculate the route…not exactly proactive.

The bottom line is that the built-in monitoring and reporting capabilities in our backup applications do a fair job of showing us what that particular backup application has accomplished. However, they can’t tell us what the other applications have done. We need a data protection management solution to see across multiple applications, processes and configurations, let alone see changes and proactively analyze the entire environment. With a solution like Data Protection Advisor, we will automatically be alerted to a seemingly small configuration change in one of the backup application policies. It may seem inconsequential, but if that small change doesn’t allow a backup to occur (or worse), it’s VERY important! DPA gives us the ability to see where we have been and where we’re going, provides confirmation that data is truly protected, and detects issues BEFORE catastrophe occurs.
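
As a rough illustration of that kind of automated watching (a generic sketch with made-up policy data, not Data Protection Advisor itself), a monitoring job might fingerprint each backup policy and raise an alert the moment anything drifts:

```python
# Generic sketch of configuration-change detection; not Data Protection Advisor code.
import hashlib
import json

def fingerprint(policy: dict) -> str:
    # Stable hash of a policy's contents.
    return hashlib.sha256(json.dumps(policy, sort_keys=True).encode()).hexdigest()

def detect_drift(previous: dict, current: dict) -> list:
    alerts = []
    for name, policy in current.items():
        if name not in previous:
            alerts.append(f"new policy: {name}")
        elif fingerprint(policy) != fingerprint(previous[name]):
            alerts.append(f"policy changed: {name}")
    return alerts

# Made-up example: a retention tweak that could quietly undermine protection.
yesterday = {"daily-fulls": {"schedule": "02:00", "retention_days": 30}}
today     = {"daily-fulls": {"schedule": "02:00", "retention_days": 3}}
print(detect_drift(yesterday, today))    # ['policy changed: daily-fulls']
```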

So, just like sailing, being proactive in your data protection environment requires comprehensive visibility of your environment.  After all, you can’t protect against what you can’t see.  EMC Data Protection Advisor provides this insight to ensure your mission-critical business data is protected.

If you’re not a Data Protection Advisor user today, follow the EMC Data Protection Advisor Community Network space to read what others have to say. Better yet, watch the video.


Disk vs. Tape for Optimizing RPO & RTO: It’s Not Even Close

Jim O'Connor


It’s hard to believe, but I’ve been involved with Information Technology for nearly 40 years, 21 of them with EMC. Today, I’m the product marketing lead for the Disk Library for mainframe product portfolio, and in this position have the opportunity to help pioneer virtual tape solutions in the mainframe marketplace.

By Jim O’Connor, Senior Product Marketing Manager, EMC Backup Recovery Systems

When a storage disaster occurs, there are two questions that immediately come to mind: Where’s my data?! And how long will it take to restore it?! These critical questions can be answered by understanding and evaluating two specific metrics: Recovery Point Objective (RPO) and Recovery Time Objective (RTO).

  • RPO looks backward to the last backup; it’s a measure of how much updating will be required between that point and the current state.
  • RTO looks forward (eagerly) to the moment of resumed operations. When a disaster occurs, this is the one that the business line managers will be obsessing about.

So, Where’s my data? It depends.

  • How long ago was the system backed up?
  • How often are incremental backups conducted?
  • Are backups prioritized for mission-criticality?

How long will it be before we’re up and running again?

  • How long before all the data arrives at the DR site?
  • How long will it take to load the backed-up data?
  • When was my last backup (or Recovery Point)?

The answers to these questions depend upon your backup technology. Everything is related to how much transactional activity occurs after your last backup and how long it takes to get that data back after your next system failure.
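
To make those two metrics concrete, here is a small worked example with made-up timestamps: RPO is essentially the age of the last good backup at the moment of failure, and RTO is the time from the failure until the restored system is back in service.

```python
# Illustrative RPO/RTO arithmetic with made-up timestamps.
from datetime import datetime

last_backup     = datetime(2012, 7, 30, 22, 0)    # last completed backup
failure         = datetime(2012, 7, 31, 14, 30)   # moment of the disaster
service_resumed = datetime(2012, 8, 2, 9, 0)      # data restored, system back online

rpo_achieved = failure - last_backup       # updates you must recreate (or lose)
rto_achieved = service_resumed - failure   # time the business was down

print(f"RPO achieved: {rpo_achieved}")     # 16:30:00 -> about 16.5 hours of lost updates
print(f"RTO achieved: {rto_achieved}")     # 1 day, 18:30:00 -> about 42.5 hours of downtime
```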

Hours vs. Days – The Benefit of Virtual Tape Backup

A move to virtual tape backup can reduce RTO and RPO by many hours, and often days. The first thing that is eliminated is the need to physically transport information anywhere. So the 24-, 48-, 72- or 96-hour delays in transporting tape backups after backups or disasters are simply gone. When systems are backed up to virtual tape, the backup is securely encrypted and transmitted via TCP/IP to the interim or DR site. The transfer is acknowledged on arrival, so your data is exactly where it needs to be if an adverse event occurs.

In the event of a system failure, the data is already at the DR or interim site. Finding data sets is simple; each data set has its own unique VOL-SER on disk, and there is no need to mount reels on spindles. The system finds virtual tape volume #1, mounts the data set, and moves on to the next step automatically, with sub-second response time. The recovery begins immediately. The tape management system loads the files rapidly and automatically, and the system is restored in a few hours. In the world of real tape, it would normally take roughly 24 hours to get a full, restored, functioning system, and that is on top of the considerable time spent waiting for tapes to arrive from the offsite storage facility.
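
Putting rough numbers on that comparison (purely illustrative figures that reuse the ranges mentioned above, not benchmarks):

```python
# Rough, illustrative recovery-time comparison using the figures mentioned above.
physical_tape = {
    "transport_from_offsite_hours": 24,   # could be 48, 72 or 96 depending on the vault
    "mount_and_restore_hours": 24,        # reels located, mounted and read sequentially
}
virtual_tape = {
    "transport_from_offsite_hours": 0,    # the replica is already at the DR site
    "mount_and_restore_hours": 4,         # "a few hours": virtual volumes mount in sub-seconds
}

for name, phases in (("physical tape", physical_tape), ("virtual tape", virtual_tape)):
    print(f"{name}: ~{sum(phases.values())} hours to a restored, functioning system")
```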

Should you consider a virtual tape solution?

The virtual tape backup solution was devised to allow mainframe-based companies to take advantage of disk-based storage in a plug-and-play manner. The ancillary advantages are considerable as well: the prioritization of data allows users to tune their RPO and RTO, on top of the dramatic reductions in both when transportation and tape-based restoration are eliminated. All of this serves the business mission and compliance in ways that make life much easier for IT.

If you are attending Share in Anaheim (Aug. 6-10) please stop by our booth (#302) to discuss EMC’s mainframe virtual tape libraries.

Moore’s Law (Backup Administrator’s version): “Every 18 months, my pain will double.”

Stephen Manley


CTO, Data Protection and Availability Division
Over the past 15 years at both EMC and NetApp, I have traveled the world, helping solve backup and recovery challenges - one customer at a time (clearly, I need to optimize my travel arrangements!). My professional mission is to transform data protection so that it accelerates customers’ businesses. I have a passion for helping engineers pursue a technical career path (without becoming managers), telling stories about life on the road, and NDMP (yes, that’s NDMP).

For most of IT, riding technology curves is like jumping on a kid’s trampoline: dangerous, sure, but oh-so exhilarating (note: ignore trampoline manufacturer’s “weight limits” at your own peril).

Faster processors, larger disk capacities, flash storage and higher-performance networking enable new applications, drive server and storage virtualization and allow businesses to generate and analyze data 24×7. And all this puts enormous stress on the backup environment.

Bigger storage capacities don’t just seem to make backup windows feel smaller, they actually do make them smaller.  Server virtualization squeezes the resources available for the backup.  Just imagine trying to drink the Pacific Ocean through a straw (ignore the whole “salt water” thing and focus on the amount of water), and that’s the backup challenge created by the technology curves powering the rest of IT.

How will we meet backup windows in highly virtualized production environments or in the world of big data? Inevitably, there will be a discontinuity in how we run backups. Over the past 15 years, there have been two competing backup data flows. First is the dominant “backup client” approach: Backup client reads data from the server and then sends data to an intermediary server, which writes the data to a storage device (tape, optical device, disk, etc.). To meet backup windows, backup clients now leverage incrementals, synthetic fulls and source-side deduplication. Second is the “client-free” approach: Primary data owner reads data and sends stream to a storage device. This includes database and NDMP backups as well as versioned replicas.
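
In schematic form, the two data paths look something like the sketch below. Every name here is a hypothetical stub, not any vendor's API; the point is simply where the data flows.

```python
# Schematic contrast of the two backup data flows described above; all stubs are hypothetical.
class Storage:
    """Stand-in for a tape, optical, disk or virtual tape device."""
    def __init__(self):
        self.backups = []

    def write(self, data):
        self.backups.append(data)

def backup_client_flow(server_data, media_server_queue, storage):
    """'Backup client' approach: an agent reads the server's data and sends it to an
    intermediary server, which writes it to the storage device."""
    media_server_queue.append(server_data)      # client -> intermediary server
    storage.write(media_server_queue.pop())     # intermediary server -> storage

def client_free_flow(read_own_data, storage):
    """'Client-free' approach: the primary data owner (database, NAS filer, ...) reads its
    own data and streams it straight to the storage device, as NDMP backups do."""
    storage.write(read_own_data())              # data owner -> storage, no agent in the path

# Tiny demonstration with fake data.
target = Storage()
backup_client_flow(b"data read by the backup agent", [], target)
client_free_flow(lambda: b"stream produced by the data owner", target)
print(len(target.backups))                      # 2: one backup from each flow
```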

Traditionally, “client-free” backups have lacked either the features or the optimizations of “backup client” approaches (e.g., catalogue, source-side deduplication and optimized recovery workflows). The reasons for the disparity are too complex to analyze in a simple blog post (it’s about money) and may never be fully understood (it’s also about controlling your data by putting it in a vendor-specific format), but it’s enough to say that the market works in funny ways (don’t forget about the desire to have a footprint on every server in your environment). But that’s all about to change.

The inexorable increase in data growth will drive the ascension of “client-free” backup flows. The “backup client” approach is reaching its apex with source-side deduplicated backups (e.g., via Avamar or Data Domain Boost) that are stored as deduped full backups on disk. What happens on a heavily loaded ESX server that lacks the CPU cycles and I/O bandwidth to scan for changed blocks? When can you pummel the mission-critical Oracle database to identify the changed data? Will the client scan for changed files on the NAS server ever complete?

The answer is simple: The backup application must depend on the data owner to tell it what data needs protection. After all, who better than VMware, Oracle, or the NAS server to efficiently identify the data that has changed since the last backup?

To scale with the environment, backup applications and the applications and systems that own the primary data must collaborate: data owners need to efficiently identify changed data, and backup applications need to turn that changed data into first-class backups. The partnership between primary data owners and backup applications has already begun. Some common examples:

  • VMware Changed Block Tracking (CBT) — With VMware tracking the changed blocks between backups, applications like Avamar dramatically increase their source-side deduplication performance.
  • NAS incremental backups — Solutions like the Avamar NDMP Accelerator for NetApp and Celerra/VNX and CommVault-managed NetApp SnapVault transform rapidly identified changed data into full backups for long-term retention and rapid recovery.
  • Oracle’s Incrementally Updated Backups and Block Change Tracking — Solutions like Data Domain and Avamar combine Oracle’s high-performance, low-impact incremental forever backups with deduplication to securely store multiple full backups for reliable, rapid recovery.
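
All three examples follow the same pattern, sketched minimally below with hypothetical names (this is not code from any of the products above): the data owner reports only what changed, and the backup application merges those changes with the previous backup to synthesize a new full.

```python
# Minimal sketch of the collaboration pattern above; names and data are hypothetical.
def data_owner_changed_blocks(current: dict, tracked_since: dict) -> dict:
    # The primary system (hypervisor, filer, database) tracks its own changes,
    # so the backup application never has to scan everything itself.
    return {offset: data for offset, data in current.items()
            if tracked_since.get(offset) != data}

def synthesize_full(previous_full: dict, changed: dict) -> dict:
    # Incremental forever: lay the changed blocks on top of the last full backup.
    new_full = dict(previous_full)
    new_full.update(changed)
    return new_full

# Made-up example: a three-block disk with one block modified since the last backup.
disk_yesterday = {0: b"boot", 1: b"db-page-1", 2: b"db-page-2"}
disk_today     = {0: b"boot", 1: b"db-page-1b", 2: b"db-page-2"}

changes  = data_owner_changed_blocks(disk_today, disk_yesterday)   # {1: b'db-page-1b'}
new_full = synthesize_full(disk_yesterday, changes)
assert new_full == disk_today
print(f"moved {len(changes)} of {len(disk_today)} blocks")         # moved 1 of 3 blocks
```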

Backup applications will continue to orchestrate both the backup and recovery processes, but the data owner becomes an equal stakeholder in optimizing both the backup and recovery workflows.

It seems simple enough: Over time, you won’t be able to meet your backup window with the current methods – even source-side deduplication. The applications and systems that own the primary data hold the keys to meeting the backup window, by identifying the new data to protect. Backup applications must collaborate with server virtualization vendors, primary applications and primary storage systems to deliver complete solutions around “client-free” backup workflows.

But, of course, there is nothing simple about alliances between large companies. There is nothing simple about one company ceding control and influence to another. And there is nothing simple about the ramifications to your IT team, its critical vendors, and your job responsibilities.

The technology curves driving server virtualization and expanding data sets do not lead to simple answers for the backup team. But they are implacable. Backup processes will change to cope with the curves because the existing solutions will ultimately stretch, rip and then fail completely (not unlike my son’s trampoline).

How are you scaling your backups to meet your ever-compressing backup windows today? Are you feeling the pain of Moore’s Backup Administrator Law? Have you adopted any of the collaborative solutions between primary data owners and backup applications? If so, what has been your experience?


Around the World: Backup in Your Words

Alex Almeida


Technology Evangelist, Data Protection and Availability Division
My passion for technology started at an early age and has never stopped. Today, I find myself immersed in data protection. Yep, I live, breathe and tweet backup, availability and archive. In fact, nothing short of fully understanding how things work will keep me from digging deeper. But when I’m not evangelizing on the benefits of backup or technology in general, I can be spotted at a New England Revolution game, behind the lens of a camera or listening to my favorite albums on vinyl. In addition to blogging for The Protection Continuum, you can find me on the EMC Community Network. Also, I'm a member of EMC Elect 2014, and I'm active in the New England VMware User Group (NEVMUG) and the Virtualization Technology User Group (VTUG). Let's get technical!

One of the things I truly love about my job is the interaction I get to have with customers. Whether it’s at a hands-on lab at industry events like VMworld, an EMC Executive Briefing Center or some other locale, being able to talk with customers about their environments — hearing firsthand what’s working and what’s not — is a privilege, and it helps us stay on the cutting edge.

But what I particularly like about these interactions — and, I have to say, what makes my job a whole lot easier and more enjoyable — is that I get to talk about backup and recovery practices (and, yes, the occasional product or two) that I know work, and work well. In fact, we’ve got a library full of customer case studies as proof positive!

A great example is Avamar. For the past year or so, we’ve really been trying to stress to customers the important role the right backup and recovery solutions can play in protecting VMware environments, and how they can actually help accelerate virtualization plans by simplifying the management and scale of VMware environments.

Avamar is able to do this through tight integration with VMware, notably vSphere’s vStorage APIs for Data Protection (VADP). This integration enables a data-center-wide view of the virtualized environment and allows for “changed block tracking” (CBT) in both the backup and restore process, meaning customers can achieve virtual machine (VM) data protection at scale with minimal impact to applications and production VM resources. This simply isn’t possible with traditional backup at scale in virtualized environments.
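
To illustrate the changed-block idea on the restore side (a generic sketch with made-up data, not the VADP API itself): rather than rewriting an entire virtual disk, only the blocks that differ from the backup image need to be written back.

```python
# Generic sketch of a changed-block restore; not the VADP/CBT API itself.
def changed_block_restore(backup: dict, current_disk: dict) -> int:
    restored = 0
    for offset, good_data in backup.items():
        if current_disk.get(offset) != good_data:
            current_disk[offset] = good_data    # write back only this block
            restored += 1
    return restored

# Made-up example: a five-block virtual disk where corruption touched one block.
backup_image = {i: f"block-{i}".encode() for i in range(5)}
vm_disk      = dict(backup_image)
vm_disk[3]   = b"corrupted"

blocks_written = changed_block_restore(backup_image, vm_disk)
print(f"restored {blocks_written} of {len(backup_image)} blocks")   # restored 1 of 5 blocks
assert vm_disk == backup_image
```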

But don’t just take my word for it. Customers around the world are experiencing the benefits. They are seeing:

  • Increased OPEX and CAPEX savings
  • Less complexity, simplified management
  • Faster backups – from days to just hours
  • More consistent backups – providing a greater level of business protection
  • Significant capacity savings as a result of data deduplication
  • Virtualization rates of 80% to 90% as a direct result

This is the paradigm shift you may have heard about or even read about in my previous post. This is how you deliver better SLAs to your LOBs. And this is how you prepare for the future.