Consolidating EMC VNX File Servers in the Cloud

Buurst SoftNAS Shares: A Use Case of How We Consolidated EMC VNX File Servers in the Cloud for Easy Access & Sync

Since our early days in 2013, SoftNAS® has seen hundreds of customers move out of their on-premises and colocation datacenters into the cloud. Today, we see an even sharper increase in the number of customers leaving their EMC VNX and other traditional NAS file servers behind, choosing to consolidate and replace aging hardware storage systems with cloud-based NAS file shares. The business impetus to make the change often begins with an upcoming hardware maintenance refresh cycle or a corporate decision to move some or all of its applications into the cloud.

Of course, the users continue to require access to their file shares – over the LAN/WAN from the office and via VPN connections while traveling and working remotely.

How to consolidate file servers for on-premises users into the cloud – a use case

One of the first issues that comes up is how do we seed tens to hundreds of terabytes of live production data from VNX file shares, where it’s actively used today, into the cloud? And then how do we maintain synchronization of file changes during the migration and transition phase until we’re ready to flip the DNS and/or Active Directory policies to point to the cloud-based shares instead?

In this use case, we’re showcasing the implementation of a solution for a well-known media and entertainment company, with dozens of corporate file shares.

A hybrid cloud solution

The initial Seeding Phase involves synchronizing the data from many VNX-based file shares into the cloud. As shown in Figure 1 below, the customer chose a 1 Gbps Direct Connect from the corporate data center to the AWS® VPC for dedicated bandwidth.

The AWS Direct Connect link was used initially for the migration phase, and now provides the high-speed interconnect from the corporate WAN for site-to-site access to the corporate file shares. Later, it became the primary data path connecting the corporate WAN with AWS and the file shares (and other applications hosted in AWS).

As shown above, a SoftNAS Platinum VM was created from a VMware OVA file and operated locally on VMware in the corporate data center. SoftNAS Platinum supports an Apache NiFi-based facility known as FlexFiles.

First, the CIFS shares on the VNX were mounted from SoftNAS. Another copy of SoftNAS Platinum was then launched on AWS® as the VNX NAS replacement. A storage pool was created, backed by four 5-terabyte EBS disk devices, configured in a RAID array to increase the IOPS and performance, and provide the necessary storage space.
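
For readers who want to picture what backing the pool with EBS involves underneath the SoftNAS console, here is a minimal, hypothetical boto3 sketch that creates and attaches four 5-terabyte General Purpose SSD volumes to the NAS instance. The instance ID, availability zone, and device names are placeholders; in the actual project these disks were provisioned through SoftNAS and the AWS console, not this script.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    INSTANCE_ID = "i-0123456789abcdef0"   # placeholder: the SoftNAS EC2 instance
    AZ = "us-east-1a"                     # must match the instance's availability zone
    DEVICES = ["/dev/xvdf", "/dev/xvdg", "/dev/xvdh", "/dev/xvdi"]

    volume_ids = []
    for device in DEVICES:
        # Create a 5 TB (5,120 GiB) gp2 volume.
        vol = ec2.create_volume(AvailabilityZone=AZ, Size=5120, VolumeType="gp2")
        volume_ids.append(vol["VolumeId"])

    # Wait until the volumes are available, then attach them to the instance.
    ec2.get_waiter("volume_available").wait(VolumeIds=volume_ids)
    for vol_id, device in zip(volume_ids, DEVICES):
        ec2.attach_volume(VolumeId=vol_id, InstanceId=INSTANCE_ID, Device=device)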

A thin-provisioned SoftNAS® Volume was created with compression enabled. Data compression reduced the 20 TB of VNX data down to 12 TB. This left more headroom for growth and since the volume is thin-provisioned, the storage pool’s space was also available for other volumes and file shares that came later.
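
SoftNAS configures pools and volumes through its web console, but because the product is built on ZFS on Linux (noted later in this article), the raw equivalents give a feel for what happens underneath. This is a hypothetical sketch, driven from Python for consistency with the other examples; the pool and volume names are made up, and the exact RAID layout used in the project isn't specified beyond "RAID array".

    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Stripe the four EBS devices into one pool (striping maximizes IOPS;
    # a parity layout such as raidz trades some performance for redundancy).
    run(["zpool", "create", "naspool1",
         "/dev/xvdf", "/dev/xvdg", "/dev/xvdh", "/dev/xvdi"])

    # A ZFS filesystem is thin by default (no reservation); lz4 compression is
    # the kind of setting that shrank roughly 20 TB of VNX data to about 12 TB.
    run(["zfs", "create", "-o", "compression=lz4", "naspool1/vnx_shares"])
    run(["zfs", "set", "sharesmb=on", "naspool1/vnx_shares"])  # expose as a CIFS/SMB share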

A SnapReplicate® relationship was created from the on-premises SoftNAS VM running on VMware® to the SoftNAS instance running in AWS. SnapReplicate performs snapshot-based block replication from the source node to the target. Once per minute, it gathers all block changes made since the last snapshot, then replicates just those changes to the target system. This is very efficient, and the transfer includes data compression and SSH encryption.
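
SnapReplicate itself is SoftNAS's own engine, so the loop below is not its implementation; it is a simplified, hypothetical sketch of the once-per-minute snapshot-and-send cycle described above, expressed with standard ZFS incremental replication over SSH. The dataset name and SSH alias are placeholders.

    import subprocess, time

    SRC = "naspool1/vnx_shares"   # source dataset on the on-premises node (placeholder)
    TARGET = "softnas-aws"        # SSH alias for the SoftNAS node in AWS (placeholder)

    def replicate_forever():
        prev, n = None, 0
        while True:
            snap = f"{SRC}@repl-{n}"
            subprocess.run(["zfs", "snapshot", snap], check=True)
            if prev is None:
                send = ["zfs", "send", snap]              # initial full send
            else:
                send = ["zfs", "send", "-i", prev, snap]  # incremental: changed blocks only
            # Stream to the target over compressed, encrypted SSH and apply it.
            sender = subprocess.Popen(send, stdout=subprocess.PIPE)
            subprocess.run(["ssh", "-C", TARGET, "zfs", "receive", "-F", SRC],
                           stdin=sender.stdout, check=True)
            sender.wait()
            prev, n = snap, n + 1
            time.sleep(60)   # once per minute, per the description above

    replicate_forever()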

Next, the SoftNAS team created several NiFi data flows that continuously replicated and synchronized the VNX CIFS share contents directly onto a local SoftNAS-based ZFS on Linux filesystem running in the same datacenter on VMware. These flows ran continuously, along with SnapReplicate, to actively seed and sync the VNX file shares with the new SoftNAS Cloud NAS filer running in AWS.
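
The seeding flows were built in Apache NiFi (FlexFiles), not Python, so the loop below is only a rough illustration of what "continuously replicate changed files from the mounted CIFS share onto the local ZFS filesystem" means. The mount points are placeholders, and the real flows handle retries, deletions, and permissions that this sketch ignores.

    import os, shutil, time

    VNX_MOUNT = "/mnt/vnx_share"       # VNX CIFS share mounted on the SoftNAS VM (placeholder)
    ZFS_PATH = "/naspool1/vnx_shares"  # local ZFS filesystem backing the cloud copy (placeholder)

    def sync_changed_files():
        """Copy any file that is new, or newer, on the VNX side."""
        for root, _dirs, files in os.walk(VNX_MOUNT):
            rel = os.path.relpath(root, VNX_MOUNT)
            dest_dir = os.path.join(ZFS_PATH, rel)
            os.makedirs(dest_dir, exist_ok=True)
            for name in files:
                src, dst = os.path.join(root, name), os.path.join(dest_dir, name)
                if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                    shutil.copy2(src, dst)   # copy data and preserve timestamps

    while True:
        sync_changed_files()
        time.sleep(300)   # re-scan periodically; NiFi schedules this with its own processors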

After this phase was completed, the on-premises SoftNAS node was no longer required, so the SnapReplicate connection was deleted, leaving a copy of the VNX file share data in the cloud. Then the SoftNAS node was removed from VMware.

During the final phase of the migration, various user communities had to be migrated across dozens of file shares. To maintain synchronization during this phase, the FlexFiles/NiFi flow was moved to SoftNAS Platinum running in AWS, as shown in Figure 2 below.

Over a period of several weeks, different departments’ file shares were cut over to use the new consolidated cloud file server. Throughout that period, any straggling changes still arriving on the VNX were picked up and replicated over to SoftNAS in AWS. After all the file shares were verified to be operating correctly in AWS, the VNX was decommissioned as part of the overall datacenter shutdown project.

Successful consolidation of EMC VNX file servers

This project was on a tight timetable from the start. The entire solution had to be developed, tested, and then used to migrate live corporate file shares from on-premises to AWS within 45 days to stay on schedule for the datacenter closedown. The migration was completed without impacting the user community, whose workflows and business were unaffected by where their file share data is hosted.

Find the right fit

Corporations are increasingly choosing cloud hosting for both data and applications. As maintenance contracts for popular EMC VNX, Isilon®, NetApp® and other arrays come up for renewal, customers are increasingly choosing the cloud over continuing to be in the data center and hardware business. The customer is faced with a fork in the road – stay on the hardware treadmill of endless maintenance, capacity upgrades and periodic forklift replacements – or move to the cloud and let someone else worry about it for a change.

Buurst SoftNAS Platinum provides multiple avenues for data migration projects like this one, including the strategy used for this project. In addition to FlexFiles/NiFi and SnapReplicate, there’s also an end-to-end Lift and Shift feature that can be used to migrate both NFS and CIFS from virtually anywhere the data sits today into the cloud. SoftNAS also operates in conjunction with Snowball in several configurations for situations involving hundreds of terabytes to petabytes of data.

Request a free consultation with our cloud experts to identify the best way forward for your business to migrate from hardware storage to cloud-based NAS.

High Performance Computing (HPC) in the Cloud

High Performance Computing Solutions (HPC Storage Solutions) in the Cloud – Why We Need It, and How to Make It Work.

Novartis successfully completed a cancer drug project in AWS. The pharma giant leased 10,000 EC2 instances with about 87,000 compute cores for 9 hours at a disclosed cost of approximately $4,200. They estimated that purchasing the equivalent hardware on-prem, plus the associated expenses required to complete the same tasks, would have cost approximately $40M. Clearly, High Performance Computing (HPC) in the cloud is a game-changer. It reduces CAPEX and computing time, and it provides a level playing field for all – you don’t have to make a huge investment in infrastructure. Yet, after all these years, cloud HPC hasn’t taken off as one would expect. The reasons are many, but one big deterrent is storage.

Currently available AWS and Azure services have throughput, capacity, pricing or cross-platform compatibility issues that make them less than adequate for cloud HPC workloads. For instance, AWS EFS requires a large minimum file system size to offer adequate throughput for HPC workloads. AWS EBS is a raw block device with a 16 TB limit and requires an EC2 instance to front it. AWS FSx for Lustre and for Windows has issues similar to those of EBS and EFS.
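
To make the EFS point concrete: in its default bursting mode, baseline throughput scales with the amount of data stored, at roughly 50 MiB/s per TiB (that per-TiB figure is our assumption about the then-current EFS behavior, not something stated in this article). A quick back-of-the-envelope check:

    # Assumed EFS bursting-mode baseline: ~50 MiB/s of throughput per TiB stored.
    BASELINE_MIB_S_PER_TIB = 50
    target_mib_s = 500   # e.g., the sustained write target in the use case below

    required_tib = target_mib_s / BASELINE_MIB_S_PER_TIB
    print(f"~{required_tib:.0f} TiB must already be stored to sustain {target_mib_s} MiB/s")
    # -> ~10 TiB of stored data just to reach a 500 MiB/s baseline, regardless of actual capacity needs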

The Azure Ultra SSD is still in preview; it currently supports only Windows Server and RHEL and is likely to be expensive as well. Azure Premium Files, also still in preview, is capped at 100 TB per share, which could be restrictive for some HPC workloads. Still, Microsoft promises 5 GiB/s of throughput per share and IOPS burstable to 100,000 per share, up to that 100 TB capacity.

Making Cloud High Performance Computing (HPC) storage work

For effective High Performance Computing solutions in the cloud, performance must be predictable. All components of the solution (compute, network, storage) have to be the fastest available to optimize the workload and leverage the massive parallel processing power available in the cloud. Burstable storage is not suitable – if resources are withdrawn mid-run, the process will fail.

With the SoftNAS Cloud NAS Filer, dedicated resources with predictable, reliable performance become available in a single comprehensive solution. There’s no need to purchase, integrate, and configure separate software, so the solution can be deployed rapidly from the marketplace – you can have SoftNAS up and running in an hour.

The completeness of the solution also makes it easy to scale. As a business, you can select the compute and the storage needed for your NAS and scale up the entire virtual cloud NAS as your needs increase.

You can customize further to suit the specific needs of your business by choosing the type of drive needed and by choosing between CIFS and NFS sharing, with high availability.

HPC Solutions in the cloud – A use case

SoftNAS has worked with clients to implement cloud HPC. In one case, a leading oil and gas corporation commissioned us to identify the fastest throughput performance achievable with a single SoftNAS instance in Azure, in order to facilitate migration of their internal E&P application suite.

The suite was being run on-prem using a NetApp SAN and current-generation HP ProLiant blade servers, with remote customers connecting to Hyper-V clusters running GPU-enabled virtual desktops.

Our team ascertained the required speeds for HPC in the cloud as:

  • Sustained write speeds of 500 MBps to a single CIFS share

  • Sustained read speeds of 800 MBps from a single CIFS share

High Performance Computing in the Cloud PoC – our learnings

  • While the throughput performance criteria were achieved, the LS64s_v2’s bundled NVMe disks are ephemeral, not persistent. In addition, the pool cannot be expanded with additional NVMe disks, only SSDs. These factors eliminate this instance type from consideration.
  • Enabling Accelerated Networking on any/all VMs within an Azure solution is critical to achieving the fastest performance possible.
  • It appears that Azure Ultra SSDs could be the fastest storage product in any cloud. They are currently available only in beta in a single Azure region/AZ and cannot be tested with Marketplace VMs as of the time of publishing. On Windows 2016 VMs, we achieved 1.4 GBps write throughput on a DS_v3 VM as part of the Ultra SSD preview program.
  • When testing the performance of SoftNAS with client machines, it is important that the test machines have network throughput capacity equal to or greater than that of the SoftNAS VM, and that accelerated networking is enabled.
  • On pools composed of NVMe disks, adding a ZIL or read cache of mirrored premium SSD drives actually slows performance.

 

Achieving Cloud HPC Solutions/Success

SoftNAS is committed to leading the market as a provider of the fastest Cloud storage platform available. To meet this goal, our team has a game plan.

  • Testing/benchmarking the fastest EC2 instances and Azure VMs (e.g., i3.16xlarge, i3.metal) with the fastest disks.
  • Fast adoption of new cloud storage technologies (e.g., Azure Ultra SSD).
  • For every POC, production deployment, or internal test of SoftNAS, measuring the throughput and IOPS and documenting the instance and pool configurations. This information needs to be accessible to our team so we can match configurations to required performance.

SoftNAS provides customers a unified, integrated way to aggregate, transform, accelerate, protect and store data, and to easily create hybrid cloud solutions that bridge islands of data across SaaS, legacy systems, remote offices, factories, IoT, analytics, AI and machine learning, web services, SQL, NoSQL and the cloud – any kind of data. SoftNAS works with the most popular public, private, hybrid, and premises-based virtual cloud operating systems, including Amazon Web Services, Microsoft Azure, and VMware vSphere.

SoftNAS Solutions for HPC Linux Workloads 

This solution leverages the Elastic Fabric Adapter (EFA), and AWS clustered placement groups with i3en family instances and 100 Gbps networking. Buurst testing measured up to 15 GB/second random read and 12.2 GB/second random write throughput. We also observed more than 1 million read IOPS and 876,000 write IOPS from a Linux client, all running FIO benchmarks. 
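
The FIO runs themselves are ordinary command-line jobs. The wrapper below is a hypothetical reconstruction of the kind of parallel random-read job used; the block size, queue depth, job count, and mount path are guesses rather than the exact parameters Buurst ran.

    import subprocess

    def run_fio_randread(directory="/mnt/softnas", jobs=8, runtime_s=60):
        """Launch a parallel random-read fio job against the SoftNAS-backed mount."""
        cmd = [
            "fio", "--name=randread",
            "--ioengine=libaio", "--direct=1",
            "--rw=randread", "--bs=1m",
            f"--numjobs={jobs}", "--iodepth=32",
            "--size=10G", f"--runtime={runtime_s}", "--time_based",
            f"--directory={directory}", "--group_reporting",
        ]
        subprocess.run(cmd, check=True)

    run_fio_randread()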

The following block diagram shows the system configuration used to attain these results.

The NVMe-backed instance contained storage pools and volumes dedicated to HPC read and write tasks. Another SoftNAS Persistent Mirror instance leveraged SoftNAS’ patented SnapReplicate® asynchronous block replication to EBS provisioned IOPS for data persistence and DR. 

In real-world HPC use cases, one would likely deploy two separate NVMe-backed instances – one dedicated to high-performance read I/O traffic and the other for HPC log writes. We used eight or more synchronous iSCSI data flows from a single HPC client node in our testing. It’s also possible to leverage NFS across a cluster of HPC client nodes, providing eight or more client threads, each accessing storage. Each “flow,” as it’s called in placement group networking, delivers 10 Gbps of throughput. Maximizing the available 100 Gbps network requires leveraging 8 to 10 or more such parallel flows. 

The persistence of the NVMe SSDs runs in the background asynchronously to the HPC job itself. Provisioned IOPS is the fastest EBS persistent storage on AWS. SoftNAS’ underlying OpenZFS filesystem uses storage snapshots once per minute to aggregate groups of I/O transactions occurring at 10 GB/second or faster across the NVMe devices. Once per minute, these snapshots are persisted to EBS using eight parallel SnapReplicate streams, albeit trailing the near real-time NVMe HPC I/O slightly. When the HPC job settles down, the asynchronous persistence writes to EBS catch up, ensuring data recoverability when the NVMe instance is powered down or is required to move to a different host for maintenance patching reasons. 

Here’s a sample CloudWatch screengrab taken off the SoftNAS instance after one of the FIO random write tests. We see more than 100 Gbps Network In (writes to SoftNAS) and approaching 900,000 random write IOPS. The reads (not shown here) clocked in at more than 1,000,000 IOPS (less than the 2 million IOPS AWS says the NVMe can deliver – it would take more than 100 Gbps networking to reach the full potential of the NVMe). 

One thing that surprised us is there’s virtually no observable difference in random vs. sequential performance with NVMe. Because NVMe comprises high-speed memory that’s directly attached to the system bus, we don’t see the usual storage latency differences between random seek vs. sequential workloads – it all performs at the same speed over NVMe. 

The level of performance delivered over EFA networking to and from NVMe for Linux workloads is impressive – the fastest SoftNAS Labs has ever observed running in the AWS cloud – a million IOPS and 15 GB/second read performance and 876,000 write IOPS at 12.2 GB/second.

This HPC storage configuration for Linux can be used to satisfy many use cases, including: 

  • Commercial HPC workloads 
  • Deep learning workloads based upon Python-based ML frameworks like TensorFlow, PyTorch, Keras, MxNet, Sonnet, and others that require feeding massive amounts of data to GPU compute clusters 
  • 3D modeling and simulation workloads 
  • HPC container workloads.

SoftNAS Solutions for HPC Windows Server and SQL Server Workloads 

This solution leverages the Elastic Network Adapter (ENA) and AWS clustered placement groups with i3en family instances and 25 Gbps networking. Buurst testing measured up to 2.7 GB/second read and 2.9 GB/second write throughput on Windows Server running Crystal Disk benchmarks. We did not have time to benchmark SQL Server in this mode, something we plan to do later.

Unfortunately, Windows Server on AWS does not support the 100 Gbps EFA driver, so at the time of these tests, placement group networking with Windows Server was limited to 25 Gbps via ENA only. 

The following block diagram shows the system architecture used to attain these results. 

To provide high availability and high performance, which Buurst calls High-Performance HA (HPHA), it’s necessary to combine two SoftNAS NVMe-backed instances deployed into an iSCSI mirror configuration. The mirrors use synchronous I/O to ensure transactional integrity and high availability. 

SnapReplicate uses snapshot-based block replication to persist the NVMe data to provisioned IOPS EBS (or any EBS class or S3) for DR. The DR node can be in a different zone or region, as dictated by the DR requirements. We chose provisioned IOPS to minimize persistence latency.

Windows Server supports a broad range of applications and workloads. We increasingly see SQL Server, Postgres, and other SQL workloads being migrated into the cloud. It’s common to see various large-scale enterprise applications like SAP, SAP HANA, and other SQL Server and Windows Server workloads that require both high-performance and high availability. 

The above configuration leveraging NVMe-backed instances enables AWS to support more demanding enterprise workloads for data warehousing, OLAP, and OLTP use cases. Buurst SoftNAS HPHA provides high-performance, synchronous mirroring across NVMe instances with high availability and the level of data persistence and DR required by many business-critical workloads.

Buurst SoftNAS for HPC solutions

AWS i3en instances deliver a massive amount of punch in CPU horsepower, cache memory, and up to 60 terabytes of NVMe storage. The EFA driver, coupled with clustered placement group networking, delivers high-performance 100 Gbps networking and HPC levels of IOPS and throughput. The addition of Buurst SoftNAS makes data persistence and high availability possible to more fully leverage the power these instances provide. This situation works well for Linux-based workloads today. 

However, the lack of an Elastic Fabric Adapter driver for full 100 Gbps networking with Windows Server is undoubtedly a sore spot – one we hope the AWS and Microsoft teams are working to resolve.

The future for HPC in AWS looks bright. We can imagine a day when more than 100 Gbps networking becomes available, enabling customers to take full advantage of the 2 million IOPS the NVMe SSDs remain poised to deliver.

Buurst SoftNAS for HPC solutions operates very cost-effectively on as little as a single node for workloads that do not require high availability, or on just two nodes with HA. Unlike other storage solutions that require a minimum of six (6) i3en nodes, the SoftNAS solution provides cost-effectiveness, HPC performance, high availability, and persistence with DR options across all AWS zones and regions.

Buurst SoftNAS and AWS are well-positioned today with commercially off-the-shelf products that, when combined, clear the way to move numerous HPC, Windows Server, and SQL Server workloads from on-premises data centers into the AWS cloud. And since SoftNAS is available on-demand via the AWS Marketplace, customers with these types of demanding needs are just minutes away from achieving HPC in the cloud. SoftNAS is available to assist partners and customers in quickly configuring and performance-tuning these HPC solutions. 

Adaptable Storage Cost & Performance – What’s Cloud Got to Do with It?

By Michael Richtberg, VP of Strategic Business Development

Growing up, we outgrew our bicycles. As adults we outgrow lots of things, like our cars or houses. Businesses change too, and so does the infrastructure used to run them. Unfortunately, the traditional options for storage tend to be fairly inflexible. As business applications and industry conditions change, the underlying infrastructure may no longer provide the right support.

Moving to a cloud-based architecture may feel new and exciting, but maybe a little adventurous. At SoftNAS, we take your cloud journey seriously and help make the transition faster and easier. We start by making it possible to move your storage and compute to the cloud platform of your choice without having to rebuild all of your applications on some new cloud-based application architecture. In other words, you don’t have to replace your frequently used file-based storage systems to leverage virtualized compute on a public cloud platform. It’s common to think you have to create a brand new object-based application to get the benefits of cloud computing. At SoftNAS, that isn’t required.

Flexibility – Adaptable Storage

Another major benefit often overlooked is the flexibility to change as your needs change. Unlike traditional terrestrial storage systems, which remain fairly fixed until you get a new capital budget or the product is no longer supported, cloud storage options provide far greater flexibility. Without changing your current data, you can adjust your capacity, the performance characteristics, or the type of data store to fit a combination of cost and performance requirements.

Using infrastructure from SoftNAS on a cloud-based virtualization platform, you can make these changes non-destructively. Our customers can start with an early proof-of-concept that may not require much performance or capacity but still needs to demonstrate the functionality. After getting through the PoC phase, they often need a pre-production level of performance and capacity. In production, responsiveness and capacity become critical success factors. Unlike traditional storage, these transitions can occur without disruption.

The compute instance (the cloud term for the “server” that hosts the SoftNAS virtual storage appliance) can vary from a few cores, low RAM, and slow networking to high core counts, expansive RAM, local SSD, and fast networking… all of which SoftNAS Cloud NAS utilizes for improved performance. On the storage side, capacity can elastically expand or contract based on need. SoftNAS can even enable simultaneous use of multiple back-end storage types (data stores) for different responsiveness characteristics. As you need it, just add more capacity to instantly expand volumes and/or LUNs.

Using a cloud-based architecture to host your storage isn’t just about moving it; it’s about changing the way you think about flexibility. You’re no longer locked into CapEx depreciation cycles that may create a mismatch between system capabilities and business needs. Nor do you need to be a fortune teller who must predict the next three years of change (or buy and over-provision today just to be ready for tomorrow).


Introducing the SoftNAS No Downtime Guarantee Program

With “30% of primary data located in some form of cloud storage,” substantial losses may seem inevitable when choosing to migrate to the cloud.

A recent study on data protection, conducted by Vanson Bourne (and funded by EMC), revealed that data loss and downtime cost enterprises $1.7 trillion in the last 12 months – roughly half of Germany’s $3.6 trillion GDP (2013). In fact, data loss continues to climb: since 2012, it is up more than 400%.

However, not all cloud storage vendors are equal.

At SoftNAS, we believe you don’t have to sacrifice enterprise capabilities to take advantage of cloud convenience, which is why the team is excited to announce the SoftNAS No Storage Downtime Guarantee Program.

SoftNAS will be available and usable without any noticeable disruption in storage connectivity, with a 99.999% uptime SLA, when operated with production workloads under SoftNAS best practices – or we will refund one month of SoftNAS service fees.

We aren’t saying one month’s refund will make up for the lost revenue, poor customer satisfaction, or damaged credibility associated with downtime; we’re simply showing our customers how confident we are in the SoftNAS product line.

Don’t take our word for it. Try SoftNAS now.

 

To Cloud or Not to Cloud – That is the Question for CXOs

It’s interesting being out front where technology paradigm shifts take place. At SoftNAS, we are increasingly seeing customers facing a major fork in the road ahead and a critical decision point where their IT infrastructure is concerned:

Door #1 – Renew legacy storage array maintenance for 3 to 5 years and re-commit to our own data center
Door #2 – Move away from the legacy storage arrays and onto commodity x86 servers with software-defined storage and virtual storage apps
Door #3 – Bite the bullet and migrate mission-critical data and applications to IaaS in the cloud.

Increasingly, we see companies choosing door #3, especially when faced with hundreds of thousands or even millions of dollars in maintenance renewals for their EMC®, NetApp® or other legacy storage arrays. Industry analysts are seeing the same trend we do (as are the financial analysts, who are dealing firsthand with the fallout this market shift is causing for public company storage vendors).

I read an interesting post today on The Register, covering Forrester analyst Henry Baltazar’s blog, entitled:

Forrester says it’s time to give up on physical storage arrays – the physical/virtual storage tipping point may just have arrived. It says:

The storage industry knows that the market for physical arrays is in decline. Cheap cloud storage and virtual arrays have emerged as cheaper and often just-as-useful alternatives, making it harder to justify the cost of a dedicated array for many applications.

 

Forrester knows this, too: one of its analysts, Henry Baltazar, just declared you should “make your next storage array an app”.

 

Baltazar reckons arrays are basically x86 servers these days, yet are sold in ways that lock their owners to an inelastic resource for years. Arrays also attract premium pricing, which is not a good look in these days of cloud services and pooled everything running on cheap x86 servers.

 

The time has therefore come to recognize that arrays are expensive and inflexible, Baltazar says, and make the jump to virtual arrays for future storage purchases.

 

Storage has been confined to hardware appliance form factors for far too long. Over the past two decades, innovation in the storage space has transitioned from proprietary hardware controllers and processors to proprietary software running on commodity x86 hardware…

 

While this is all true, I think it misses the bigger picture and, increasingly, the more critical decision that IT managers must make: whether to be in the hardware business at all going forward.

Cloud platforms like Amazon’s AWS and Microsoft Azure are becoming the new pivot point for IT when companies face major investment decisions – storage maintenance coming up for renewal, major expansions of existing storage arrays to support new projects, or major new application projects being undertaken by DevOps teams. I expect vCloud Air, Google Cloud, HP Cloud, Rackspace Cloud and myriad other niche cloud players like FireHost will continue to attract customers who are ready to get out of the hardware business altogether and focus instead on the backlog of IT projects at hand, rolling out new applications faster, easier and less expensively.

Of course, for some time many companies will continue with their historically incremental approach to IT, paying whatever they must and moving forward along the path of least resistance and perceived risk; however, in these times of cost containment and increased IT budget efficiency, others are now questioning their overall IT infrastructure strategy, realizing there is a fork in the road ahead…

The fundamental question is increasingly whether the company should continue to be in the data center and/or hardware business at all, or start fresh with cloud-based IaaS. For those committed to remaining in the data center and hardware business, as Baltazar correctly points out, customers are now choosing to take more ownership of their data management needs, leveraging software-defined storage and virtual storage appliances with commodity servers and storage gear.

We see all three paths being taken, and acknowledge there is no right or wrong answer – just different ways forward based upon each customer’s overall business and IT strategy, objectives and budget constraints.

Of course, we’re happy to see customers choosing door #2 or door #3, where SoftNAS enables our customers with something they have never before enjoyed when it comes to storage – the freedom to choose whichever approach they want and need – pure cloud IaaS, software-defined storage on-premise or in the colo facility, or some hybrid combination that makes the transition easier and more incremental.

—————-
Rick Braddy is founder of SoftNAS and inventor of SoftNAS, the #1 Best-Selling NAS in the Cloud.
See us in booth #801 at VMworld 2014, where we will be demonstrating how customers now have freedom of choice where data storage and IT platforms are concerned.

What Can 45% of SMBs Who Experienced Data Loss Do Differently?

Survey says… 45% of SMBs Experienced Data Loss, according to Storage Newsletter

More than 1,000 SMB IT professionals responded to the survey “Backing up SMBs”, which investigates backup and recovery budgets, technologies, planning, and key considerations for companies with fewer than 1,000 employees.

45% of respondents said their organization had experienced a data loss, costing an average of nearly $9,000 in recovery fees. Of those, 54% say the data loss was due to a hardware failure.

“Data is the lifeblood of any business – big or small,” said Deni Connor, founding analyst of Storage Strategies NOW. “The opportunity to provide SMBs with better and more cost-effective ways to protect and recover data is huge. While these companies may have smaller IT staffs, they collectively account for a significant portion of the total backup and recovery market.”

Additional highlights from the survey include:

SMBs spend an average of $5,700 each year to manage backup and recovery environments. While the majority of respondents (70%) are satisfied with current backup methods, nearly one-third (30%) believe their approaches and technologies are insufficient.

When it comes to DR, an even greater number of SMBs (42%) believe their company’s plans fall short. Furthermore, only 30% think all information would be recoverable in the event of a disaster.

The top technology used by SMBs to back up information is DAS. Cloud-based backup and recovery offerings have gained a foothold: currently, 30% use hosted solutions, and 14% plan to invest in a hosted offering within the next year.

Reliability and security are the top two priorities for SMBs considering hosted backup solutions. Of those currently using or planning to implement a private, hybrid, or public cloud backup platform, 77% prefer a private or hybrid approach while 23% favor a public cloud offering.

So what can be done differently to minimize data loss and outages?

Reliability is something that must be “designed into” the IT solutions that SMBs deploy. Backup solutions are becoming more plentiful and affordable – but affordable, viable disaster recovery solutions for SMBs have remained elusive.

When I was CTO for a cloud-hosted virtual desktop company, we were responsible for dozens of SMBs’ data and IT operations – 24x7. I learned a lot about what works and what doesn’t. In fact, we had several close calls, including a “near miss” where we almost lost a company’s data due to a combination of technology failure and human error (with no DR solution in place due to the high costs and limited budgets). If it hadn’t been for storage snapshots being available to use for recovery, that business would likely have lost most of its data… experiences like that made me a true believer in the importance of storage snapshots.

SMBs need the following to properly protect their precious data and business operations:

1) UPS

Uninterrupted power is the foundation for protecting IT equipment from power failures, spikes, and transients that can destroy equipment and cause catastrophic damage (e.g., taking out multiple disk drives and destroying a RAID group’s protection).

2) RAID

Redundant disks with parity provide the ability to recover from one to two simultaneous drive failures.

3) Storage Snapshots

Snapshots provide the ability to recover filesystems to an earlier point in time, for rapid recovery from data corruption, accidental deletion, virus infections, and human error. I am a huge believer in storage snapshots – they have saved the day more times than I can count because you can quickly restore to an earlier point prior to the failure, and get everything back up in a matter of minutes (instead of hours or days).

4) Off-site Redundancy / Replication

It’s critical for data to be backed up and stored off-site, in case something catastrophic occurs at the primary data center. For many SMBs, the “data center” is a 19-inch rack in a locked (hopefully it’s locked) room or closet somewhere in the business’s building, so having a copy of the business-critical data off-site ensures there’s always a way back from any kind of failure or even a local disaster.

5) On-site Redundancy / High-Availability

In addition to having an off-site copy, if you can afford it, an on-site copy for rapid local recovery is also needed. For example, an on-site replica of the data allows failover and rapid recovery (whereas an off-site, replicated copy of data provides protection against data loss and emergency recovery).

There are many off-site backup services available today. They work well and ensure your data is backed up off-site, encrypted, and stored in a secure place. The biggest challenge becomes the time required to restore in the event of a failure. It can take many days (or longer) to download several terabytes of data using these services. How long can a business truly afford to be down? (Usually not that long.)

That’s why we came up with “SnapReplicate” – a way to replicate an entire copy of the data to a second system – one that is capable of being used to actually run the business in an emergency situation, where either the primary data center is destroyed or severely impaired, and the business needs to be brought back up in a matter of hours – not days or weeks.

Anyway, whatever approach you take to backup, recovery, and DR – make sure it has all of the above components – power protection to prevent catastrophic damage and outages caused by the most common culprit, RAID protection against the next most likely failure point (disk drive mechanical failure), storage snapshots (protection against corruption, infection, accidental deletions, and human error), and off-site redundancy via replication, with the ability to bring the business’ IT systems back up at a secondary data center (in case the primary data center is compromised).

How does SoftNAS address SMB Backup, Recovery, and DR needs?

SoftNAS runs on existing, commodity servers from major vendors like IBM, HP, Dell, Super Micro, and others. It operates on standard server operating systems and virtualization platforms, such as Windows Server and VMware. Here’s how each of the Big 5 above is addressed:

1) UPS 

It’s best practice to employ a UPS to provide battery-backed power to the servers running SoftNAS and the other servers running workloads like SQL Server, Exchange, etc. If the servers are in a data center, chances are there are multiple layers of power protection. If the servers are in a rack in the local building, investing in UPS systems with at least 20 minutes of battery operating time and an orderly shutdown process for VMware and Windows is highly recommended.

2) Two Layers of RAID protection plus automatic error detection/correction 

SoftNAS supports multiple levels of RAID: a) hardware RAID, which provides direct RAID protection at the disk controller level, and b) software RAID at the SoftNAS level to detect and recover from soft errors, bit rot, and other errors that aren’t easily detected and corrected by normal RAID systems.
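
The bit-rot detection mentioned in (b) comes from ZFS-style block checksumming, which SoftNAS builds on (see the OpenZFS note in the HPC article earlier). Below is a hedged sketch of how such errors are surfaced and repaired on a raw ZFS pool; the pool name is a placeholder, and SoftNAS exposes this through its own UI rather than these commands.

    import subprocess

    POOL = "naspool1"   # placeholder pool name

    # A scrub walks every block, verifies its checksum, and rewrites any block
    # whose checksum fails using the redundant copy (mirror or parity).
    subprocess.run(["zpool", "scrub", POOL], check=True)

    # 'zpool status' reports checksum errors found and repaired during the scrub.
    subprocess.run(["zpool", "status", "-v", POOL], check=True)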

3) Storage Snapshots and Clones 

SoftNAS provides scheduled snapshots that are automatically maintained, ensuring there are many recovery points as far back in time as you have available storage. Think of these as instant, automatic “incremental backups” that take no time and only occupy as much space as your actual data changes over time. The average SMB creates no more than about 1 GB of new data per day (30 GB per month), so it’s often possible to keep several weeks of snapshots around, especially for user files.

A “clone” is a writable copy of a snapshot – an exact image of the files as they were at the point in time the snapshot marker was originally taken, e.g., last night at 6 p.m. This cloned copy is writable and can be put to immediate use so the servers can be brought back online in a matter of minutes. The clones can also be used to restore missing or corrupted files. And because snapshots and clones do not actually copy any data, they are instantly available for rapid recovery when the chips are down… they have saved my customers’ data and businesses many times in a pinch.
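
Because SoftNAS snapshots and clones are ZFS-style copy-on-write objects (again, see the OpenZFS note in the HPC article earlier), a restore really is about this small. Here is a hypothetical sketch with placeholder dataset and snapshot names; in practice this is driven from the SoftNAS UI:

    import subprocess

    def clone_last_night(dataset="naspool1/userfiles", snapshot="nightly-1800"):
        """Turn last night's snapshot into an immediately usable, writable copy."""
        snap = f"{dataset}@{snapshot}"
        clone = f"{dataset}_restore"
        # The clone shares blocks with the snapshot, so it appears instantly and
        # consumes new space only as it diverges from the original data.
        subprocess.run(["zfs", "clone", snap, clone], check=True)
        return clone

    print("Writable recovery copy at:", clone_last_night())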

4) Off-site Redundancy and Replication 

SoftNAS SnapReplicate™ provides “SyncImage”, an initial full backup of each data volume, followed by once-per-minute “SnapReplicate” actions, which securely copy just the incremental data changes from the source (primary) SoftNAS instance to the target (secondary) SoftNAS instance, typically located off-site at a different data center.

5) On-site Redundancy / HA 

SoftNAS can provide on-site redundancy using SnapReplicate to a local target system, which provides the ability to rapidly recover locally. SoftNAS does not yet include high availability with automatic failover (a feature that’s under development). For now, failover is a manual process that involves a bit of manual reconfiguration, such as updating DNS entries or IP addresses on the secondary SoftNAS unit.

With the many levels of redundancy and failure protection and recovery layers provided, SoftNAS offers a high degree of protection against data loss and multiple ways to recover from failures without data loss – and because it’s available on a monthly basis, most SMBs can actually afford it today.