At Trilio, we use the term workload to mean a set of virtual machines that together run an application set. And today, more than ever, workload agility is critically important.
The legacy approach to workloads is bogged down by inherent challenges. Because workloads are tied directly to the hardware they sit on, logistics alone prevent end-users from becoming infrastructure agnostic. Add to that the many hard and soft costs of the legacy approach: servers, storage, infrastructure, IT staffing, and the spend required to manage all of those components, multiplied across every physical location a business operates. The costs escalate rapidly, especially as the business scales.
The challenges don’t end there, though. More often than not, legacy tools cannot satisfy Recovery Time Objective (RTO) and Recovery Point Objective (RPO) requirements. Data freshness and the last known good state become a tremendous issue when data sets need to be recovered: the larger the data set, the longer the restoration takes. And the costs extend even further when you consider the snowball effect these delays can have on other parts of the business, such as running analytics on stale data or pushing back release cycles.
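To make the RPO point concrete, here is a minimal sketch, not any particular vendor's tool, of how the data-loss window relates to backup frequency. The function name `rpo_met` and the example timestamps are hypothetical; the idea is simply that your worst-case data loss is the time elapsed since the last good backup:

```python
from datetime import datetime, timedelta

def rpo_met(last_backup: datetime, failure_time: datetime, rpo: timedelta) -> bool:
    """The potential data-loss window is the gap between the last good
    backup and the moment of failure; the RPO is met only if that gap
    is no larger than the objective."""
    return (failure_time - last_backup) <= rpo

# Nightly backup at midnight; failure at 6 p.m. means up to 18 hours of lost data.
last_backup = datetime(2023, 5, 1, 0, 0)
failure = datetime(2023, 5, 1, 18, 0)

print(rpo_met(last_backup, failure, timedelta(hours=1)))   # False: a 1-hour RPO is missed
print(rpo_met(last_backup, failure, timedelta(hours=24)))  # True: a 24-hour RPO is met
```

The same arithmetic explains why legacy nightly-backup schedules break down as RPO targets tighten: meeting a one-hour RPO requires capturing a consistent point in time at least hourly, which large, hardware-bound data sets often cannot sustain.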
It’s clear that the legacy approach cannot fully meet the needs of today’s businesses; moving toward greater data and workload agility is imperative. True workload agility requires workloads and data to be portable and accessible anytime, from any location. They also need to be cloud agnostic, so that infrastructure does not matter: any cloud, any geography, any hardware landscape. The best first step is to capture the entire application stack, inclusive of operating systems, compute, network configurations, security groups, data, and metadata, in one efficient, portable, universal point-in-time format.