In the same way that cloud computing revolutionized data centers via unprecedented access to a pool of centralized resources, so too will edge computing revolutionize data processing… by decentralizing it again.
Of course, there’s more to it than that. Edge computing is really about moving some processing back to individual devices — or near them, at least — in order to reduce latency and optimize load. This reduces management headaches and limits the need for devices to send and receive data, particularly when it is time-sensitive.
The Shift Back to Decentralization
When constant connections first became the norm, cloud computing became the standard for many connected devices and applications. However, with the advent of smart gadgets, the centralized nature of the cloud has become a hindrance when data needs to be processed and consumed as quickly as possible.
This problem is likely to compound as we transition to 5G and the amount of data being collected increases dramatically. As a growing number of applications require fast aggregation and analysis of the massive quantities of data they capture, the sheer volume can quickly become a drain on any cloud architecture.
To overcome these limitations, many telecommunications companies have turned to edge computing. A recent AFCOM State of the Data Center Industry survey found that as many as 44% of data center users have deployed or will deploy an edge computing solution in the next 12 months. That’s an incredible shift from the cloud-centric approach of the previous decade.
What Edge Computing Entails
Edge computing is a distributed architecture in which all or most of the data is processed by devices, rather than by servers in the cloud. These devices sit at the “edge” of the computational architecture, hence the name.
The beauty of edge computing lies in the fact that it limits the need to send and receive data. The emergence of smart devices has resulted in enormous amounts of data being collected and transmitted. Much of this data is time-sensitive.
The simple truth is that our communications architecture is not capable of handling such high usage, so sending and receiving that much data will predictably result in massive latency issues. This is where edge computing becomes vital as a means to alleviate bottlenecks at the point of processing.
With cloud computing, nearly all data is processed at the server. When traffic is heavy, the server’s processing power becomes a bottleneck, slowing responses to connected devices. Even though servers have drastically more processing power than individual devices, when demand is high enough, they do slow down, and it is financially untenable to provision enough server capacity to run smoothly at peak demand.
But by performing most (or all) of the data processing on the device, latency is all but eliminated. This becomes particularly important for autonomous vehicles or healthcare IoT equipment like heart pumps, where data is extremely time-sensitive. For smart cars and similar devices, this plays a critical role in their ability to respond to changing conditions in an instant.
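To make the latency argument concrete, here is a minimal back-of-the-envelope sketch. All the numbers are illustrative assumptions (a typical round trip to a distant data center, a busy server’s queuing delay, a slower on-device processor), not measurements:

```python
# Illustrative latency model: end-to-end time = network round trip + queuing + processing.
# Every number below is an assumption for the sake of the example.

CLOUD_RTT_MS = 80      # assumed round trip to a distant regional data center
EDGE_RTT_MS = 0        # on-device processing needs no network hop
CLOUD_PROCESS_MS = 5   # a powerful server processes each request quickly
EDGE_PROCESS_MS = 20   # a constrained device processes more slowly
CLOUD_QUEUE_MS = 30    # assumed queuing delay when the server is under heavy load

def end_to_end_latency(rtt_ms: float, process_ms: float, queue_ms: float = 0.0) -> float:
    """Total time for one request: network transit + queuing + processing."""
    return rtt_ms + queue_ms + process_ms

cloud = end_to_end_latency(CLOUD_RTT_MS, CLOUD_PROCESS_MS, queue_ms=CLOUD_QUEUE_MS)
edge = end_to_end_latency(EDGE_RTT_MS, EDGE_PROCESS_MS)

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")  # cloud: 115 ms, edge: 20 ms
```

Even though the device itself processes each request more slowly, removing the network round trip and server queuing dominates the total, which is why time-sensitive workloads favor the edge.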
Is Cloud Computing Dead?
Well, no. But edge computing is having an impact on the massive, centrally-located facilities that have ruled the roost in preceding decades. Today, the majority of data centers are concentrated in Northern Virginia, Greater New York, Chicago, Dallas, Silicon Valley, and Los Angeles. Edge computing is already forcing that to change.
Edge computing works best with smaller regional data centers and may even use microdata centers located at telecom towers. Data center operators that want to enter the edge computing market will need to invest in smaller regional facilities, along with upgrading on-site enclosures and end-user devices. Implementing such changes requires significant capital investment, but fortunately edge computing shares much of the same architecture as cloud computing.
Balancing Cloud Computing with Edge Computing
Edge computing is not a replacement for the cloud. Rather, edge computing is great for real-time data analysis and processing, while cloud computing caters to historical data that requires longer, more intensive analysis.
Companies must design their architecture based on the nature of the business and their applications. When latency is not a concern, cloud computing offers roughly the same benefits at a lower cost and with simpler security (in an edge deployment, every single device must be individually protected). When latency is a concern, edge computing is clearly superior thanks to its faster processing at the point of data collection.
In the real world, most companies that have embraced edge computing will inevitably use it as an extension and optimization of their cloud architecture. Successful hybrid architectures will produce timely data processing on the device level, with the remainder of the data processing continuing to be performed by highly elastic centralized resources.
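One way to picture such a hybrid design is a small dispatcher on the device that handles time-sensitive readings locally and batches everything else for cloud upload. This is a minimal sketch under assumed names and a hypothetical `time_sensitive` flag, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Reading:
    """A single sensor reading; the time_sensitive flag is a hypothetical label."""
    sensor: str
    value: float
    time_sensitive: bool  # e.g., a heart-pump alarm vs. a daily usage statistic

@dataclass
class HybridDispatcher:
    """Route readings: process time-sensitive ones on-device, batch the rest for the cloud."""
    cloud_batch: List[Reading] = field(default_factory=list)
    local_results: List[str] = field(default_factory=list)

    def handle(self, reading: Reading) -> None:
        if reading.time_sensitive:
            # Process immediately at the edge: no network round trip.
            self.local_results.append(f"edge-processed {reading.sensor}={reading.value}")
        else:
            # Defer to the cloud for heavier, historical analysis.
            self.cloud_batch.append(reading)

dispatcher = HybridDispatcher()
dispatcher.handle(Reading("heart_pump_pressure", 92.5, time_sensitive=True))
dispatcher.handle(Reading("daily_energy_use", 14.2, time_sensitive=False))
```

The design choice here mirrors the article’s point: the edge path trades analytical depth for immediacy, while the batched cloud path preserves the elasticity and long-term storage that centralized resources do best.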