Technology keeps expanding, and so does the vocabulary around it. Today we look at one such term: edge computing.
What Is Edge Computing?
“A portion of a distributed computing topology in which information processing is positioned near to the edge — where things and people produce or consume that information,” according to Gartner.
Edge computing systems accelerate the creation and support of real-time applications such as video processing and analytics, self-driving cars, artificial intelligence, and robotics, to name a few.
At first, there was only One Big Computer. Then, during the Unix era, we learned how to use dumb (not derogatory) terminals to connect to that computer. Then came personal computers, which marked the first time ordinary people actually owned the machines that did the work.
In 2018, we are thoroughly ensconced in the cloud computing era. Although many of us still have personal computers, we largely use them to access centralized services such as Dropbox, Gmail, Office 365, and Slack. Furthermore, unlike the DVD box set of Little House on the Prairie or the CD-ROM copy of Encarta you could have enjoyed in the personal computing age, gadgets like Amazon Echo, Google Chromecast, and Apple TV are driven by cloud-based content and intelligence.
As centralized as this all seems, the truly astounding thing about cloud computing is that a significant portion of the world’s corporations now relies on the infrastructure, hosting, machine learning, and processing power of a small number of cloud providers: Amazon, Microsoft, Google, and IBM.
The recognition by these corporations that there isn’t much growth left in the cloud space has given rise to edge computing as a buzzword worth paying attention to. Almost everything that can be centralized has already been centralized. Most of the “cloud’s” new opportunities now lie at the “edge.”
So, what is the edge?
In this context, the term “edge” refers to geographic distribution. Instead of relying on the cloud at one of a dozen data centers to do all the work, edge computing is computation done at or near the source of the data. This does not mean the cloud will vanish. It means the cloud is coming closer to you.
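The idea of “computation at or near the source of the data” can be sketched in a few lines. In this toy example (the names `summarize` and `send_to_cloud` are illustrative, not from any real framework), an edge node aggregates raw sensor readings locally and ships only a compact summary to a distant data center, rather than streaming every reading:

```python
def summarize(readings):
    """Aggregate raw sensor readings locally, at the 'edge'."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

def send_to_cloud(summary):
    # Stand-in for an upload; a real edge node would batch, sign, and retry.
    print(f"uploading {summary}")

# Instead of sending thousands of raw readings to the cloud,
# the edge node ships one small summary.
readings = [20.1, 20.4, 19.8, 21.0]
send_to_cloud(summarize(readings))
```

The payoff is lower bandwidth and lower latency: the heavy lifting happens where the data is produced, and the cloud receives only what it needs.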
With that in mind, let’s move on from word definitions and look at what people mean in practice when they praise edge computing.
Privacy and security
Although it may seem strange to think of an iPhone’s security and privacy features as an example of edge computing, it is a widely cited one. Simply by performing encryption and storing biometric information on the device, Apple offloads a great deal of security concern from the centralized cloud onto its users’ far-flung devices.
However, while the compute work is distributed, the definition of that work is maintained centrally, which makes this feel like edge computing to me rather than personal computing. To keep your iPhone secure, you didn’t have to cobble together the necessary hardware, software, and security best practices. You just paid $999 for a cellphone that recognizes your face after you trained it to do so.
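The on-device pattern this example illustrates can be sketched briefly: sensitive material (here a stand-in byte string for a biometric template) never leaves the device; only a salted, non-reversible digest is stored, and matching happens locally. This is a toy illustration using Python’s standard library, not Apple’s actual Secure Enclave design:

```python
import hashlib
import hmac
import os

def enroll(secret: bytes):
    """Store only a salted, one-way digest of the sensitive data on-device."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return salt, digest

def verify(candidate: bytes, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest locally and compare in constant time.

    The raw secret is never transmitted or persisted anywhere.
    """
    probe = hashlib.pbkdf2_hmac("sha256", candidate, salt, 100_000)
    return hmac.compare_digest(probe, digest)

salt, digest = enroll(b"face-template-bytes")
print(verify(b"face-template-bytes", salt, digest))  # True
print(verify(b"someone-else", salt, digest))         # False
```

The design choice mirrors the edge argument: the device does the compute-heavy, privacy-sensitive work, while the centrally defined policy (what to hash, how to compare) ships with the software.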
For security, the management component of edge computing is critical. Consider the agony and suffering that consumers have endured as a result of poorly managed Internet of Things devices.