What Is Edge Computing? Why It’s Important and How It Works | eWEEK
In a nutshell, edge computing is any computing that occurs at the edge of the network rather than on a centralized server.
If you dig deeper into edge, you’ll see that edge computing deployments – often supported by cloud computing providers – are part of a distributed infrastructure that places compute power closer to the people and devices that produce or consume data.
Key to the idea of edge, whether your deployment supports machine learning, artificial intelligence, or data analytics, is that it extends resources far outside the once-dominant datacenter. Edge is forward-looking today in much the same way the datacenter was a decade or so ago.
The most important part of edge technology is that it’s a form of distributed computing. If you look back at computer history, you can see a cycle between more centralized computing (like the early mainframes) and more distributed models (like networked PCs). In recent years, the trend toward cloud computing has been a move to a more diffuse, multicloud computing model. The newer trend toward edge computing is a further extension of that distributed model.
Edge Computing Examples
You might not realize it, but you probably interact with devices leveraging edge computing every day. For example, if you work in a remote office/branch office (ROBO) environment with your own computing infrastructure, that’s an example of edge computing.
The smartphone you have in your pocket does edge computing. So does your car. Your printer. Probably your TV.
Here’s a non-exhaustive list of edge computing devices:
- Laptop and desktop PCs
- Edge servers
- Gaming systems
- Smart routers
- Smart appliances
- Smart home speakers
- Computerized medical devices
- Connected cars
- Autonomous vehicles
- Smart traffic lights
- Smart agriculture
- Smart grid
- Cell phone towers
- Point-of-sale devices
- Internet of Things (IoT) gateways
- Industrial Internet of Things (IIoT) gateways
- Military and defense vehicles and weapons
Edge Computing Architecture
Because there are so many different kinds of edge devices, there is no single edge architecture that covers all use cases. In general, however, most edge computing deployments share certain common characteristics.
First, edge devices usually collect data from sensors. Those sensors might be part of the device itself (as in the case of smartphones and autonomous vehicles) or they might be separate (as in the case of gaming systems and many IIoT deployments).
Then the edge device does some processing and storage locally. In theory, a device could store the data at the edge indefinitely, but in most deployments, the device then sends a portion of the data up to the cloud for additional processing and analytics. Other devices and users can then access the processed data via the cloud.
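The collect → process locally → forward-a-subset pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not a real edge framework: the sensor readings, the `process_readings` function, and the anomaly threshold are all made-up assumptions.

```python
import statistics

def process_readings(readings, threshold=75.0):
    """Process raw sensor readings locally at the edge.

    Builds a compact summary plus any anomalous values; only this
    subset would be forwarded to the cloud, not every raw reading.
    """
    summary = {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    }
    anomalies = [r for r in readings if r > threshold]
    return summary, anomalies

# Simulated raw data collected at the edge (e.g., a temperature sensor).
raw = [70.1, 70.4, 71.0, 88.2, 70.6]
summary, anomalies = process_readings(raw)

# Instead of uploading five raw readings, the device uploads one
# summary and one anomaly; the bandwidth saving grows with volume.
payload_to_cloud = {"summary": summary, "anomalies": anomalies}
```

The design choice here is typical of edge deployments: the full-fidelity data stays (or is discarded) locally, while only the distilled, decision-relevant portion travels over the network.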
It might be easier to understand this architecture by considering a particular use case. Think about the tablet-style kiosks you might see at each table in a chain restaurant. These edge devices collect data input by users, such as order information, payment details, and/or survey responses.
Those tablets then transmit all that data via Wi-Fi to a centralized server in the restaurant. That server processes and stores data, as well as forwarding it to various Internet-connected servers that process payments, monitor company financials, and analyze customer orders and survey responses. Administrators and business managers can then access that cloud-based data through various applications.
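As a rough illustration of the restaurant scenario, the in-store server’s role – receiving kiosk payloads, keeping the complete record locally, and routing only the relevant slice of data to each cloud-side service – might look like the sketch below. The field names and routing logic are hypothetical.

```python
# Hypothetical in-store edge server: receives kiosk payloads, keeps
# the full record on-site, and routes slices to cloud-bound queues.

local_store = []       # complete records stay at the edge
payments_queue = []    # only payment details go to the payment processor
analytics_queue = []   # only order/survey data goes to analytics

def handle_kiosk_payload(payload):
    local_store.append(payload)  # full record retained locally
    payments_queue.append({
        "table": payload["table"],
        "payment": payload["payment"],
    })
    analytics_queue.append({
        "order": payload["order"],
        "survey": payload.get("survey"),  # surveys are optional
    })

# One tablet submits an order over the in-store network.
handle_kiosk_payload({
    "table": 12,
    "order": ["burger", "soda"],
    "payment": {"method": "card", "amount": 18.50},
    "survey": {"rating": 5},
})
```

Note that each downstream service sees only the fields it needs, which mirrors how the in-store server forwards payment data, financials, and survey responses to different Internet-connected systems.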
This combination of edge computing and cloud computing is becoming increasingly common in a variety of different use cases and industries.
What Are the Benefits of Edge Computing?
Edge computing offers a number of benefits over centralized computing models, including the following:
- Speed. If you process data near where it is generated, you don’t have to wait for it to go up to the cloud and back again. This reduction in latency results in faster performance.
- Reduced network loads. Today’s devices are generating so much data that it can be difficult for networks to keep up. Doing more processing at the edge reduces network bandwidth loads, freeing up capacity for the most important workloads.
- Reduced costs. Transmitting less data can also result in lower data transmission costs. This can be significant, particularly in parts of the world where mobile data fees are high.
- Improved security. If you store and process all your data in one location, that gives attackers a big, attractive target, but edge computing makes it less likely that attackers will gain a huge trove of data in a single breach. In addition, the distributed nature of edge computing makes distributed denial-of-service (DDoS) attacks more difficult.
- Compliance. Some regions of the world have data protection laws that require data to be stored and processed in the area where it was created. Edge computing can make it easier for organizations to comply with these regulations.
- Better reliability. Spreading your data across multiple physical locations is a fundamental tenet of disaster recovery/business continuity (DR/BC) best practices. If many smaller devices are processing your workloads, you are less likely to experience a catastrophic failure if a single device goes offline.
- Unique products and services. Edge computing makes possible a number of mobile devices that wouldn’t otherwise be available to end users. Consumer demand for “smart” products continues to rise, and these products rely on edge computing.
- Improved monitoring. Edge computing also allows businesses to keep track of a lot of things they wouldn’t otherwise be able to track. This is particularly important for smart factories, smart agriculture, smart grids, and IIoT use cases.
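To make the “reduced network loads” and “reduced costs” points concrete, here is a back-of-the-envelope calculation. Every figure in it (reading size, sampling rate, summary size, price per gigabyte) is an illustrative assumption, not a quoted price.

```python
# Back-of-the-envelope bandwidth/cost comparison for one device.
# All figures are illustrative assumptions.

reading_bytes = 200        # size of one raw sensor reading
readings_per_sec = 10      # sampling rate
seconds_per_day = 86_400
price_per_gb = 0.10        # hypothetical transfer cost, USD

# Uploading every raw reading to the cloud:
raw_per_day_gb = reading_bytes * readings_per_sec * seconds_per_day / 1e9

# With edge processing, suppose the device uploads one 1 KB
# summary per minute instead of every raw reading:
summary_bytes = 1_000
summaries_per_day = 24 * 60
edge_per_day_gb = summary_bytes * summaries_per_day / 1e9

print(f"raw upload:   {raw_per_day_gb:.4f} GB/day")
print(f"edge upload:  {edge_per_day_gb:.6f} GB/day")
print(f"daily saving: ${(raw_per_day_gb - edge_per_day_gb) * price_per_gb:.4f}")
```

Even at these modest assumed rates, local processing cuts the upload volume by roughly two orders of magnitude per device; across thousands of devices, that difference dominates both bandwidth budgets and transfer bills.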
What Are the Challenges of Edge Computing?
As you might expect, edge computing also has some downsides. Here are some of the most significant:
- Increased maintenance burdens. If your enterprise has dozens or hundreds or even thousands of edge computing devices, your staff then needs to maintain all those devices. That can add more burden to IT departments and require staff to travel to a lot of different locations, all of which can increase costs.
- Security risks. As mentioned above, edge computing does decrease some security risks, but it also creates some others. In some cases, attackers might be able to gain access to networks by compromising an edge device. And instead of having a few centralized systems to protect, teams must now secure many smaller devices. Ensuring that each device has an adequate level of protection can be time-consuming and costly.
- Missing data. If you transmit only a subset of your data to the cloud after it has been processed at the edge, it’s possible that you might be missing a critical piece of information. Organizations need to design their edge computing environments with care to ensure that they have access to all relevant information when they need it.
- Scalability. Scaling out your edge computing network requires you to deploy new hardware, which is generally more difficult than scaling in a cloud environment, where additional capacity can be provisioned on demand.
- Environmental challenges. Cloud servers live in highly controlled data centers, but edge devices are often outdoors where they can be affected by weather, dust, pollution, human beings, or even animals. Depending on the use case, you might need to design your edge devices so they can withstand being hit by lightning, run over by a truck, or nibbled by wildlife.
- Lack of standards. In many edge computing use cases, industries have not yet settled on one standard for key pieces of the technology stack. This makes it difficult to achieve interoperability, and it exposes organizations to the risk that they might bet on a technology that soon becomes obsolete.
- Logistical hurdles. Getting an edge computing environment up and running can be difficult from both a technological and human resources point of view. For example, organizations may face challenges in powering devices, ensuring that devices turn on automatically when necessary, or even finding room for devices in use cases where physical space is limited. While these hurdles are not insurmountable, organizations should consider them before embarking on an edge computing initiative.
Why Is Edge Computing Important?
Despite these challenges, enterprises should be paying attention to the edge computing trend and considering how their company might participate. Here’s why:
- The market for edge computing devices is huge—and growing. According to IDC, the edge computing market will be worth about $250.6 billion in 2024. And the market is growing every year. Your competitors are certainly participating in this trend, and if you want to keep up, you need to be examining how your company could profit.
- Edge computing complements cloud computing. For more than a decade, cloud computing has been on the rise. But it doesn’t make financial sense to do all your data processing and storage in the cloud—and it might not be feasible for security or compliance reasons, either. A solid edge computing strategy is often a necessary complement to a good cloud computing strategy.
- Edge computing can make enterprises more efficient. As previously mentioned, edge computing can save both time and money. Organizations should carefully examine ROI to determine where edge computing might make sense for their operations.
- Edge computing enables new products and services. Consumers and businesses alike are looking to purchase new products and services that integrate computing capabilities into daily life. Edge computing can open up new business models and new ways of serving customers.
- Edge computing can make life better. Edge computing drives a whole lot of innovation that makes the world safer and more enjoyable. It helps make cars safer, shopping and dining more convenient, farms and factories more productive, supply chains more efficient, and living rooms more fun.
- Edge computing is here to stay. Given the tremendous benefits offered by edge computing and how deeply it is integrated into daily life, it’s highly unlikely that edge devices will be going away anytime soon. Instead, as with cloud computing, it’s likely that we’ll just begin to think of edge computing as “computing.”