Living on the edge

Jan Bosch
Oct 20, 2019

Photo by Nathan Shipps on Unsplash

With data- and AI-driven development taking over the world, it may easily seem that the cloud is the place where everything happens. This is where the data is stored and analyzed, where the machine- and deep-learning models run and where all the value resides. From that perspective, connected devices exist only to collect data and upload it to the cloud.

There is, however, a second development that's easily missed: in an 'internet of things' world, it's what happens on the edge that's the most interesting right now. Over the last weeks, I have been exposed to, and have talked with, several companies that develop products living on the edge of the network. These products are increasingly powerful from a computing and storage perspective.

Using heterogeneous hardware, including GPUs, FPGAs and ASICs, these companies are able to develop extremely powerful solutions that are also very efficient in terms of energy usage and physical space. These solutions may lack some of the flexibility and scalability that cloud solutions provide but, at the same time, have major advantages in terms of security, reliable real-time behavior and context awareness.

Reflecting on this, I realized how naive the cloud-centric view actually is. Connected devices such as cameras, radars, mobile phones, radio base stations and autonomous cars generate enormous amounts of data, sometimes to the tune of gigabytes per second. Sending all this data into the cloud for processing is simply impossible with today's technology. And even if tomorrow's technology were able to support it, the data generated by edge devices will have grown by an order of magnitude before then.
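To make the mismatch concrete, here's an illustrative back-of-envelope calculation; the data rate and uplink bandwidth below are assumptions picked for the sake of the example, not figures from any specific product:

```python
# Illustrative back-of-envelope calculation; all figures are assumptions.
GB = 10**9  # bytes

raw_rate = 1 * GB              # assumed raw sensor output: 1 GB/s
uplink_rate = 100 * 10**6 / 8  # assumed 100 Mbit/s uplink, in bytes/s

daily_raw_tb = raw_rate * 86_400 / 10**12  # seconds per day -> terabytes
gap = raw_rate / uplink_rate

print(f"Raw data per day: ~{daily_raw_tb:.0f} TB")
print(f"The uplink would need to be ~{gap:.0f}x faster to ship everything to the cloud")
```

Even under these modest assumptions, the device produces roughly 80 times more data than the connection can carry, before we even talk about cost.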

The natural response would be to say that we should instead focus on local computation, local storage of data and local AI models. As usual, however, the answer is more complicated than that. To paraphrase Jim Collins, rather than the tyranny of the "or", we should focus on the beauty of the "and". Certain types of computation, storage and AI models function best on the edge, whereas others are best located on an on-premise server or in the cloud.

This leads to at least three implications. First, many companies find that the features and capabilities of their systems are becoming horizontalized, meaning that a new feature or capability requires functionality on the edge, on the on-premise server and in the cloud; it's no longer localized to one of these tiers. Few architects and business development folks are well versed in all of them, but the skill set to reason about the end-to-end deployment of functionality will be increasingly important.

Second, connected edge devices need to be able to support the constant evolution of functionality that’s native to the cloud. A solution where the cloud is flexible due to continuous deployment of new software but the edge devices are frozen in time will rapidly be bypassed by more flexible solutions. In the past, I’ve discussed how systems containing mechanical, electronic and software components could be architected to continuously evolve at all levels.
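As a minimal sketch of what keeping edge devices evolvable might look like, the snippet below shows a device periodically checking a backend for a newer software or model bundle. The manifest URL, its JSON format and the install step are hypothetical assumptions, not a real API:

```python
# Minimal sketch of an edge device polling for updated functionality.
# The manifest URL, its format and the install step are hypothetical assumptions.
import json
import urllib.request

UPDATE_MANIFEST_URL = "https://updates.example.com/edge/manifest.json"  # assumed endpoint

def install(bundle_url: str) -> None:
    """Placeholder: download and atomically swap in the new model/software bundle."""
    print(f"Installing bundle from {bundle_url}")

def check_for_update(installed_version: str) -> str:
    """Run one update check and return the version that's running afterwards."""
    try:
        with urllib.request.urlopen(UPDATE_MANIFEST_URL, timeout=10) as resp:
            manifest = json.load(resp)
        if manifest["version"] != installed_version:
            install(manifest["bundle_url"])
            return manifest["version"]
    except Exception as exc:
        # The device must keep operating even when the backend is unreachable.
        print(f"Update check failed, continuing with {installed_version}: {exc}")
    return installed_version

# In practice this would run on a timer, e.g. once per hour.
current_version = check_for_update("1.0.0")
```

The important design choice is that the device degrades gracefully: if the backend is unreachable, it keeps running the version it has rather than freezing or failing.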

Third, it requires a much deeper reflection on what actually matters to the customer or the next layer in the network. For instance, when working with cameras, most engineers focus on image quality and resolution. However, the more common use cases are concerned with detecting the presence or absence of humans, counting the number of people in a room or compartment, or following motion patterns. In general, detecting abnormal behavior or situations is relevant for most edge devices, and there often is little need to report when things are normal. Converting the raw data into information that the customer or next layer in the network cares about requires significant computing and storage capabilities on the network edge.
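As a minimal sketch of this idea, the snippet below keeps raw frames on the device and only sends events upstream when something changes or looks abnormal. The detect_people function stands in for whatever on-device model a real product would run, and the event format is an assumption:

```python
# Minimal sketch: keep raw frames on the edge, report only meaningful events upstream.
# detect_people() stands in for an on-device vision model; the event format is assumed.
import json
import time
from typing import List, Optional

def detect_people(frame: bytes) -> int:
    """Placeholder for a person detector running on the device's GPU/FPGA/ASIC."""
    return 0  # assume an empty scene in this sketch

def send_upstream(event: dict) -> None:
    """Placeholder for publishing an event to the on-premise server or cloud."""
    print(json.dumps(event))

def process_stream(frames: List[bytes], expected_max: int = 10) -> None:
    last_count: Optional[int] = None
    for frame in frames:
        count = detect_people(frame)
        abnormal = count > expected_max
        # Only report changes or abnormal situations; silence means "all normal".
        if count != last_count or abnormal:
            send_upstream({"ts": time.time(), "people": count, "abnormal": abnormal})
            last_count = count

process_stream([b"frame-1", b"frame-2", b"frame-3"])  # emits one event for this dummy stream
```

The raw gigabytes stay on the device; what travels over the network is a handful of bytes that the customer or the next layer actually cares about.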

Concluding, the cloud-centric perspective that many hold is actually a rather naive point of view. We need to consider the end-to-end architecture of systems and decide what, at this point in the evolution of technology, the right allocation of functionality across the edge, the on-premise servers and the cloud is. The realization that I came to this week is that the capabilities on the edge go way beyond what many people realize. So, I encourage you to live on the edge because it's a pretty interesting and fun place to be.

To get more insights earlier, sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch) or Twitter (@JanBosch).

Written by Jan Bosch

Academic, angel investor, board member and advisor working on the boundary of business and (software) technology
