within the Industrial Doctoral School at Umeå University.
The explosive growth of mobile and Internet of Things (IoT) devices, the computational needs of the applications they run, and the amount of data they produce require high performance computing capabilities near these devices to handle their data and computations.
Today, it is already common to offload parts of the computations and data storage from these devices to remote large-scale cloud datacenters. Every time one uses a mobile application that involves, for example, voice recognition, such as Apple's Siri, a connection is established to a cloud datacenter. However, offloading to remote large-scale clouds has fundamental communication limitations, as the large latencies involved restrict the kinds of applications such offloading can support.
The edge cloud is a novel architecture in which small-scale cloud resources are added at the edge of a mobile network, close to the users, unifying the management and control of the access network, the resources added at the edge, and the remote large-scale datacenters. In essence, the edge cloud moves processing and intelligence closer to the end-users or the origin of the data.
However, due to user mobility, hardware heterogeneity, and increased flexibility in deciding where computing capacity can be used, the edge cloud brings significant challenges in analyzing, predicting, and controlling resource usage and allocation to optimize cost and performance while delivering the expected end-user Quality-of-Service (QoS). In addition, new applications, such as 360-degree video streaming, real-time video analytics, and self-driving cars, add further complexity to the problem of edge resource management.
Finally, the performance focus of edge-based services has implications for how end-user connectivity between edge datacenters and end-users is configured. The resource management of edge clouds must therefore be coordinated with connectivity configuration, for example, to maintain session continuity as users move between edge datacenters.