Why This Matters

IoT applications increasingly span edge devices, fog resources, and cloud infrastructure, requiring dynamic resource management that accounts for latency, mobility, and energy constraints. Traditional cloud-centric approaches cannot meet the latency requirements of interactive IoT services, while purely edge-based execution is limited by device resource constraints and battery drain. URMILA addresses these challenges through latency estimation and intelligent fog server selection.

What We Did

This paper presents URMILA (Ubiquitous Resource Management for Interference and Latency-Aware services), a resource management middleware for IoT applications deployed across fog and edge infrastructure. The work addresses the challenge of maintaining service quality while managing resources across distributed edge devices with varying capabilities.

Key Results

The paper demonstrates URMILA's design and implementation through experimental evaluation of cognitive navigation and smart mobility services. Results show effective latency estimation and fog server selection based on user mobility and available resources. The middleware successfully maintains service quality while minimizing bandwidth consumption and energy usage.

Full Abstract

Fog/Edge computing is increasingly used to support a wide range of latency-sensitive Internet of Things (IoT) applications due to its elastic computing capabilities that are offered closer to the users. Despite this promise, IoT applications with user mobility face many challenges since offloading the application functionality from the edge to the fog may not always be feasible due to the intermittent connectivity to the fog, and could require application migration among fog nodes due to user mobility. Likewise, executing the applications exclusively on the edge may not be feasible due to resource constraints and battery drain. To address these challenges, this paper describes URMILA, a resource management middleware that makes effective tradeoffs between using fog and edge resources while ensuring that the latency requirements of the IoT applications are met. We evaluate URMILA in the context of a real-world use case on an emulated but realistic IoT testbed.

Cite This Paper

@inproceedings{Shekhar2019a,
  author = {Shekhar, Shashank and Chhokra, Ajay and Sun, Hongyang and Gokhale, Aniruddha and Dubey, Abhishek and Koutsoukos, Xenofon D.},
  booktitle = {{IEEE} 22nd International Symposium on Real-Time Distributed Computing, {ISORC} 2019, Valencia, Spain, May 7-9, 2019},
  title = {{URMILA:} {A} Performance and Mobility-Aware Fog/Edge Resource Management Middleware},
  year = {2019},
  pages = {118--125},
  abstract = {Fog/Edge computing is increasingly used to support a wide range of latency-sensitive Internet of Things (IoT) applications due to its elastic computing capabilities that are offered closer to the users. Despite this promise, IoT applications with user mobility face many challenges since offloading the application functionality from the edge to the fog may not always be feasible due to the intermittent connectivity to the fog, and could require application migration among fog nodes due to user mobility. Likewise, executing the applications exclusively on the edge may not be feasible due to resource constraints and battery drain. To address these challenges, this paper describes URMILA, a resource management middleware that makes effective tradeoffs between using fog and edge resources while ensuring that the latency requirements of the IoT applications are met. We evaluate URMILA in the context of a real-world use case on an emulated but realistic IoT testbed.},
  bibsource = {dblp computer science bibliography, https://dblp.org},
  biburl = {https://dblp.org/rec/bib/conf/isorc/ShekharCSGDK19},
  category = {selectiveconference},
  contribution = {minor},
  doi = {10.1109/ISORC.2019.00033},
  file = {:Shekhar2019a-URMILA_A_Performance_and_Mobility-Aware_Fog_Edge_Resource_Management_Middleware.pdf:PDF},
  keywords = {fog computing, edge computing, resource management, middleware, IoT, service latency, user mobility},
  project = {cps-middleware},
  timestamp = {Wed, 16 Oct 2019 14:14:53 +0200},
  url = {https://doi.org/10.1109/ISORC.2019.00033}
}
Quick Info
Year 2019
Keywords
fog computing, edge computing, resource management, middleware, IoT, service latency, user mobility
Research Areas
middleware, CPS, scalable AI
Search Tags

URMILA, Performance, Mobility, Aware, Fog/Edge, Resource, Management, Middleware, fog computing, edge computing, resource management, middleware, IoT, service latency, user mobility, CPS, scalable AI, 2019, Shekhar, Chhokra, Sun, Gokhale, Dubey, Koutsoukos