Enhancing Latency Performance through Edge-Enabled Cloud Architectures for Real-Time Data-Intensive Applications in Distributed Computing Environments

Authors

  • Ijeoma Oluwaseun Nnaji, Cloud Solutions Architect, Nigeria

Keywords

Edge computing, Cloud architecture, Real-time applications, Latency reduction, Distributed systems, Hybrid computing, Data-intensive processing, QoS optimization

Abstract

Purpose

The proliferation of real-time data-intensive applications in distributed computing environments necessitates architectural innovations that address latency and scalability challenges. This paper explores the integration of edge-enabled cloud architectures to enhance latency performance, aiming to optimize resource provisioning and task offloading for real-time responsiveness.

Design/methodology/approach

A hybrid architectural model is proposed, combining edge computing for immediate local data processing with cloud infrastructures for deep analytics and storage. Simulated experiments and performance modeling are employed to evaluate latency improvements in varied distributed computing workloads. The methodology incorporates network performance metrics and resource allocation strategies.
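The task-offloading decision at the heart of such a hybrid model can be illustrated as a simple latency-cost comparison between an edge node and a remote cloud. The sketch below is a minimal illustration only; all parameter values (link rates, round-trip times, per-MB processing costs) are invented placeholders, not figures from the study.

```python
# Illustrative sketch: route a task to edge or cloud by comparing
# estimated end-to-end latency. All numeric parameters are assumed
# placeholders for illustration, not values from the paper.

def estimated_latency_ms(data_mb, link_mbps, rtt_ms, proc_ms_per_mb):
    """Transmission delay + round-trip time + processing time, in ms."""
    transmission_ms = data_mb * 8 / link_mbps * 1000
    return transmission_ms + rtt_ms + data_mb * proc_ms_per_mb

def offload_target(data_mb):
    # Edge: nearby (low RTT) but slower compute; cloud: distant but faster.
    edge = estimated_latency_ms(data_mb, link_mbps=100, rtt_ms=2, proc_ms_per_mb=20)
    cloud = estimated_latency_ms(data_mb, link_mbps=100, rtt_ms=40, proc_ms_per_mb=2)
    return ("edge", edge) if edge <= cloud else ("cloud", cloud)

print(offload_target(data_mb=1))  # small payload: edge wins on RTT
print(offload_target(data_mb=5))  # compute-heavy payload: cloud wins
```

Under these assumed parameters the crossover occurs when the cloud's processing advantage outweighs its extra round-trip delay, which is the trade-off the hybrid model exploits.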

Findings

Results demonstrate a significant reduction in average latency, jitter, and packet loss when leveraging edge-enabled cloud architectures. The hybrid model ensures improved Quality of Service (QoS), especially in applications requiring low-latency guarantees such as autonomous systems, smart manufacturing, and real-time video analytics.
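As a hedged illustration of how the three reported metrics can be derived from raw measurements, the sketch below computes mean latency, jitter (here taken as the mean absolute difference between consecutive delays, in the spirit of RFC 3550), and packet-loss ratio from a hypothetical per-packet trace; the sample values are invented.

```python
# Hypothetical trace of per-packet one-way delays in ms (None = packet lost).
samples = [12.1, 11.8, 13.0, None, 12.4, 15.2, None, 12.0]

delays = [d for d in samples if d is not None]
mean_latency = sum(delays) / len(delays)
# Jitter: mean absolute difference between consecutive received packets.
jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
loss_ratio = samples.count(None) / len(samples)

print(f"latency={mean_latency:.2f} ms, jitter={jitter:.2f} ms, loss={loss_ratio:.1%}")
```

In an evaluation like the one described, these statistics would be collected per workload configuration and compared between the cloud-only baseline and the edge-enabled deployment.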

Practical implications

The proposed architecture offers an operational model for industries requiring time-sensitive computing solutions. It can be adopted in intelligent transportation systems, emergency response coordination, and industrial IoT environments to increase reliability and responsiveness.

Originality/value

This work contributes to the ongoing evolution of distributed computing by proposing a cohesive architectural framework that operationalizes edge-cloud integration. It fills a gap in practical latency management strategies for data-intensive, real-time workloads in heterogeneous environments.


Published

2026-01-06