
Energy Efficient Edge Computing

KHOSHSIRAT, ARIA
2025

Abstract

This work addresses the challenge of improving energy efficiency in edge computing, a field gaining attention as demand for low-latency, privacy-preserving, and reliable computing at the network edge continues to grow. Edge computing systems are well suited to applications that require real-time processing while reducing data transmission to centralized cloud servers. However, the limited energy resources available to edge devices present a major challenge. To address this, the research is structured around three key components: energy measurement, energy profiling, and energy optimization. The thesis begins by developing precise energy measurement techniques designed specifically for NVIDIA Jetson edge devices, which are widely used for AI inference at the edge. These techniques enable accurate quantification of power use, providing essential insight into the energy dynamics of edge computing tasks and establishing a foundation for subsequent optimization efforts. Next, it explores energy profiling, analyzing the impact of model parameters on energy consumption. This analysis focuses on neural network inference, identifying how factors such as kernel size, input data characteristics, and number of neurons affect energy consumption at the edge. By characterizing these relationships, the work provides practical guidance for assessing the power and energy use of edge inference in real-world applications. To improve the overall energy efficiency of edge computing, the thesis then introduces several optimization strategies. These include techniques for distributing workloads across containers on edge devices, enabling more efficient resource use, reduced computation time, and lower energy consumption. In addition, decentralized large language model inference with an energy-harvesting mechanism is explored on battery-powered edge devices, offering new ways to enhance energy sustainability in edge networks.
Job scheduling algorithms based on a semi-Markov model are also developed, balancing processing capacity against energy consumption in distributed edge inference systems. The findings of this research contribute to more sustainable and efficient edge computing systems by providing practical, data-driven solutions for reducing energy consumption and incorporating energy harvesting, while maintaining the reliability and performance of edge networks. By lowering energy usage, this work helps mitigate the environmental impact of edge systems, contributing to a greener approach in computing.
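To make the energy measurement component above concrete, the following is a minimal Python sketch of sampling and aggregating power readings. It assumes a sysfs-style text file exposing instantaneous power in milliwatts, similar in spirit to the on-board power monitors on Jetson devices; the exact sensor file paths differ across Jetson models and software versions, so any path passed to `read_power_mw` here is purely illustrative, not the thesis's actual measurement method.

```python
from pathlib import Path


def read_power_mw(sensor_file):
    """Parse one instantaneous power reading (milliwatts) from a
    sysfs-style text file containing a single integer value."""
    return int(Path(sensor_file).read_text().strip())


def average_power_mw(samples):
    """Average a sequence of instantaneous power samples (mW)."""
    samples = list(samples)
    return sum(samples) / len(samples) if samples else 0.0


def energy_joules(avg_power_mw, duration_s):
    """Energy = average power x elapsed time (mW converted to W)."""
    return (avg_power_mw / 1000.0) * duration_s
```

For example, an average draw of 5 W sustained over a 10-second inference window corresponds to 50 J of consumed energy.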
Defense date: 24 March 2025
Language: English
Supervisor: ROSSI, MICHELE
Università degli studi di Padova
File in this record:
Thesis_Aria_Final.pdf (Adobe PDF, 15 MB, open access)
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/199687
The NBN code of this thesis is URN:NBN:IT:UNIPD-199687