
Optimizing Resource Efficiency in the Cloud-Edge Continuum

LAHMER, SEYYID AHMED
2025

Abstract

The rise of data-intensive applications such as \gls{ai}, \gls{iot}, and 5G, coupled with the disaggregation of data centers, has placed immense pressure on network infrastructure. This has led to the cloud-edge continuum, where computational resources span from edge devices to centralized data centers. This thesis addresses the challenge of optimizing resource efficiency across this continuum. First, we explore how \gls{ai} can be leveraged for intelligent orchestration of network resources. Although \gls{ai}-driven orchestration holds great promise, it often overlooks the cost of learning: the resources consumed by \gls{ai} tasks such as training and decision-making. To bridge this gap, we introduce a cost-aware resource management model that balances resource allocation efficiency and \gls{ai} performance. This model ensures that \gls{ai} tasks (the learning plane) do not consume resources needed for user-facing services (the data plane). Building on this, the second part addresses execution efficiency to complement intelligent scheduling. Although \gls{ai}-driven orchestration optimizes resource utilization, its gains can be overshadowed by inefficiencies in CPU execution of virtual network functions. As data volumes grow, CPUs struggle to meet processing demands, leading to high per-packet costs in latency, power consumption, and other resources. This challenge highlights the need for programmable hardware such as FPGAs and SmartNICs. Although these specialized accelerators offer high performance, they often lack runtime flexibility, limiting their adaptability to changing workloads. To overcome this, we introduce an abstraction layer that virtualizes match tables, a critical component of packet processors. This framework provides runtime configurability and flexibility, enabling scalable and efficient networking systems.
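To illustrate the idea of match-table virtualization mentioned above, the sketch below shows one common way such an abstraction layer can work: several logical match tables are multiplexed onto a single shared physical table by namespacing keys with a table ID, so logical tables can be created or grown at runtime without reprogramming the device. This is a minimal illustrative sketch, not the thesis's actual framework; all class and method names are hypothetical.

```python
# Illustrative sketch (hypothetical names, not the thesis implementation):
# multiplexing logical match tables onto one physical exact-match table.

class PhysicalMatchTable:
    """Models a single hardware match table: exact-match key -> action."""
    def __init__(self):
        self.entries = {}

    def insert(self, key, action):
        self.entries[key] = action

    def lookup(self, key):
        return self.entries.get(key)  # None on miss


class VirtualMatchTables:
    """Abstraction layer: each logical table gets an ID, and its keys are
    tagged with that ID inside the shared physical table, so tables can be
    added or resized at runtime without touching the underlying hardware."""
    def __init__(self):
        self.physical = PhysicalMatchTable()
        self.next_id = 0

    def create_table(self):
        tid = self.next_id
        self.next_id += 1
        return tid

    def insert(self, tid, key, action):
        self.physical.insert((tid, key), action)

    def lookup(self, tid, key):
        return self.physical.lookup((tid, key))


# Usage: two logical tables (forwarding and ACL) sharing one physical table.
vmt = VirtualMatchTables()
fwd = vmt.create_table()
acl = vmt.create_table()
vmt.insert(fwd, "10.0.0.1", "port_1")
vmt.insert(acl, "10.0.0.1", "drop")
print(vmt.lookup(fwd, "10.0.0.1"))  # port_1
print(vmt.lookup(acl, "10.0.0.1"))  # drop
```

The key design point is that the physical table never changes layout: runtime flexibility comes entirely from the indirection in the abstraction layer.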
24 March 2025
English
ZANELLA, ANDREA
Università degli studi di Padova


Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/200404
The NBN code of this thesis is URN:NBN:IT:UNIPD-200404