Lightweight virtual machines (VMs) are prominently adopted for improved
performance and dependability in cloud environments. To reduce boot-up times
and resource utilisation, they are usually "pre-baked" with only the minimal
kernel and userland strictly required to run an application. This introduces a
fundamental trade-off between the advantages of lightweight VMs and available
services within a VM, usually leaning towards the former. We propose VMSH, a
hypervisor-agnostic abstraction that enables on-demand attachment of services
to a running VM -- allowing developers to provide minimal, lightweight images
without compromising their functionality. The additional applications are made
available to the guest via a file system image. To ensure that the newly added
services do not affect the original applications in the VM, VMSH uses
lightweight isolation mechanisms based on containers. We evaluate VMSH on
multiple KVM-based hypervisors and Linux LTS kernels and show that: (i) VMSH
adds no overhead for the applications running in the VM, (ii) de-bloating
images from the Docker registry can save up to 60% of their size on average,
and (iii) VMSH enables cloud providers to offer services to customers, such as
recovery shells, without interfering with their VM's execution.
@inproceedings{DBLP:conf/eurosys/ThalheimOUGB22,
author = {J{\"{o}}rg Thalheim and
Peter Okelmann and
Harshavardhan Unnibhavi and
Redha Gouicem and
Pramod Bhatotia},
title = {{VMSH:} hypervisor-agnostic guest overlays for VMs},
booktitle = {EuroSys},
pages = {678--696},
publisher = {{ACM}},
year = {2022}
}
Processing packets in batches is a common technique
in high-speed software routers to improve routing efficiency
and increase throughput. With the growing popularity of novel
paradigms such as Network Function Virtualization, which advocate
replacing hardware-based networking modules with software-based
network functions deployed on commodity servers, we observe that batching techniques have been
successfully implemented to reduce the HW/SW performance
gap. As batch creation and management is at the very core of
high-speed packet processors, it has a significant impact on
the overall packet processing capabilities of the system, affecting
latency, throughput, CPU utilization and power consumption. It
is common practice to adopt a fixed maximum batch size
(usually in the range between 32 and 512) to optimize for the
worst-case scenario (i.e. minimum-size packets at full bandwidth
capacity). Such an approach may result in a loss of efficiency despite
100% utilization of the CPU. In this work we explore the
possibilities of enhancing the runtime batch creation in VPP, a
popular software router based on the Intel DPDK framework.
Instead of relying on automatic batch creation, we apply
machine learning techniques to optimize the batching size for
lower CPU-time and higher power efficiency in average scenarios,
while maintaining its high performance in the worst case.
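The core idea of adapting the batch size at runtime can be sketched as follows. This is a deliberately simplified illustration, not the paper's ML-based approach: the function name, the utilization threshold, and the doubling/halving policy are all invented for the sake of the example.

```python
# Minimal sketch of runtime batch-size adaptation between the bounds
# commonly used in software routers (32 to 512 packets). The real work
# applies machine learning inside VPP; here a simple feedback rule
# stands in for the learned policy.

MIN_BATCH, MAX_BATCH = 32, 512

def next_batch_size(current: int, cpu_utilization: float,
                    target: float = 0.8) -> int:
    """Grow the batch when CPU utilization nears the worst case
    (amortizing per-batch overhead), shrink it otherwise to reduce
    latency and CPU time in average scenarios."""
    if cpu_utilization > target:
        return min(current * 2, MAX_BATCH)
    return max(current // 2, MIN_BATCH)
```

A controller like this would be invoked between polling cycles; the paper's contribution is replacing such a hand-tuned heuristic with a model trained on observed traffic.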
@inproceedings{okelmann2021adaptive,
  title        = {Adaptive batching for fast packet processing in software routers using machine learning},
  author       = {Okelmann, Peter and Linguaglossa, Leonardo and Geyer, Fabien and Emmerich, Paul and Carle, Georg},
  booktitle    = {2021 IEEE 7th International Conference on Network Softwarization (NetSoft)},
  pages        = {206--210},
  year         = {2021},
  organization = {IEEE}
}
IBM Watson Research Ctr - Online, 2022
Invited Talk
VMSH: Hypervisor-agnostic Guest Overlays for VMs
Intel Labs - Online, 2022
Invited Talk
VMSH: Hypervisor-agnostic Guest Overlays for VMs
ACM Eurosys'22 - Rennes, 2022
Conference Talk
VMSH: Hypervisor-agnostic Guest Overlays for VMs