Performance benchmarking of virtualized network functions to correlate key performance metrics with system activity
Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). ORCID iD: 0000-0002-6936-2435
Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). ORCID iD: 0000-0002-4825-8831
Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). ORCID iD: 0000-0001-9194-010X
Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). ORCID iD: 0000-0002-9446-8143
2020 (English). In: Proceedings of the 11th International Conference on Network of the Future, NoF 2020, IEEE, 2020, p. 73-81, article id 9249199. Conference paper, Published paper (Refereed)
Abstract [en]

Industry is set to enter a new revolution (Industry 4.0) backed by high inter-connectivity. Leveraging virtualization technology to deploy network functions as virtualized network functions (VNFs) has therefore garnered attention. It helps network operators and service providers consolidate several VNFs on fewer off-the-shelf servers, reducing capital and operational expenditures while improving resource efficiency. However, moving network functions from proprietary devices to standard servers comes at the profound cost of performance degradation. To overcome performance issues, ensure service level agreement (SLA) requirements, and validate solutions before real-world deployment, sufficient verification and validation of VNFs is required. This is where Network Service Benchmarking (NSB) plays a crucial role. NSB identifies performance-compromising bottlenecks by systematically evaluating the capacity of the general-purpose hardware resources, also known as the network function virtualization infrastructure (NFVI), used to host single or multiple VNF instances. This paper presents a benchmarking methodology and framework to extract the correlation between VNF quality of service (QoS) metrics and NFVI key performance indicators (KPIs). For evaluation, the VoerEir Touchstone platform is used to execute an iPerf-based benchmarking application that generates UDP-based workloads between VNFs. The results demonstrate that CPU utilization and L1-L3 cache memory usage are statistically correlated with packets dropped (0.43 and 0.47, respectively) and bandwidth utilization (0.99 and 0.92, respectively).
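The correlation step reported in the abstract can be illustrated with a minimal sketch: assuming the benchmarking run has produced time-aligned samples of NFVI KPIs and VNF QoS metrics in a CSV file, Pearson coefficients between each pair can be computed directly. The file name and column names below are illustrative assumptions, not artifacts of the paper or the Touchstone platform.

    # Minimal sketch: correlate NFVI KPIs with VNF QoS metrics from a benchmarking run.
    # The CSV layout and column names are illustrative assumptions, not from the paper.
    import pandas as pd

    samples = pd.read_csv("benchmark_samples.csv")  # one row per sampling interval (hypothetical file)

    kpi_columns = ["cpu_utilization", "l1_cache_usage", "l2_cache_usage", "l3_cache_usage"]
    qos_columns = ["packets_dropped", "bandwidth_utilization"]

    # Pearson correlation between every (KPI, QoS metric) pair.
    correlations = samples[kpi_columns + qos_columns].corr(method="pearson")
    print(correlations.loc[kpi_columns, qos_columns])

A strong relationship, such as the 0.99 reported between CPU utilization and bandwidth utilization, would appear as a coefficient close to 1 in the resulting table.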

Place, publisher, year, edition, pages
IEEE, 2020. p. 73-81, article id 9249199
Keywords [en]
Bandwidth Utilization, Benchmarking, Cache Memory, Correlation, CPU, NFVI, Packets Dropped, UDP, VNF, Function evaluation, Quality of service, Transfer functions, Benchmarking methodology, Key performance indicators, Operational expenditures, Performance benchmarking, Performance degradation, Service Level Agreements, Verification-and-validation, Virtualization technologies, Network function virtualization
National Category
Computer Sciences; Communication Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kau:diva-83129
DOI: 10.1109/NoF50125.2020.9249199
Scopus ID: 2-s2.0-85097615214
ISBN: 9781728180557 (print)
OAI: oai:DiVA.org:kau-83129
DiVA, id: diva2:1530085
Conference
11th International Conference on Network of the Future, NoF 2020, 12 October 2020 through 14 October 2020
Available from: 2021-02-21. Created: 2021-02-21. Last updated: 2023-11-14. Bibliographically approved.
In thesis
1. Performance Modelling and Simulation of Service Chains for Telecom Clouds
2021 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

New services and ever-increasing traffic volumes require the next generation of mobile networks, e.g. 5G, to be much more flexible and scalable. The primary enabler of this flexibility is transforming network functions from proprietary hardware into software using modern virtualization technologies, paving the way for virtual network functions (VNFs). Such VNFs can then be flexibly deployed in cloud data centers while traffic is routed along a chain of VNFs through software-defined networks. However, this flexibility comes with the new challenge of allocating computational resources to each VNF efficiently and placing them optimally on a cluster.

In this thesis, we argue that achieving an autonomous and efficient performance optimization method requires a solid understanding of the underlying system, the service chains, and the upcoming traffic. We therefore conducted a series of focused studies to address the scalability and performance issues in three stages. We first introduce an automated profiling and benchmarking framework, named NFV-Inspector, to measure and collect system KPIs as well as extract various insights from the system. Then, we propose systematic methods and algorithms for performance modelling and resource recommendation of cloud native network functions and evaluate them on a real 5G testbed. Finally, we design and implement a bottom-up performance simulator named PerfSim to approximate the performance of service chains based on the nodes' performance models and user-defined scenarios.
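The modelling and resource recommendation stage is only summarized here; as a rough, hypothetical illustration, a regression model could be fitted on profiled (resources, latency) samples and then inverted to pick the smallest allocation that still satisfies an SLA target. The sample data, feature names, SLA threshold, and cost function below are all assumptions made for the sketch, not results from the thesis.

    # Sketch: regression-based resource recommendation for a cloud native function.
    # Training samples, SLA threshold, and cost function are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Profiled samples: (cpu_cores, memory_gb) -> observed 99th-percentile latency (ms).
    X = np.array([[1, 2], [2, 2], [2, 4], [4, 4], [4, 8]])
    latency_ms = np.array([42.0, 25.0, 21.0, 12.0, 9.5])

    model = LinearRegression().fit(X, latency_ms)

    # Recommend the cheapest candidate allocation whose predicted latency meets the SLA.
    sla_target_ms = 15.0
    candidates = [(cores, mem) for cores in (1, 2, 4) for mem in (2, 4, 8)]
    feasible = [c for c in candidates if model.predict(np.array([c]))[0] <= sla_target_ms]
    recommended = min(feasible, key=lambda alloc: alloc[0] + alloc[1] / 4)  # toy cost function
    print("Recommended (cores, memory_gb):", recommended)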

Place, publisher, year, edition, pages
Karlstad: Karlstads universitet, 2021. p. 23
Series
Karlstad University Studies, ISSN 1403-8099 ; 2021:14
Keywords
Performance Modelling, Performance Optimization, Performance Simulation, Network Function Virtualization, Cloud Native Computing
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-83687 (URN), 978-91-7867-199-1 (ISBN), 978-91-7867-209-7 (ISBN)
Presentation
2021-06-09, 21A342, Universitetsgatan 2, 651 88 Karlstad, Karlstad, 09:00 (English)
Note

Article 5, included in the thesis as a manuscript, has since been published.

Available from: 2021-05-18. Created: 2021-04-16. Last updated: 2022-03-04. Bibliographically approved.
2. Unchaining Microservice Chains: Machine Learning Driven Optimization in Cloud Native Systems
2023 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

As the cloud native landscape flourishes, microservices emerge as a central pillar for contemporary software development, enabling agility, resilience, and scalability in modern computing environments. While these modular services promise opportunities, particularly in the transformative ecosystem of 5G and beyond, they also introduce a myriad of complexities. Notably, the migration from hardware-centric to software-defined environments, culminating in Virtual Network Functions (VNF), has facilitated dynamic deployments across cloud data centers. In this transition, VNFs are often deployed within cloud native environments as independent services, mirroring the microservices model. However, the advantage of flexibility in cloud native systems is shadowed by bottlenecks in computational resource allocation, sub-optimal service chain placements, and the perpetual quest for performance enhancement. Addressing these concerns is not just pivotal but indispensable for harnessing the true potential of microservice chains.

In this thesis, the inherent challenges presented by cloud native microservice chains are addressed through the development and application of various tools and methodologies. NFV-Inspector is introduced as a foundational tool, employing a systematic approach to profile and analyze Virtual Network Functions and extract the system KPIs essential for further modeling. Subsequent research introduces a Machine Learning (ML) based, SLA-aware resource recommendation system for cloud native functions, which leverages regression modeling techniques to correlate key performance metrics. Following this, PerfSim is proposed as a performance simulation tool designed specifically for cloud native computing environments, aiming to improve the accuracy of microservice chain simulations. Further research is conducted on Service Function Chain (SFC) placement, emphasizing the equilibrium between cost efficiency and latency optimization. The thesis concludes by integrating Deep Learning (DL) techniques for service chain optimization, employing both Graph Attention Networks (GAT) and Deep Q-Learning (DQN), highlighting the intersection of DL techniques and SFC performance optimization.

Abstract [en]

In the dynamic cloud native landscape, microservices stand out as pivotal for modern software development, enhancing agility, resilience, and scalability. These services, crucial in the transformative 5G era, introduce complexities such as resource allocation, service chain placement, and performance optimization challenges. This thesis delves into these challenges, emphasizing the development and application of tools and methodologies specific to microservice chains.

Key contributions include NFV-Inspector, which, while focusing on Virtual Network Functions, is instrumental in profiling and analyzing microservices and extracting vital KPIs for advanced modeling. Further, a Machine Learning-based, SLA-aware system is introduced for resource recommendation in cloud native functions, utilizing regression modeling to link performance metrics. PerfSim, another simulation framework, is proposed for simulating microservice chains in cloud environments. The thesis also explores Service Function Chain (SFC) placement, aiming to balance cost efficiency with latency optimization. It concludes by integrating Deep Learning (DL) for service chain optimization, employing both Graph Attention Networks (GAT) and Deep Q-Learning (DQN), showcasing the potential of DL in SFC optimization.

Place, publisher, year, edition, pages
Karlstad: Karlstads universitet, 2023. p. 36
Series
Karlstad University Studies, ISSN 1403-8099 ; 2023:35
Keywords
Cloud Native Computing, Service Mesh, Performance Modelling, Performance Optimization, Performance Simulation, Machine Learning, Resource Allocation
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-97377 (URN), 978-91-7867-420-6 (ISBN), 978-91-7867-421-3 (ISBN)
Public defence
2024-01-17, 1B309, Sjöströmsalen, Universitetsgatan 2, Karlstad, 08:30 (English)
Available from: 2023-12-04. Created: 2023-11-14. Last updated: 2024-02-02. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Sharma, Yogesh; Gokan Khan, Michel; Taheri, Javid; Kassler, Andreas
