Publications (10 of 173)
Chahed, H., Usman, M., Chatterjee, A., Bayram, F., Chaudhary, R., Brunstrom, A., . . . Kassler, A. (2023). AIDA—A holistic AI-driven networking and processing framework for industrial IoT applications. Internet of Things: Engineering Cyber Physical Human Systems, 22, Article ID 100805.
2023 (English). In: Internet of Things: Engineering Cyber Physical Human Systems, E-ISSN 2542-6605, Vol. 22, article id 100805. Article in journal (Refereed). Published
Abstract [en]

Industry 4.0 is characterized by digitalized production facilities, where large numbers of sensors collect vast amounts of data that are used to increase the sustainability of production, e.g., by optimizing process parameters and reducing machine downtime and material waste. Making intelligent data-driven decisions under timeliness constraints, however, requires integrating time-sensitive networks with a reliable data ingestion and processing infrastructure that offers plug-in support for Machine Learning (ML) pipelines. Such integration is difficult due to the lack of frameworks that flexibly integrate and program the networking and computing infrastructures while allowing ML pipelines to ingest the collected data and make trustworthy decisions in real time. In this paper, we present AIDA, a novel holistic AI-driven network and processing framework for reliable data-driven real-time industrial IoT applications. AIDA manages and configures Time-Sensitive Networks (TSN) to enable real-time data ingestion into an observable AI-powered edge/cloud continuum. Pluggable and trustworthy ML components that make timely decisions for various industrial IoT applications and for the infrastructure itself are an intrinsic part of AIDA. We introduce the AIDA architecture, describe the building blocks of our framework, and illustrate it with two use cases.
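For illustration only (this sketch is not part of the paper): one way to read "pluggable ML components" is a small common interface that every model implements, so the data-ingestion path stays unchanged when models are swapped. All names below (MLComponent, ThresholdAnomalyDetector, Decision) are hypothetical.

```python
# Hypothetical plug-in contract for swappable ML components; not AIDA's code.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Sequence

@dataclass
class Decision:
    label: str          # e.g. "normal" or "anomalous"
    confidence: float   # 0..1, lets downstream logic judge trustworthiness

class MLComponent(ABC):
    """Common interface: ingest a window of sensor readings, return a decision."""
    @abstractmethod
    def infer(self, window: Sequence[float]) -> Decision: ...

class ThresholdAnomalyDetector(MLComponent):
    """Trivial stand-in model: flags windows whose mean exceeds a threshold."""
    def __init__(self, threshold: float):
        self.threshold = threshold

    def infer(self, window: Sequence[float]) -> Decision:
        mean = sum(window) / len(window)
        is_anomaly = mean > self.threshold
        return Decision("anomalous" if is_anomaly else "normal",
                        confidence=min(1.0, abs(mean - self.threshold) / self.threshold))

# The framework only depends on MLComponent, so models stay swappable.
detector: MLComponent = ThresholdAnomalyDetector(threshold=75.0)
print(detector.infer([70.2, 80.5, 78.1]))
```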

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Edge/cloud computing, Internet of Things (IoT), Machine Learning, Time-Sensitive Networks (TSN)
National Category
Computer Engineering; Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-94900 (URN), 10.1016/j.iot.2023.100805 (DOI), 001053228900001 (), 2-s2.0-85159450974 (Scopus ID)
Funder
Knowledge Foundation, 20200067
Available from: 2023-05-29. Created: 2023-05-29. Last updated: 2024-02-07. Bibliographically approved
Usman, M., Ferlin, S., Brunstrom, A. & Taheri, J. (2023). DESK: Distributed Observability Framework for Edge-Based Containerized Microservices. In: 2023 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit). Paper presented at 2023 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit). 6-9 June 2023. Gothenburg, Sweden. (pp. 617-622). IEEE
2023 (English). In: 2023 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit), IEEE, 2023, p. 617-622. Conference paper, Published paper (Refereed)
Abstract [en]

Modern information technology (IT) infrastructures are becoming more complex to meet the diverse demands of emerging technology paradigms such as 5G/6G networks, edge computing, and the Internet of Things (IoT). The intricacy of these infrastructures grows further when they host containerized workloads as microservices, making it challenging to detect and troubleshoot performance issues, incidents, or even outages of critical use cases such as industrial automation processes. Fine-grained measurements and associated visualization are therefore essential for operational observability of these IT infrastructures. However, most existing observability tools operate independently and do not systematically cover the entire data workflow. This paper presents an integrated design for multi-stage observability workflows, denoted the DistributEd obServability frameworK (DESK). The proposed framework aims to improve observability workflows for measurement, collection, fusion, storage, visualization, and notification. As a proof of concept, we deployed the framework in a Kubernetes-based testbed to demonstrate the successful integration of its components and the usability of the collected observability data. We also conducted a comprehensive study of the overhead caused by DESK agents on reasonably powerful edge node hardware, which shows an average CPU and memory overhead of around 2.5% of the total available hardware resources.
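As a hedged illustration of the agent-overhead idea (not the DESK implementation): a minimal metrics agent can report node-level readings while measuring its own CPU and memory footprint against the total node capacity, which is the kind of quantity behind the ~2.5% figure. The sketch assumes the third-party psutil package; all metric names are invented.

```python
# Minimal "agent" loop in the spirit of an observability measurement stage.
# Not DESK's code; metric field names are made up.
import time
import psutil

def collect_once() -> dict:
    """Sample a few node-level metrics."""
    return {
        "ts": time.time(),
        "node_cpu_percent": psutil.cpu_percent(interval=0.5),
        "node_mem_percent": psutil.virtual_memory().percent,
    }

def agent_loop(samples: int = 5, period_s: float = 1.0) -> None:
    me = psutil.Process()           # handle to the agent's own process
    me.cpu_percent(interval=None)   # prime the per-process CPU counter
    for _ in range(samples):
        metrics = collect_once()
        # Overhead of the agent itself, expressed against total node resources.
        metrics["agent_cpu_overhead_percent"] = me.cpu_percent(interval=None) / psutil.cpu_count()
        metrics["agent_mem_overhead_percent"] = me.memory_percent()
        print(metrics)              # stand-in for collection/fusion/storage stages
        time.sleep(period_s)

if __name__ == "__main__":
    agent_loop()
```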

Place, publisher, year, edition, pages
IEEE, 2023
Series
European Conference on Networks and Communications, ISSN 2475-6490, E-ISSN 2575-4912
Keywords
5G/6G, Edge Computing, Internet of Things (IoT), Microservices, Monitoring, Observability, 5G mobile communication systems, Containers, Digital storage, Internet of things, Visualization, Edge-based, Emerging technologies, Information technology infrastructure, Internet of thing, Microservice, Modern information technologies, Network edges, Work-flows
National Category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-96593 (URN), 10.1109/EuCNC/6GSummit58263.2023.10188344 (DOI), 2-s2.0-85168418212 (Scopus ID)
Conference
2023 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit). 6-9 June 2023. Gothenburg, Sweden.
Available from: 2023-09-04. Created: 2023-09-04. Last updated: 2024-02-07. Bibliographically approved
Taheri, J., Dustdar, S., Zomaya, A. & Deng, S. (2023). Edge Intelligence: From Theory to Practice (1 ed.). Springer
2023 (English). Book (Other academic)
Abstract [en]

This graduate-level textbook is ideally suited for teaching the most relevant topics of Edge Computing and its ties to Artificial Intelligence (AI) and Machine Learning (ML) approaches. It starts from the basics and advances, step by step, to the ways AI/ML concepts can help or benefit from Edge Computing platforms. The book is structured into seven chapters, each with its own dedicated set of teaching materials (practical skills, demonstration videos, questions, lab assignments, etc.). Chapter 1 opens the book and comprehensively introduces the concept of distributed computing continuum systems that led to the creation of Edge Computing. Chapter 2 motivates the use of container technologies and shows how they are used to implement programmable edge computing platforms. Chapter 3 introduces ways to employ AI/ML approaches to optimize service lifecycles at the edge. Chapter 4 goes deeper into the use of AI/ML and introduces ways to optimize the spreading of computational tasks across edge computing platforms. Chapter 5 introduces AI/ML pipelines to efficiently process the data generated at the edge. Chapter 6 introduces ways to implement AI/ML systems on the edge and to handle their training and inference procedures given the limited resources available at edge nodes. Chapter 7 motivates the creation of a new orchestrator-independent object model to describe objects (nodes, applications, etc.) and requirements (SLAs) for the underlying edge platforms. To provide hands-on experience and to improve students' technical capabilities step by step, seven sets of Tutorials-and-Labs (TaLs) are also included. Code and instructions for each TaL are provided on the book website, accompanied by videos to facilitate the learning process.

Place, publisher, year, edition, pages
Springer, 2023. p. 247. Edition: 1
Keywords
Cloud Computing, Distributed Computing, Edge Computing, Kubernetes, Machine Learning, System Performance
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-96753 (URN), 10.1007/978-3-031-22155-2 (DOI), 2-s2.0-85170163588 (Scopus ID), 978-3-031-22154-5 (ISBN), 978-3-031-22155-2 (ISBN)
Available from: 2023-09-19. Created: 2023-09-19. Last updated: 2024-02-07. Bibliographically approved
HoseinyFarahabady, M., Taheri, J., Zomaya, A. Y. & Tari, Z. (2023). Energy efficient resource controller for Apache Storm. Concurrency and Computation, 35(17), Article ID e6799.
2023 (English). In: Concurrency and Computation, ISSN 1532-0626, E-ISSN 1532-0634, Vol. 35, no 17, article id e6799. Article in journal (Refereed). Published
Abstract [en]

Apache Storm is a distributed processing engine that can reliably process unbounded streams of data for real-time applications. While recent research has mostly focused on devising resource allocation and task scheduling algorithms to satisfy the high-performance or low-latency requirements of Storm applications across distributed, multi-core systems, finding a solution that optimizes the energy consumption of running applications remains an important open research question. In this article, we present a CPU-throttling control strategy that continuously optimizes the energy consumed by a Storm platform by adjusting the voltage and frequency of the CPU cores while running the assigned tasks under latency constraints defined by the end-users. Experimental results from a Storm cluster with 4 physical nodes (24 cores in total) validate the effectiveness of the proposed solution when running multiple compute-intensive operations. In particular, the proposed controller keeps the latency of analytic tasks, in terms of the 99th latency percentile, within the quality-of-service requirement specified by the end-user while reducing the total energy consumption of the Storm platform by 18% on average.
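A minimal sketch of the general DVFS control idea described above, not the paper's controller: lower the CPU frequency while the observed 99th-percentile latency has headroom to the user-defined target, and raise it when the target is violated. The frequency steps and latency readings below are hypothetical; a real deployment would read latency from Storm metrics and write frequencies through the Linux cpufreq interface.

```python
# Illustrative latency-driven frequency stepping; not the paper's algorithm.
FREQ_STEPS_KHZ = [1_200_000, 1_600_000, 2_000_000, 2_400_000]

def control_step(p99_ms: float, target_ms: float, freq_idx: int) -> int:
    """Return the new index into FREQ_STEPS_KHZ given the latest p99 reading."""
    if p99_ms > target_ms and freq_idx < len(FREQ_STEPS_KHZ) - 1:
        return freq_idx + 1                     # SLO violated: raise frequency
    if p99_ms < 0.8 * target_ms and freq_idx > 0:
        return freq_idx - 1                     # ample headroom: save energy
    return freq_idx                             # within band: hold

# Example trace: latency readings against a 100 ms p99 target.
idx = len(FREQ_STEPS_KHZ) - 1
for p99 in [60.0, 70.0, 95.0, 130.0, 110.0, 85.0]:
    idx = control_step(p99, target_ms=100.0, freq_idx=idx)
    print(f"p99={p99:6.1f} ms -> frequency {FREQ_STEPS_KHZ[idx]} kHz")
```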

Place, publisher, year, edition, pages
John Wiley & Sons, 2023
Keywords
data stream processing engines, energy-aware resource allocation algorithm, performance evaluation of computer systems
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-88072 (URN), 10.1002/cpe.6799 (DOI), 000736445900001 (), 2-s2.0-85122132757 (Scopus ID)
Funder
Knowledge Foundation
Note

Australian Research Council, Grant/Award Numbers: DP190103710, DP200100005

Available from: 2022-01-13. Created: 2022-01-13. Last updated: 2023-12-11. Bibliographically approved
Sharma, Y., Bhamare, D., Kassler, A. & Taheri, J. (2023). Intent Negotiation Framework for Intent-Driven Service Management. IEEE Communications Magazine, 61(6), 73-79
2023 (English). In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, Vol. 61, no 6, p. 73-79. Article in journal (Refereed). Published
Abstract [en]

Intent-driven service management (IDSM) is essential for automating network operations and the deployment of compute services. It enables network users to express their service requirements declaratively as intents. To fulfill the intents, closed control-loop operations carry out the required configurations and deployments without human intervention. Although intents are fulfilled automatically, conflicts may arise between users' and service providers' intents due to limited resource availability. This triggers the IDSM system to initiate an intent negotiation process among the conflicting actors. Intent negotiation involves generating one or more alternate intents based on the current state of the underlying physical/virtual resources, which are then presented to the intent creator for acceptance or rejection. In this way, quality of service (QoS) can be improved significantly by maximizing the acceptance rate of service requests when resources are limited. However, intent negotiation systems are still in their infancy, and the available solutions are platform-dependent, which hinders their adoption across diverse platforms. The main focus of this work is to draft and evaluate a comprehensive and generic intent negotiation framework that can be used to develop intent negotiation solutions for diverse IDSM platforms. We identify and define the processes that are necessary for intent negotiation and present a generic framework that captures the interactions among these processes while conflicting actors negotiate. The results demonstrate that the proposed intent negotiation framework increases the intent acceptance rate by up to 38 percent with a processing overhead of less than 10 percent.
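Purely to make the negotiation loop concrete (this is not the paper's framework or API): when an intent cannot be fulfilled with the remaining capacity, the system can generate an alternate intent for the creator to accept or reject instead of failing the request outright. All types and fields below are invented.

```python
# Toy intent counter-offer generator; platform-agnostic, hypothetical names.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intent:
    cpu_cores: int
    latency_ms: int   # requested end-to-end latency bound

def negotiate(requested: Intent, free_cores: int) -> Optional[Intent]:
    """Return the intent if feasible, a counter-offer if not, or None."""
    if requested.cpu_cores <= free_cores:
        return requested                      # fulfil as-is
    if free_cores > 0:
        # Alternate intent: fewer cores with a proportionally relaxed latency
        # bound, presented to the intent creator for acceptance or rejection.
        relax = requested.cpu_cores / free_cores
        return Intent(cpu_cores=free_cores,
                      latency_ms=int(requested.latency_ms * relax))
    return None                               # nothing sensible to offer

print(negotiate(Intent(cpu_cores=8, latency_ms=50), free_cores=4))
# Intent(cpu_cores=4, latency_ms=100): a counter-offer, not the original intent
```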

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Quality of service
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-96248 (URN), 10.1109/MCOM.001.2200504 (DOI), 001017824400015 (), 2-s2.0-85163595255 (Scopus ID)
Available from: 2023-08-08. Created: 2023-08-08. Last updated: 2023-08-09. Bibliographically approved
Schulte, S., Zink, M., Pierre, G., Keahey, K., Kuno, H., Lenk, A., . . . Bashir, N. (Eds.). (2023). Message from the Chairs IC2E 2023. Paper presented at 11th IEEE International Conference on Cloud Engineering (IC2E 2023), Boston, USA, September 25-28, 2023. Institute of Electrical and Electronics Engineers (IEEE)
2023 (English). Conference proceedings (editor) (Refereed)
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-97906 (URN), 10.1109/IC2E59103.2023.00005 (DOI), 2-s2.0-85179603861 (Scopus ID)
Conference
11th IEEE International Conference on Cloud Engineering (IC2E 2023), Boston, USA, September 25-28, 2023.
Available from: 2024-01-03. Created: 2024-01-03. Last updated: 2024-01-03
Alizadeh Noghani, K., Kassler, A., Taheri, J., Ohlen, P. & Curescu, C. (2023). Multi-Objective genetic algorithm for fast service function chain reconfiguration. IEEE Transactions on Network and Service Management, 20(3), 3501-3522
2023 (English). In: IEEE Transactions on Network and Service Management, ISSN 1932-4537, E-ISSN 1932-4537, Vol. 20, no 3, p. 3501-3522. Article in journal (Refereed). Published
Abstract [en]

The optimal placement of virtual network functions (VNFs) improves the overall performance of service function chains (SFCs) and decreases the operational costs for mobile network operators. To cope with changes in demands, VNF instances may be added or removed dynamically, resource allocations may be adjusted, and servers may be consolidated. To maintain an optimal placement of SFCs when conditions change, SFC reconfiguration is required, including the migration of VNFs and the rerouting of service flows. However, such reconfigurations may lead to stress on the VNF infrastructure, which may cause service degradation. On the other hand, not changing the placement may lead to suboptimal operation, and servers and links may become congested or underutilized, leading to high operational costs. In this paper, we investigate the trade-off between the reconfiguration of SFCs and the optimality of their new placement and service-flow routing. We develop a multi-objective genetic algorithm that explores the Pareto front by balancing the optimality of the new placement and the cost to achieve it. Our numerical evaluations show that a small number of reconfigurations can significantly reduce the operational cost of the VNF infrastructure. In contrast, too much reconfiguration may not pay off due to high costs. We believe that our work provides an important tool that helps network providers plan a good reconfiguration strategy for their service chains.
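To make the Pareto trade-off concrete, here is a generic non-dominance filter over candidate placements scored on two objectives, operational cost and reconfiguration cost. This is standard multi-objective machinery rather than the paper's genetic algorithm, and the numbers are made up.

```python
# Generic Pareto-front filter over (operational_cost, reconfiguration_cost)
# tuples; a GA would evolve such candidates and keep the non-dominated ones.
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """candidates: list of (operational_cost, reconfiguration_cost) tuples."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

placements = [(100, 0), (80, 3), (75, 8), (78, 9), (95, 1), (70, 15)]
print(pareto_front(placements))
# -> [(100, 0), (80, 3), (75, 8), (95, 1), (70, 15)]; (78, 9) is dominated by (75, 8)
```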

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Cloud computing, Containers, Cost engineering, Transfer functions, Cloud-computing, Migration strategy, Multi-objectives genetic algorithms, Network functions, Networks reconfiguration, Optimisations, Resource management, Virtual network function, Virtual networks, VNF migration strategy, Genetic algorithms
National Category
Communication Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-91583 (URN), 10.1109/TNSM.2022.3195820 (DOI), 001142524900006 (), 2-s2.0-85135744199 (Scopus ID)
Funder
Knowledge Foundation
Available from: 2022-08-24. Created: 2022-08-24. Last updated: 2024-02-16. Bibliographically approved
Gokan Khan, M., Taheri, J., Al-Dulaimy, A. & Kassler, A. (2023). PerfSim: A Performance Simulator for Cloud Native Microservice Chains. IEEE Transactions on Cloud Computing (2), 1395-1413
2023 (English). In: IEEE Transactions on Cloud Computing, ISSN 2168-7161, no 2, p. 1395-1413. Article in journal (Refereed). Published
Abstract [en]

The cloud native computing paradigm allows microservice-based applications to take advantage of cloud infrastructure in a scalable, reusable, and interoperable way. However, in a cloud native system, the vast number of configuration parameters and highly granular resource allocation policies can significantly impact the performance and deployment cost of such applications. To understand and analyze these implications in an easy, quick, and cost-effective way, we present PerfSim, a discrete-event simulator for approximating and predicting the performance of cloud native service chains in user-defined scenarios. To this end, we propose a systematic approach for modeling the performance of microservice endpoint functions by collecting and analyzing their performance and network traces. Combining the extracted models with user-defined scenarios, PerfSim simulates the performance behavior of service chains over a given period and provides an approximation of system KPIs, such as the average response time of requests. Using the processing power of a single laptop, we evaluated both the simulation accuracy and the speed of PerfSim in 104 prevalent scenarios and compared the simulation results with identical deployments in a real Kubernetes cluster. We achieved ~81-99% simulation accuracy in approximating the average response time of incoming requests and a speed-up factor of ~16-1200x for the simulation.
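A toy sketch of the kind of estimate a service-chain simulator produces (this is not PerfSim): requests traverse a chain of single-threaded FIFO services, and the average end-to-end response time is computed from the resulting start and finish times. The service times and arrival pattern below are invented.

```python
# Back-of-the-envelope service-chain response-time estimate; not PerfSim.
def simulate_chain(arrivals, service_times_ms):
    """arrivals: request arrival times (ms); service_times_ms: per-stage cost."""
    free_at = [0.0] * len(service_times_ms)   # when each service becomes idle
    response_times = []
    for t_arrival in arrivals:
        t = t_arrival
        for s, cost in enumerate(service_times_ms):
            start = max(t, free_at[s])        # wait if the service is busy
            t = start + cost                  # finish time at this stage
            free_at[s] = t
        response_times.append(t - t_arrival)
    return sum(response_times) / len(response_times)

# Three-service chain, requests arriving every 5 ms.
avg = simulate_chain(arrivals=[i * 5.0 for i in range(100)],
                     service_times_ms=[2.0, 6.0, 3.0])
print(f"average response time ~ {avg:.1f} ms")
```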

Place, publisher, year, edition, pages
IEEE, 2023
Keywords
performance simulator, performance modeling, cloud native computing, service chains, simulation platform
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-83686 (URN), 10.1109/TCC.2021.3135757 (DOI), 001004238600023 (), 2-s2.0-85121842188 (Scopus ID)
Funder
Knowledge Foundation, 20200067
Note

Article published as manuscript entitled "PerfSim: A Performance Simulator for Cloud Native Computing" in Gokan Khan's (2021) licentiate thesis: Performance Modelling and Simulation of Service Chains for Telecom Clouds

Available from: 2021-04-16. Created: 2021-04-16. Last updated: 2023-11-14. Bibliographically approved
Taheri, J., Villari, M. & Galletta, A. (Eds.). (2023). Preface. Paper presented at 13th EAI International Conference, MobiCASE 2022, Messina, Italy, November 17-18, 2022. Springer, 495 LNICST
2023 (English). Conference proceedings (editor) (Other academic)
Place, publisher, year, edition, pages
Springer, 2023. p. 133
Series
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, ISSN 1867-8211, E-ISSN 1867-822X ; 495
National Category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-95708 (URN), 10.1007/978-3-031-31891-7 (DOI), 2-s2.0-85161382336 (Scopus ID), 978-3-031-31890-0 (ISBN), 978-3-031-31891-7 (ISBN)
Conference
13th EAI International Conference, MobiCASE 2022, Messina, Italy, November 17-18, 2022.
Available from: 2023-06-26. Created: 2023-06-26. Last updated: 2023-06-26. Bibliographically approved
Taghinezhad-Niar, A. & Taheri, J. (2023). Reliability, Rental-Cost and Energy-Aware Multi-Workflow Scheduling on Multi-Cloud Systems. IEEE Transactions on Cloud Computing, 11(3), 2681-2692
2023 (English). In: IEEE Transactions on Cloud Computing, ISSN 2168-7161, Vol. 11, no 3, p. 2681-2692. Article in journal (Refereed). Published
Abstract [en]

Computationally intensive applications with a wide range of requirements are moving to cloud computing platforms. However, with growing user demands, a single cloud provider is not always able to provide all the prerequisites of an application. Flexible computation and storage systems, such as multi-cloud systems, have therefore emerged as a suitable solution. Different charging mechanisms, vast resource configurations, differing energy consumption, and reliability are the key issues for multi-cloud systems. To address these issues, we propose a multi-workflow scheduling framework for multi-cloud systems that aims to lower monetary cost and energy consumption while enhancing the reliability of application execution. The proposed framework offers different methods (utilizing resource gaps, a DVFS-based method, and a task duplication mechanism) to meet each application's requirements. The Weibull distribution is used to model task reliability under different resource fault rates and fault behaviors. Various synthetic workflow applications are used in the simulation experiments. The performance evaluation demonstrates that our proposed algorithms outperform state-of-the-art algorithms for multi-cloud systems in terms of resource rental cost, energy consumption, and reliability.
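A worked example of the Weibull reliability model mentioned above, using the generic textbook form rather than the paper's exact parameterisation: R(t) = exp(-(t/scale)^shape) gives the probability that a resource survives a task of duration t, and duplicating a task on two independent resources yields 1 - (1 - R)^2. The parameter values below are illustrative.

```python
# Generic Weibull reliability arithmetic; not the paper's exact model.
import math

def weibull_reliability(t: float, shape: float, scale: float) -> float:
    """Survival probability over duration t under a Weibull(shape, scale) fault model."""
    return math.exp(-((t / scale) ** shape))

def duplicated_reliability(t: float, shape: float, scale: float) -> float:
    """Task succeeds if at least one of two independent replicas survives."""
    r = weibull_reliability(t, shape, scale)
    return 1.0 - (1.0 - r) ** 2

t_task = 120.0                                                  # task runtime (s), invented
print(weibull_reliability(t_task, shape=1.5, scale=1000.0))     # ~0.959
print(duplicated_reliability(t_task, shape=1.5, scale=1000.0))  # ~0.998
```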

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Energy, multi-cloud, multi-workflow, reliability, scheduling
National Category
Media and Communication Technology; Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-97100 (URN), 10.1109/TCC.2022.3223869 (DOI), 001063436300034 (), 2-s2.0-85144007663 (Scopus ID)
Available from: 2023-10-19. Created: 2023-10-19. Last updated: 2023-12-05. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-9194-010X
