Publications (10 of 77)
Garshasbi Herabad, M., Taheri, J., Ahmed, B. S. & Curescu, C. (2026). An Overview of Technical Aspects and Challenges in Designing Edge-Cloud Systems. Applied Sciences, 16(3), Article ID 1454.
2026 (English). In: Applied Sciences, E-ISSN 2076-3417, Vol. 16, no 3, article id 1454. Article, review/survey (Refereed). Published.
Abstract [en]

Edge-cloud computing has emerged as a key enabling paradigm for augmented and virtual reality (AR/VR) systems because of the stringent computational and ultra-low-latency requirements of AR/VR workloads. Designing efficient edge-cloud systems for such workloads involves multiple technical aspects, including communication technologies, service placement, task offloading and caching, service migration, and security and privacy. This paper provides a structured and technical analysis of these aspects from an AR/VR perspective. We adopt a two-stage literature analysis, in which Google Scholar is used to identify fundamental technical aspects and solution approaches, followed by a focused analysis of recent research trends and future directions using academic databases (e.g., IEEE Xplore, ACM Digital Library, and ScienceDirect). We present an organized classification of the core technical aspects and investigate existing solution approaches, including heuristic, metaheuristic, learning-based, and hybrid strategies. Rather than introducing application-specific designs, the analysis focuses on workload-driven challenges and trade-offs that arise in AR/VR systems. Based on this classification, we analyze recent research trends, identify underexplored technical areas, and highlight key research gaps that hinder the efficient deployment of AR/VR services over edge-cloud infrastructures. The findings of this study provide practical insights for researchers and system designers and help guide future research toward more responsive, scalable, and reliable edge-cloud AR/VR systems.

Place, publisher, year, edition, pages
MDPI, 2026
Keywords
edge-cloud computing, network communication, service placement, offloading, caching, service migration
National Category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-109017 (URN), 10.3390/app16031454 (DOI), 001687571600001 (), 2-s2.0-105030070913 (Scopus ID)
Available from: 2026-03-02. Created: 2026-03-02. Last updated: 2026-03-12. Bibliographically approved.
Bayram, F., Ahmed, B. S. & Hallin, E. (2026). End-to-end data quality-driven framework for machine learning in production environment. Heliyon, 12(1), Article ID e44416.
2026 (English). In: Heliyon, E-ISSN 2405-8440, Vol. 12, no 1, article id e44416. Article in journal (Refereed). Published.
Abstract [en]

This paper introduces a novel end-to-end framework that efficiently integrates data quality assessment with machine learning (ML) model operations in real-time production environments. While existing approaches treat data quality assessment and ML systems as isolated processes, our framework addresses the critical gap between theoretical methods and practical implementation by combining dynamic drift detection, adaptive data quality metrics, and MLOps into a cohesive, lightweight system. The key innovation lies in its operational efficiency, enabling real-time, quality-driven ML decision-making with minimal computational overhead. We validate the framework in a steel manufacturing company’s Electroslag Remelting (ESR) vacuum pumping process, demonstrating a 12% improvement in model performance (R2 = 94%) and a fourfold reduction in prediction latency. By exploring the impact of data quality acceptability thresholds, we provide actionable insights into balancing data quality standards and predictive performance in industrial applications. This framework represents a significant advancement in MLOps, offering a robust solution for time-sensitive, data-driven decision-making in dynamic industrial environments.
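The drift-detection component that this framework builds on can be illustrated with a minimal sketch. The detector below is a generic sliding-window mean-shift test, not the adaptive method used in the paper; the class name, window size, and threshold are illustrative assumptions.

```python
import random
from collections import deque
from statistics import mean, stdev

class MeanDriftDetector:
    """Toy drift detector: flags drift when the mean of a recent window
    departs from a reference window by more than k reference standard
    deviations. Illustrative stand-in only."""

    def __init__(self, window=30, k=3.0):
        self.window = window
        self.k = k
        self.reference = deque(maxlen=window)  # first `window` samples
        self.recent = deque(maxlen=window)     # sliding current window

    def update(self, x):
        if len(self.reference) < self.window:
            self.reference.append(x)           # still collecting baseline
            return False
        self.recent.append(x)
        if len(self.recent) < self.window:
            return False
        ref_mean, ref_sd = mean(self.reference), stdev(self.reference)
        return abs(mean(self.recent) - ref_mean) > self.k * (ref_sd or 1e-9)

random.seed(0)
det = MeanDriftDetector()
# Stationary stream, then an abrupt mean shift halfway through.
stream = [random.gauss(0, 1) for _ in range(60)] + [random.gauss(5, 1) for _ in range(60)]
flags = [det.update(x) for x in stream]
print(any(flags[:60]), any(flags[60:]))
```

In a quality-driven pipeline such as the one described above, a `True` flag would typically gate retraining or fall back to a safe prediction policy.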

Place, publisher, year, edition, pages
Elsevier, 2026
Keywords
Data quality, Data-driven AI, Drift detection, Machine learning, MLOps
National Category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-108182 (URN), 10.1016/j.heliyon.2025.e44416 (DOI), 2-s2.0-105025701238 (Scopus ID)
Available from: 2026-01-13. Created: 2026-01-13. Last updated: 2026-02-12. Bibliographically approved.
Garshasbi Herabad, M., Taheri, J., Ahmed, B. S. & Curescu, C. (2025). A Lightweight Learning-Based Approach for Online Edge-to-Cloud Service Placement. Electronics, 15(1), Article ID 65.
2025 (English). In: Electronics, E-ISSN 2079-9292, Vol. 15, no 1, article id 65. Article in journal (Refereed). Published.
Abstract [en]

The integration of edge and cloud computing is critical for resource-intensive applications that require low-latency communication, high reliability, and efficient resource utilisation. The service placement problem in these environments poses significant challenges owing to dynamic network conditions, heterogeneous resource availability, and the necessity for real-time decision-making. Because determining an optimal service placement in such networks is an NP-complete problem, existing solutions rely on fast but suboptimal heuristics or computationally intensive metaheuristics; neither approach meets the real-time demands of online scenarios, owing to suboptimality in the former case and high computational overhead in the latter. In this study, we propose a lightweight learning-based approach for the online placement of services with multi-version components in edge-to-cloud computing. The proposed approach utilises a Shallow Neural Network (SNN) with both weight and power coefficients optimised using a Genetic Algorithm (GA). The use of an SNN ensures low computational overhead during the training phase and almost instant inference when deployed, making it well suited for real-time and online service placement in edge-to-cloud environments where rapid decision-making is crucial. The proposed method (SNN-GA) is evaluated in AR/VR-based remote repair and maintenance scenarios, developed in collaboration with our industrial partner, and demonstrates robust performance and scalability across a wide range of problem sizes. The experimental results show that SNN-GA reduces the service response time by up to 27% compared to metaheuristics and 55% compared to heuristics at larger scales. It also achieves over 95% platform reliability, outperforming heuristics (which remain below 85%) and metaheuristics (which decrease to 90% at larger scales).
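The core SNN-GA idea, a shallow network whose parameters are tuned by a genetic algorithm rather than gradient descent, can be sketched roughly as follows. The toy regression target, network size, and GA settings below are illustrative assumptions, not the paper's configuration or feature set.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: learn y = sin(x) on [-3, 3] (stand-in for a placement cost model).
X = np.linspace(-3, 3, 64).reshape(-1, 1)
y = np.sin(X).ravel()

H = 8                    # hidden units of the shallow network
DIM = H + H + H + 1      # flattened W1, b1, W2, b2

def predict(theta, X):
    W1 = theta[:H].reshape(1, H)
    b1 = theta[H:2 * H]
    W2 = theta[2 * H:3 * H].reshape(H, 1)
    b2 = theta[3 * H]
    return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

def fitness(theta):
    return -np.mean((predict(theta, X) - y) ** 2)   # negative MSE

# Plain generational GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.normal(0, 1, (60, DIM))
for _ in range(150):
    fit = np.array([fitness(ind) for ind in pop])
    new = [pop[fit.argmax()].copy()]                 # elitism
    while len(new) < len(pop):
        a, b = (pop[max(rng.integers(0, len(pop), 3), key=lambda i: fit[i])]
                for _ in range(2))                   # two 3-way tournaments
        w = rng.random(DIM)
        child = w * a + (1 - w) * b                  # blend crossover
        child += rng.normal(0, 0.1, DIM) * (rng.random(DIM) < 0.2)  # mutation
        new.append(child)
    pop = np.array(new)

best = pop[np.array([fitness(ind) for ind in pop]).argmax()]
mse = -fitness(best)
print(f"final MSE: {mse:.4f}")
```

Once evolved, `predict` is a handful of small matrix products, which mirrors the abstract's point that inference with a shallow network is nearly instant at decision time.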

Place, publisher, year, edition, pages
MDPI, 2025
Keywords
edge-to-cloud computing, online service placement, neural networks, genetic algorithm
National Category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-108232 (URN), 10.3390/electronics15010065 (DOI), 001657245900001 (), 2-s2.0-105027941715 (Scopus ID)
Available from: 2026-01-19. Created: 2026-01-19. Last updated: 2026-03-04. Bibliographically approved.
Rahal, M., Ahmed, B. S., Renström, R., Stener, R. & Wurtz, A. (2025). Data-driven heat pump management: combining machine learning with anomaly detection for residential hot water systems. Neural Computing & Applications, 37(21), 16203-16229
2025 (English). In: Neural Computing & Applications, ISSN 0941-0643, E-ISSN 1433-3058, Vol. 37, no 21, p. 16203-16229. Article in journal (Refereed). Published.
Abstract [en]

Heat pumps (HPs) have emerged as a cost-effective and clean technology for sustainable energy systems, but their efficiency in producing hot water remains restricted by conventional threshold-based control methods. Although machine learning (ML) has been successfully implemented for various HP applications, optimization of household hot water demand forecasting remains understudied. This paper addresses this problem by introducing a novel approach that combines predictive ML with anomaly detection to create adaptive hot water production strategies based on household-specific consumption patterns. Our key contributions include: (1) a composite approach combining ML and isolation forest (iForest) to forecast household demand for hot water and steer responsive HP operations; (2) multi-step feature selection with advanced time series analysis to capture complex usage patterns; (3) application and tuning of three ML models (light gradient boosting machine (LightGBM), long short-term memory (LSTM), and bidirectional LSTM with the self-attention mechanism) on data from different types of real HP installations; and (4) experimental validation on six real household installations. Our experiments show that the best-performing model, LightGBM, achieves RMSE improvements of up to 9.37% over the LSTM variants, with R2 values between 0.748 and 0.983. For anomaly detection, our iForest implementation achieved an F1-score of 0.87 with a false alarm rate of only 5.2%, demonstrating strong generalization across different household types and consumption patterns and making it suitable for real-world HP deployments.

Place, publisher, year, edition, pages
Springer, 2025
Keywords
Feature Selection, Anomaly detection, Deep learning, Demand forecasting, Energy, Heat pumps, Hot water, Isolation forest, Learning with anomalies, Machine-learning, Short term memory
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-104920 (URN), 10.1007/s00521-025-11318-y (DOI), 2-s2.0-105006993923 (Scopus ID)
Available from: 2025-06-09. Created: 2025-06-09. Last updated: 2026-02-12. Bibliographically approved.
Rahal, M., Ahmed, B. S., Szabados, G., Fornstedt, T. & Samuelsson, J. (2025). Enhancing machine learning performance through intelligent data quality assessment: An unsupervised data-centric framework. Heliyon, 11(4), Article ID e42777.
2025 (English). In: Heliyon, E-ISSN 2405-8440, Vol. 11, no 4, article id e42777. Article in journal (Refereed). Published.
Abstract [en]

Poor data quality limits the advantageous power of Machine Learning (ML) and weakens high-performing ML software systems. Nowadays, data are more prone to the risk of poor quality due to their increasing volume and complexity. Therefore, tedious and time-consuming work goes into data preparation and improvement before moving further in the ML pipeline. To address this challenge, we propose an intelligent data-centric evaluation framework that can identify high-quality data and improve the performance of an ML system. The proposed framework combines the curation of quality measurements and unsupervised learning to distinguish high- and low-quality data. The framework is designed to integrate flexible and general-purpose methods so that it can be deployed across various domains and applications. To validate the outcomes of the designed framework, we implemented it in a real-world use case from the field of analytical chemistry, where it was tested on three datasets of anti-sense oligonucleotides. A domain expert was consulted to identify the relevant quality measurements and evaluate the outcomes of the framework. The results show that the quality-centric data evaluation framework identifies the characteristics of high-quality data that guide the conduct of efficient laboratory experiments and consequently improve the performance of the ML system.
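The framework's core loop, scoring each record on quality measurements and then separating high- from low-quality records with unsupervised clustering, can be sketched minimally as below. The two measurements, the synthetic data, and the 2-means split are invented for illustration; the paper's actual measurements are domain-specific and were selected with a domain expert.

```python
import random

random.seed(7)

def quality_features(record, lo=0.0, hi=100.0):
    """Two generic quality measurements per record:
    missing-value rate and out-of-range rate (hypothetical choices)."""
    missing = sum(v is None for v in record) / len(record)
    present = [v for v in record if v is not None]
    out_of_range = (sum(not (lo <= v <= hi) for v in present) / len(present)
                    if present else 1.0)
    return (missing, out_of_range)

def two_means(points, iters=20):
    """Tiny 2-means clustering over quality-feature tuples."""
    centers = [points[0], points[-1]]
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else c
                   for g, c in zip(groups, centers)]
    return centers

# Synthetic records: 30 clean, 30 with missing values and out-of-range noise.
clean = [[random.uniform(10, 90) for _ in range(8)] for _ in range(30)]
dirty = [[None if random.random() < 0.4 else random.uniform(-50, 200)
          for _ in range(8)] for _ in range(30)]
feats = [quality_features(r) for r in clean + dirty]
centers = two_means(feats)
# The cluster centre with the lower total "badness" marks the high-quality group.
print(min(centers, key=sum), max(centers, key=sum))
```

Records assigned to the low-badness cluster would then proceed into the ML pipeline, which is the gist of the quality gating the abstract describes.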

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Automated data evaluation, Data quality, Data-centric clustering, Machine learning, Unsupervised learning
National Category
Computer Sciences Computer Systems
Research subject
Computer Science; Chemistry
Identifiers
urn:nbn:se:kau:diva-104062 (URN), 10.1016/j.heliyon.2025.e42777 (DOI), 2-s2.0-85218987614 (Scopus ID)
Funder
Knowledge Foundation, 20210021
Available from: 2025-04-25. Created: 2025-04-25. Last updated: 2026-02-12. Bibliographically approved.
Garshasbi Herabad, M., Taheri, J., Ahmed, B. S. & Curescu, C. (2025). E-PSOGA: An Enhanced Hybrid Metaheuristic for Optimal Edge-to-Cloud Placement of Services with Multi-Version Components. IEEE Access, 13, 151170-151188
2025 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 13, p. 151170-151188. Article in journal (Refereed). Published.
Abstract [en]

The evolution of edge-to-cloud networks has significantly increased the complexity of determining optimal service placement across these infrastructures, a challenge identified as an NP-complete problem. For such problems, exact algorithms are impractical at larger scales owing to their computational demands. Heuristics exhibit faster runtimes but lower solution quality, whereas metaheuristics provide high-quality solutions at the cost of increased runtime. In this study, service placement in edge-to-cloud systems is investigated and formulated as an optimisation problem, where each service component is provided by different vendors and is available in multiple versions. The inclusion of multi-version components adds an additional layer of complexity, making the placement problem even more challenging. Specifically, this study addresses the service placement problem in Augmented Reality (AR)- and Virtual Reality (VR)-based remote repair and maintenance use cases, where service response time and system reliability are critical performance metrics. To optimise both metrics, we propose a novel hybrid metaheuristic algorithm (E-PSOGA), which combines the fast convergence of Particle Swarm Optimisation (PSO) with the global search capabilities of Genetic Algorithms (GA). A custom healing operator is also introduced to further enhance the solution quality and reduce the algorithm runtime. A comprehensive performance assessment shows that E-PSOGA reduces the response time by 37% compared with the other implemented baseline algorithms. E-PSOGA achieved 98% platform and 97% service reliability while maintaining a reasonable algorithm runtime. These results indicate that the proposed approach is well suited for large-scale and time-sensitive scenarios requiring both computational efficiency and high solution quality.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Augmented reality, Complex networks, Computational complexity, Computational efficiency, Heuristic algorithms, Quality of service, Reliability, Repair, Response time (computer systems), Virtual reality, Cloud-computing, Edge-to-cloud computing, Multi-version, Optimal service placement, Particle swarm, Particle swarm optimization, Runtimes, Service placements, Solution quality, Swarm optimization, Genetic algorithms, Particle swarm optimization (PSO)
National Category
Computer Sciences Computer Systems Telecommunications
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-106828 (URN), 10.1109/ACCESS.2025.3603329 (DOI), 001562596000008 (), 2-s2.0-105014473015 (Scopus ID)
Available from: 2025-09-08. Created: 2025-09-08. Last updated: 2026-02-12. Bibliographically approved.
Jagstedt, S., Magnusson, P., Ahmed, B. S. & Bayram, F. (2025). Exploring AI as a lever for personalised tourism services. Paper presented at QUIS19 - 19th International Research Symposium on Service Excellence in Management.
2025 (English). Conference paper, Oral presentation with published abstract (Refereed).
National Category
Industrial engineering and management
Research subject
Business Administration; Computer Science
Identifiers
urn:nbn:se:kau:diva-106945 (URN)
Conference
QUIS19 - 19th International Research Symposium on Service Excellence in Management
Available from: 2025-09-19. Created: 2025-09-19. Last updated: 2026-02-12. Bibliographically approved.
Klima, M., Bures, M., Ahmed, B. S., Hindy, H., Bellekens, X. A. & Gargantini, A. (2025). Genetic algorithm for path-based testing of component outage situations in IoT system processes. Applied Soft Computing, 185, Article ID 113854.
2025 (English). In: Applied Soft Computing, ISSN 1568-4946, E-ISSN 1872-9681, Vol. 185, article id 113854. Article in journal (Refereed). Published.
Abstract [en]

Component outages often affect IoT system operations and processes. These components can be physical devices, infrastructure parts, or system modules. Among other possible causes, outages are often due to limited or intermittent network connectivity. To ensure reliable operations, connection outage scenarios must be reviewed systematically, which is especially important for critical systems. Path-based testing techniques are preferable for this task, as they sequence events in the system and therefore make it possible to verify the effects of limited network connectivity on the system processes. Because the available path-based testing techniques provide only a limited ability to solve this problem effectively, in this study, we propose an adaptation of a genetic algorithm to generate specialized test paths from a model that captures the system-under-test processes. Compared with four path-based testing alternatives for solving the testing problem, the proposed algorithm yielded the best results in all four defined test set metrics for the two defined test coverage criteria. Regarding the average total length of the test paths, which served as a proxy for testing costs, the paths produced by the proposed adapted genetic algorithm outperformed the best of the proposed baselines by 23.5% and 29% for the two test coverage criteria.
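The idea of evolving test paths over a process model can be sketched as follows. This toy GA maximises edge coverage of a single path on a hypothetical graph; the paper's adapted algorithm targets component-outage coverage criteria and generates whole test sets, which this sketch does not attempt.

```python
import random

random.seed(1)

# Hypothetical SUT process model as a directed graph; test paths run
# from "start" to "end".
GRAPH = {
    "start": ["a", "b"],
    "a": ["c", "end"],
    "b": ["c"],
    "c": ["a", "end"],
    "end": [],
}
ALL_EDGES = {(u, v) for u, nbrs in GRAPH.items() for v in nbrs}

def decode(genes):
    """Walk the graph, using each gene to pick the next outgoing edge."""
    path, node = ["start"], "start"
    for g in genes:
        nbrs = GRAPH[node]
        if not nbrs:
            break
        node = nbrs[g % len(nbrs)]
        path.append(node)
    return path

def fitness(genes):
    path = decode(genes)
    covered = {(path[i], path[i + 1]) for i in range(len(path) - 1)}
    bonus = 1 if path[-1] == "end" else 0        # reward reaching the sink
    return len(covered) + bonus - 0.01 * len(path)  # length as a cost proxy

def evolve(pop_size=40, gene_len=12, gens=60):
    pop = [[random.randrange(4) for _ in range(gene_len)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, gene_len)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if random.random() < 0.3:            # point mutation
                child[random.randrange(gene_len)] = random.randrange(4)
            children.append(child)
        pop = survivors + children
    return decode(max(pop, key=fitness))

best_path = evolve()
print(best_path)
```

The length penalty in the fitness plays the same role as the paper's total-path-length metric: among equally covering paths, shorter (cheaper) ones win.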

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Ability testing, Electron device testing, Model checking, Outages, Problem solving, Statistical tests, Component outage, Model based testing, Network connectivity, Path-based, Path-based testing, System process, Systems operation, Test Automation, Test coverage criteria, Testing technique, Genetic algorithms
National Category
Probability Theory and Statistics
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-107491 (URN), 10.1016/j.asoc.2025.113854 (DOI), 001594234600001 (), 2-s2.0-105018173651 (Scopus ID)
Available from: 2025-11-13. Created: 2025-11-13. Last updated: 2026-02-12. Bibliographically approved.
Samuelsson, J., Enmark, M., Szabados, G., Rahal, M., Ahmed, B. S., Häggstrom, J., . . . Fornstedt, T. (2025). Improved workflow for constructing machine learning models: Predicting retention times and peak widths in oligonucleotide separation. Journal of Chromatography A, 1747, Article ID 465746.
2025 (English). In: Journal of Chromatography A, ISSN 0021-9673, E-ISSN 1873-3778, Vol. 1747, article id 465746. Article in journal (Refereed). Published.
Abstract [en]

This study presents an improved workflow to support the development of machine learning models that predict oligonucleotide retention times, peak widths, and thus peak resolutions from larger datasets where manual processing is not feasible. We explored diverse oligonucleotide forms, ranging from native to fully phosphorothioated, using three different gradient slopes. Both native and phosphorothioated oligonucleotides were separated using a chromatographic C18 system with tributylaminium ion as the ion-pair reagent in the eluent, resulting in retention time data for approximately 900 sequences per gradient. To manage the large and extensive datasets, we developed a semi-automatic rule-based approach for retention time determination, peak decomposition, peak width assessment, signal-to-noise ratio, and skewness analysis. Probability density functions (PDFs) were fitted to elution profiles, with PDF selection based on an F-test. Co-eluting peaks were addressed using a multiple Gaussian PDF. The encoded sequence data underwent modeling using support vector regression (SVR), gradient boosting (GB), random forest (RF), and decision tree (DT) models. GB and SVR showed promise for retention predictions, while RF and DT were faster but demonstrated limited generalization capabilities. The machine learning models exhibited larger errors for the shallowest gradient and lower predictability for P=O sequences, potentially due to signal intensity and sequence heterogeneity. Improvements in signal-to-noise ratios were considered, including mass spectrometry in selected ion monitoring mode. The best-performing model for these datasets was GB, closely followed by the SVR model. With established models for retention and peak width, chromatograms can now be predicted for various gradient slopes, offering prediction of impurity peak resolution for arbitrary sequences and gradient slopes.

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Oligonucleotides, Ion-pair chromatography, Machine learning, Computer simulation, Resolution predictions
National Category
Bioinformatics (Computational Biology) Analytical Chemistry
Research subject
Chemistry; Computer Science
Identifiers
urn:nbn:se:kau:diva-103955 (URN), 10.1016/j.chroma.2025.465746 (DOI), 001436803200001 (), 40014960 (PubMedID), 2-s2.0-85218463003 (Scopus ID)
Funder
Knowledge Foundation, 20210021
Available from: 2025-04-11. Created: 2025-04-11. Last updated: 2026-02-12. Bibliographically approved.
Rahal, M., Ahmed, B. S., Bauer, C. A., Ulander, J. & Samuelsson, J. (2025). Prediction of retention time in larger antisense oligonucleotide datasets using machine learning. Machine Learning with Applications, 21, Article ID 100710.
2025 (English). In: Machine Learning with Applications, E-ISSN 2666-8270, Vol. 21, article id 100710. Article in journal (Refereed). Published.
Abstract [en]

Antisense oligonucleotides (ASOs) are nucleic acid molecules with transformative therapeutic potential, especially for diseases that are untreatable by traditional drugs. However, the production and purification of ASOs remain challenging due to the presence of unwanted impurities. One tool successfully used to separate an ASO compound from the impurities is ion pair liquid chromatography (IPC). It is a critical step in separation, where each compound is identified by its retention time (tR) in the IPC. Due to the complex sequence-dependent behavior of ASOs and variability in chromatographic conditions, the accurate prediction of tR is a difficult task. This study addresses this challenge by applying machine learning (ML) to predict tR based on the sequence characteristics of ASOs. Four ML models (Gradient Boosting, Random Forest, Decision Tree, and Support Vector Regression) were evaluated on three large ASO datasets with different gradient times. Through feature engineering and grid search optimization, key predictors were identified, and the models were compared for accuracy using root mean square error, the coefficient of determination (R-squared), and run time. The results showed that Gradient Boosting competes with Support Vector Regression on two of the three datasets but is 3.94 times faster to tune. Additionally, newly proposed features representing the sulfur count and the nucleotides residing at the first and last positions of a sequence were found to improve the predictive power of the models. This study demonstrates the advantages of ML-based tR prediction at scale and provides insights into interpretable and efficient utilization of ML in chromatographic applications.

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Machine learning, Comparison analysis, Model optimization, Oligonucleotides, Retention time
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-109137 (URN), 10.1016/j.mlwa.2025.100710 (DOI), 2-s2.0-105027886082 (Scopus ID)
Available from: 2026-03-06. Created: 2026-03-06. Last updated: 2026-03-25. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-9051-7609
