An Efficient Simulated Annealing-based Task Scheduling Technique for Task Offloading in a Mobile Edge Architecture
Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). Distributed Systems and Communications Research Group (DISCO). ORCID iD: 0009-0007-3773-5130
Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). Distributed Systems and Communications Research Group (DISCO). ORCID iD: 0000-0003-4147-9487
Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). Distributed Systems and Communications Research Group (DISCO). ORCID iD: 0000-0001-9194-010X
2022 (English). In: Proceedings of the 2022 IEEE Conference on Cloud Networking, CloudNet 2022 / [ed] Secci S., Durairajan R., Linguaglossa L., Kamiyama N., Nogueira M., Rovedakis S., Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 159-167. Conference paper, Published paper (Refereed)
Abstract [en]

The Internet of Things (IoT) has emerged as a fundamental cornerstone in the digitalization of industry and society. Still, IoT devices’ limited processing and memory capacities pose a problem for conducting complex and time-sensitive computations, such as AI-based shop floor monitoring or personalized health tracking, on these devices, and offloading to the cloud is not an option due to excessive delays. Edge computing has recently been introduced to address the requirements of these IoT applications. This paper formulates the scheduling of tasks between IoT devices, edge servers, and the cloud in a three-layer Mobile Edge Computing (MEC) architecture as a Mixed-Integer Linear Programming (MILP) problem. The paper proposes a simulated annealing-based task scheduling technique and demonstrates that it schedules tasks almost as time-efficiently as if the MILP problem had been solved with a mixed-integer programming optimization package, but at a fraction of the cost in terms of CPU, memory, and network resources. The paper also demonstrates that the proposed task scheduling technique compares favorably in terms of efficiency, resource consumption, and timeliness with previously proposed techniques based on heuristics, including genetic programming.
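The core of the proposed technique is standard simulated annealing over task-to-tier assignments. The following is a rough illustration only: the delay values, the edge-capacity constraint, and the single-task move below are invented placeholders, not the paper's actual cost model or neighborhood structure.

```python
import math
import random

# Hypothetical per-task execution delays (seconds) on each tier; the real
# model in the paper also accounts for transmission and queuing delays.
DELAY = {
    "device": [0.9, 1.2, 0.7, 1.5],
    "edge":   [0.4, 0.5, 0.3, 0.6],
    "cloud":  [0.6, 0.8, 0.5, 0.9],
}
TIERS = list(DELAY)
N_TASKS = 4
EDGE_CAPACITY = 2  # assumed limit on concurrent edge tasks

def total_delay(schedule):
    """Sum of per-task delays; infeasible schedules are penalized."""
    cost = sum(DELAY[tier][i] for i, tier in enumerate(schedule))
    if schedule.count("edge") > EDGE_CAPACITY:
        cost += 100.0  # large penalty for exceeding edge capacity
    return cost

def anneal(t0=1.0, cooling=0.95, steps=2000, seed=0):
    rng = random.Random(seed)
    current = [rng.choice(TIERS) for _ in range(N_TASKS)]
    best, temp = list(current), t0
    for _ in range(steps):
        # Neighbor: move one randomly chosen task to another tier.
        cand = list(current)
        cand[rng.randrange(N_TASKS)] = rng.choice(TIERS)
        delta = total_delay(cand) - total_delay(current)
        # Accept improvements always; worse moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = cand
        if total_delay(current) < total_delay(best):
            best = list(current)
        temp *= cooling
    return best, total_delay(best)

schedule, delay = anneal()
```

The Boltzmann acceptance of occasional uphill moves is what lets the search escape local minima early on, while the geometric cooling schedule makes it increasingly greedy as it converges.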

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022. p. 159-167
Keywords [en]
task offloading, task scheduling, edge/cloud computing, simulated annealing, time sensitivity
National Category
Telecommunications; Communication Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kau:diva-92092
DOI: 10.1109/CloudNet55617.2022.9978900
Scopus ID: 2-s2.0-85146121455
ISBN: 9781665486279 (print)
OAI: oai:DiVA.org:kau-92092
DiVA, id: diva2:1700483
Conference
11th IEEE International Conference on Cloud Networking, (CloudNet), Paris, France, November 7-10, 2022.
Available from: 2022-10-01 Created: 2022-10-01 Last updated: 2026-02-12 Bibliographically approved
In thesis
1. Offline Task Scheduling in a Three-layer Edge-Cloud Architecture
2023 (English)Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Internet of Things (IoT) devices are increasingly being used everywhere, from the factory to the hospital to the home to the car. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks. Offloading tasks to the cloud, however, introduces several obstacles: an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of IoT devices.

In this thesis, we model the offloading problem in an edge-cloud infrastructure as a Mixed-Integer Linear Programming (MILP) problem and look for efficient optimization techniques to tackle it, aiming to minimize the total delay of the system after completing all tasks of all services requested by all users. To accomplish this, we use exact approaches, such as simplex, to find a solution to the MILP problem. Because exact techniques such as simplex require substantial processing resources and a considerable amount of time to solve the problem, we propose several heuristic and meta-heuristic methods and use the simplex findings as a benchmark to evaluate them. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results. Meta-heuristics are slower than heuristics and may require more computation, but they are more generic and capable of handling a variety of problems. We therefore propose two meta-heuristic approaches, one based on a genetic algorithm and the other on simulated annealing. Compared to the heuristic algorithms, the genetic algorithm-based method yields a more accurate solution but requires more time and resources to solve the MILP, while the simulated annealing-based method is a better fit for the problem since it produces more accurate solutions in less time than the genetic algorithm-based method.
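The MILP referred to above can be sketched in generic form. The symbols below (assignment variable \(x_{ij}\), delay \(d_{ij}\), resource demand \(r_i\), capacity \(C_j\)) are illustrative placeholders rather than the thesis's actual notation:

```latex
\begin{aligned}
\min_{x} \quad & \sum_{i \in \mathcal{T}} \sum_{j \in \mathcal{N}} d_{ij}\, x_{ij}
  && \text{total delay over tasks } \mathcal{T} \text{ and nodes } \mathcal{N} \\
\text{s.t.} \quad & \sum_{j \in \mathcal{N}} x_{ij} = 1 \quad \forall i \in \mathcal{T}
  && \text{each task is placed on exactly one node} \\
& \sum_{i \in \mathcal{T}} r_i\, x_{ij} \le C_j \quad \forall j \in \mathcal{N}
  && \text{node capacity} \\
& x_{ij} \in \{0, 1\}
\end{aligned}
```

The binary assignment variables are what make the program mixed-integer rather than a plain LP, and what drive the exponential worst-case cost of exact solvers that motivates the heuristic and meta-heuristic alternatives.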

Abstract [en]

Internet of Things (IoT) devices are increasingly being used everywhere. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks. Offloading to the cloud, however, requires an excessive amount of data to be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of IoT devices.

In this thesis, the offloading problem in an edge-cloud infrastructure is modeled as a Mixed-Integer Linear Programming (MILP) problem, and efficient optimization techniques seeking to minimize the total delay of the system are employed to address it. To accomplish this, exact approaches are used to find a solution to the MILP problem. Because exact techniques require substantial processing resources and a considerable amount of time to solve the problem, several heuristic and meta-heuristic methods are proposed. Heuristics are quick and generate workable solutions in certain circumstances but cannot guarantee optimal results, while meta-heuristics are slower than heuristics and may require more computation but are more generic and capable of handling a variety of problems.

Place, publisher, year, edition, pages
Karlstads universitet, 2023. p. 23
Series
Karlstad University Studies, ISSN 1403-8099 ; 2023:16
Keywords
task offloading, task scheduling, edge computing, internet of things, optimization techniques, heuristics, meta-heuristics
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-94438 (URN)
978-91-7867-374-2 (ISBN)
978-91-7867-375-9 (ISBN)
Presentation
2023-06-05, Eva Erikssonsalen, 21A 342, Karlstad, 13:00 (English)
Opponent
Supervisors
Available from: 2023-05-15 Created: 2023-04-26 Last updated: 2026-02-12 Bibliographically approved
2. Task Scheduling and Offloading in IoT–Edge–Cloud Systems: From Offline Optimization to Online Learning
2026 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Internet of Things (IoT) devices are increasingly deployed in environments such as factories, hospitals, homes, and vehicles. Due to limited processing capabilities, these devices often rely on cloud servers for task execution. However, cloud offloading introduces major challenges, including high data transmission overhead, network congestion, increased delays, and elevated end-to-end latency, particularly for latency-sensitive and data-intensive applications such as industrial control, smart city analytics, and healthcare monitoring. Edge computing mitigates these issues by moving computation closer to end users. Given the limited resources at edge nodes, efficient task scheduling across IoT devices, edge servers, and cloud nodes is essential to meet application delay requirements.

This thesis presents a framework for adaptive and delay-efficient task scheduling across the device-edge-cloud continuum, addressing both offline optimization and online learning. The scheduling problem is formulated as a Mixed-Integer Linear Program (MILP) to minimize end-to-end service delay. While exact optimization using CPLEX provides benchmark solutions, it becomes computationally prohibitive at scale. To enable scalability, heuristics and two meta-heuristic approaches, based on a genetic algorithm (GA) and simulated annealing (SA), are developed. The GA-based method improves solution quality but incurs higher runtime, whereas the SA-based method achieves near-optimal solutions at substantially lower computational cost. Building on this foundation, two online schedulers are proposed for time-varying workloads and partial system knowledge. One extends simulated annealing for online hierarchical multi-access edge computing, while the other employs a cooperative multi-agent reinforcement learning framework using Deep Q-Networks and Double DQN with decentralized execution. Simulation results show that the proposed methods reduce average latency and improve deadline satisfaction compared to state-of-the-art baselines, demonstrating their effectiveness for scalable, low-latency IoT service scheduling.
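The learning-based scheduler's value updates rest on the Double DQN target. A minimal tabular sketch can show the key selection/evaluation split; note that the thesis uses neural networks, and the state/action sizes, reward, and transition below are invented purely for illustration.

```python
import numpy as np

# Toy sizes; in the thesis the Q-functions are neural networks and
# states/actions come from the edge-scheduling environment.
N_STATES, N_ACTIONS = 5, 3
rng = np.random.default_rng(0)
q_online = rng.normal(size=(N_STATES, N_ACTIONS))  # updated every step
q_target = q_online.copy()                         # periodically synced copy

def double_dqn_target(reward, next_state, gamma=0.9):
    """Double DQN target: the online network *selects* the next action,
    the target network *evaluates* it, reducing Q-value overestimation."""
    a_star = int(np.argmax(q_online[next_state]))
    return reward + gamma * q_target[next_state, a_star]

def td_update(state, action, reward, next_state, lr=0.1):
    """One temporal-difference step toward the Double DQN target."""
    target = double_dqn_target(reward, next_state)
    q_online[state, action] += lr * (target - q_online[state, action])

# One illustrative transition: (state, action, reward, next_state).
s, a, r, s2 = 0, 1, 1.0, 2
before = q_online[s, a]
td_update(s, a, r, s2)
after = q_online[s, a]
```

Decoupling action selection from action evaluation is the entire difference from vanilla DQN, which uses the target network for both and therefore tends to overestimate values under noisy rewards.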

Abstract [en]

Internet of Things (IoT) devices are widely deployed in environments such as factories, hospitals, homes, and vehicles. Due to limited processing capabilities, these devices often offload tasks to the cloud, which can cause high communication overhead, network congestion, and increased end-to-end latency, particularly for latency-sensitive applications such as industrial control, smart city analytics, and healthcare monitoring. Edge computing alleviates these issues by bringing computation closer to end users, but its limited resources require efficient task scheduling across device, edge, and cloud tiers.

This thesis proposes an adaptive and delay-efficient scheduling framework for the device-edge-cloud continuum. The problem is first formulated as a Mixed-Integer Linear Program to minimize service delay, with CPLEX used to obtain benchmark solutions. To improve scalability, heuristic methods and two meta-heuristic approaches based on a genetic algorithm (GA) and simulated annealing (SA) are developed, where the SA-based method achieves near-optimal performance with significantly lower computational cost. Building on this, two online schedulers are introduced: one extending SA for online hierarchical edge computing, and another based on multi-agent reinforcement learning using Deep Q-Networks and Double DQN. Simulation results show notable latency reduction and improved deadline satisfaction compared to state-of-the-art baselines, demonstrating the effectiveness of the proposed approach.

Place, publisher, year, edition, pages
Karlstad: Karlstads universitet, 2026. p. 50
Series
Karlstad University Studies, ISSN 1403-8099 ; 2026:9
Keywords
task offloading, task scheduling, edge computing, optimization techniques, heuristics, meta-heuristics, reinforcement learning
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-108215 (URN)
10.59217/sbpt5891 (DOI)
978-91-7867-662-0 (ISBN)
978-91-7867-663-7 (ISBN)
Public defence
2026-03-18, Agardh lecture hall, 11D257, Karlstads universitet, Karlstad, 13:15 (English)
Opponent
Supervisors
Available from: 2026-02-18 Created: 2026-01-15 Last updated: 2026-02-18 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Mahjoubi, Ayeh; Grinnemo, Karl-Johan; Taheri, Javid

Search in DiVA

By author/editor
Mahjoubi, Ayeh; Grinnemo, Karl-Johan; Taheri, Javid
By organisation
Department of Mathematics and Computer Science (from 2013)
Telecommunications; Communication Systems

Search outside of DiVA

Google
Google Scholar
