Publications (10 of 188)
Khah, Y. P., Shirvani, M. H. & Taheri, J. (2026). A survey study on meta-heuristic-based feature selection approaches of intrusion detection systems in distributed networks. Computer Standards & Interfaces, 96, Article ID 104074.
2026 (English). In: Computer Standards & Interfaces, ISSN 0920-5489, E-ISSN 1872-7018, Vol. 96, article id 104074. Article in journal (Refereed). Published.
Abstract [en]

With the emergence of IoT and the expanding coverage of distributed networks such as cloud and fog, security attacks and breaches are becoming distributed as well. Cybersecurity attacks can disrupt business continuity or expose critical data, leading to significant failures. Intrusion Detection Systems (IDSs) play a critical role in such networks: they detect attacks as early as possible so that countermeasures can be taken if necessary. Artificial intelligence techniques, both machine-learning-based and meta-heuristic-based, are being applied pervasively to build smarter IDS components from logged network traffic. The traffic is recorded in the form of datasets for further analysis, so that traffic behavior can be learned from past observations. Feature selection is a prominent step in creating the prediction model that recognizes whether a network connection is normal or not. Since feature selection in large datasets is NP-Hard and purely heuristic-based approaches are not as efficient as desired, meta-heuristic-based approaches have attracted research attention for building highly accurate prediction models. To address this, the paper first presents a subjective classification of the published literature, and then surveys meta-heuristic-based feature selection approaches for building efficient IDSs. It examines the literature from several angles and compares the works in terms of the metrics they use, giving readers broad insight into their advantages, challenges, and limitations. By highlighting research gaps, it can pave the way for further improvement by interested researchers in the field.

Place, publisher, year, edition, pages
Elsevier, 2026
Keywords
Intrusion detection system (IDS), Fog computing, Feature selection, Metaheuristic algorithms, Network security
National Category
Computer Sciences Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-107348 (URN); 10.1016/j.csi.2025.104074 (DOI); 001589093600001 (); 2-s2.0-105017588599 (Scopus ID)
Available from: 2025-10-21. Created: 2025-10-21. Last updated: 2025-10-21. Bibliographically approved.
HoseinyFarahabady, M. R., Taheri, J. & Zomaya, A. Y. (2025). Accelerating Key-Value Data Structures Using AVX-512 SIMD Extensions. In: Proceedings - IEEE International Conference on Cluster Computing. Paper presented at IEEE International Conference on Cluster Computing (CLUSTER), Edinburgh, United Kingdom, September 2-5, 2025. Institute of Electrical and Electronics Engineers (IEEE)
2025 (English). In: Proceedings - IEEE International Conference on Cluster Computing, Institute of Electrical and Electronics Engineers (IEEE), 2025. Conference paper, Published paper (Refereed).
Abstract [en]

Advanced Vector Extensions 512 (AVX-512), a modern SIMD instruction set for x86 architectures, enables data-level parallelism through 512-bit wide ZMM registers capable of processing multiple data elements concurrently within a single instruction cycle. In this study, we present a high-throughput, lock-free, in-memory architecture for key-value data stores that exploits AVX-512 vector operations to accelerate fundamental operations such as insertion and lookup. Our design introduces an optimized memory layout that partitions the key space into two disjoint regions (primary and secondary) and employs three independent hash functions to identify candidate slots. This asymmetric layout improves key distribution, reduces collision probability, and enhances overall lookup efficiency. Experimental evaluation shows that this strategy yields the lowest insertion failure rate among tested memory partitioning schemes. By leveraging AVX-512 instructions in combination with the most optimized memory layout, our implementation achieves insertion throughput within 6% of Intel TBB’s highly optimized multithreaded hash map, despite avoiding explicit synchronization or thread-level parallelism. Under workloads with 550 million entries and a 90% miss rate, our approach delivers 4.0-5.1x speedup over standard STL, Boost, Robin-Hood, and Abseil hash maps, and up to 2.5x improvement relative to TBB and Abseil. These gains are consistently observed for both 32-bit and 64-bit floating-point key types. The results confirm the viability of AVX-512-centric designs as a cost-effective alternative to thread-level parallelism, particularly in environments where minimizing synchronization overhead and ensuring deterministic execution are critical. Our findings suggest a paradigm shift in CPU and system architecture, emphasizing wider vector units and improved memory bandwidth utilization as primary levers for scalable high-performance computing.
These findings suggest that future extensions of AVX-512 capabilities, such as non-blocking memory loads, expanded vector registers, and asynchronous prefetching, could enhance the efficiency of data-intensive workloads. 
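The two-region layout with three independent hash functions described in the abstract can be sketched without the SIMD part; the hash constants, region split, and function names below are illustrative assumptions rather than the paper's published design, and the real implementation additionally probes many slots per AVX-512 instruction:

```python
def candidate_slots(key: int, primary_size: int, secondary_size: int):
    """Three candidate slots for `key`: two in the primary region,
    one in the secondary overflow region (the exact split and hash
    constants here are assumptions, not the paper's)."""
    h1 = (key * 2654435761) % primary_size                 # Knuth multiplicative hash
    h2 = ((key ^ (key >> 16)) * 0x45d9f3b) % primary_size  # xorshift-multiply mix
    h3 = primary_size + (((key * 0x9E3779B97F4A7C15) >> 33) % secondary_size)
    return h1, h2, h3

def insert(table, key, value, primary_size):
    secondary_size = len(table) - primary_size
    for slot in candidate_slots(key, primary_size, secondary_size):
        if table[slot] is None or table[slot][0] == key:
            table[slot] = (key, value)
            return True
    return False  # insertion failure; a real store would evict or resize

def lookup(table, key, primary_size):
    secondary_size = len(table) - primary_size
    for slot in candidate_slots(key, primary_size, secondary_size):
        if table[slot] is not None and table[slot][0] == key:
            return table[slot][1]
    return None

table = [None] * 64  # e.g. 48 primary + 16 secondary slots
for k in range(20):
    insert(table, k, k * k, primary_size=48)
```

A vectorized variant would compare a whole block of candidate keys per lookup with AVX-512 compare-mask intrinsics (e.g. `_mm512_cmpeq_epi64_mask`); this scalar sketch shows only the slot-selection logic.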

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Data structures, Digital arithmetic, Hash functions, Memory architecture, Multitasking, Program processors, Table lookup, Throughput, Vectors, Advanced vector extension 512 intrinsic, CPU-based key-value data structure, Data access, Hash table, Hash table acceleration, High performance computing, Key values, Layout designs, Low latency, Low-latency data access, Memory layout, Memory layout design, Multiple data, Multiple data (SIMD) parallelism, Performance computing, Single instruction, Value data, Vectorized hashing, Failure analysis
National Category
Computer Systems Computer Engineering Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-107584 (URN); 10.1109/CLUSTER59342.2025.11186494 (DOI); 2-s2.0-105019791230 (Scopus ID); 979-8-3315-3019-8 (ISBN); 979-8-3315-3020-4 (ISBN)
Conference
IEEE International Conference on Cluster Computing (CLUSTER), Edinburgh, United Kingdom, September 2-5, 2025.
Funder
Knowledge Foundation
Available from: 2025-11-18. Created: 2025-11-18. Last updated: 2025-11-18. Bibliographically approved.
Garshasbi Herabad, M., Taheri, J., Ahmed, B. S. & Curescu, C. (2025). E-PSOGA: An Enhanced Hybrid Metaheuristic for Optimal Edge-to-Cloud Placement of Services with Multi-Version Components. IEEE Access, 13, 151170-151188
2025 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 13, p. 151170-151188. Article in journal (Refereed). Published.
Abstract [en]

The evolution of edge-to-cloud networks has significantly increased the complexity of determining optimal service placement across these infrastructures, a challenge identified as an NP-complete problem. To address such problems, exact algorithms are impractical at larger scales owing to their computational demands. Heuristics exhibit faster runtimes but lower solution quality, whereas metaheuristics provide high-quality solutions at the cost of increased runtime. In this study, service placement in edge-to-cloud systems is investigated and formulated as an optimisation problem, where each service component is provided by different vendors and is available in multiple versions. The inclusion of multi-version components adds an additional layer of complexity, making the placement problem even more challenging. Specifically, this study addresses the service placement problem in Augmented Reality (AR)- and Virtual Reality (VR)-based remote repair and maintenance use cases, where service response time and system reliability are critical performance metrics. To optimise both metrics, we propose a novel hybrid metaheuristic algorithm (E-PSOGA) which combines the fast convergence of Particle Swarm Optimisation (PSO) with the global search capabilities of Genetic Algorithms (GA). A custom healing operator is also introduced to further enhance the solution quality and reduce the algorithm runtime. A comprehensive performance assessment shows that E-PSOGA reduces the response time by 37% compared with the other implemented baseline algorithms. E-PSOGA achieved 98% platform and 97% service reliability while maintaining a reasonable algorithm runtime. These results indicate that the proposed approach is well-suited for large-scale and time-sensitive scenarios requiring both computational efficiency and high solution quality. 
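For readers unfamiliar with PSO/GA hybrids, the idea sketched in the abstract (PSO velocity updates for fast convergence, GA crossover and mutation on the worst particles for global search) can be written as a toy loop; the operators, parameters, and objective below are placeholders rather than E-PSOGA's published design, and the healing operator is omitted:

```python
import random

def hybrid_pso_ga(objective, dim=8, swarm=20, iters=100, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=objective)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):  # standard PSO velocity/position update
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=objective)[:]
        # GA phase: replace the worst quarter with offspring of top personal bests
        order = sorted(range(swarm), key=lambda i: objective(pos[i]))
        top = order[:5]
        for i in order[-swarm // 4:]:
            a, b = pbest[rng.choice(top)], pbest[rng.choice(top)]
            cut = rng.randrange(1, dim)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.1:             # small mutation probability
                child[rng.randrange(dim)] += rng.gauss(0, 0.5)
            pos[i] = child
    return gbest, objective(gbest)

best, value = hybrid_pso_ga(lambda x: sum(v * v for v in x))  # toy sphere objective
```

In the placement setting, a particle would encode component-to-node (and version) assignments instead of real coordinates, and the healing operator would repair infeasible assignments after each update.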

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Augmented reality, Complex networks, Computational complexity, Computational efficiency, Heuristic algorithms, Quality of service, Reliability, Repair, Response time (computer systems), Virtual reality, Cloud-computing, Edge-to-cloud computing, Multi-version, Optimal service placement, Particle swarm, Particle swarm optimization, Runtimes, Service placements, Solution quality, Swarm optimization, Genetic algorithms, Particle swarm optimization (PSO)
National Category
Computer Sciences Computer Systems Telecommunications
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-106828 (URN); 10.1109/ACCESS.2025.3603329 (DOI); 001562596000008 (); 2-s2.0-105014473015 (Scopus ID)
Available from: 2025-09-08. Created: 2025-09-08. Last updated: 2025-10-16. Bibliographically approved.
Taghinezhad-Niar, A. & Taheri, J. (2025). Fault-Tolerant Cost-Efficient Scheduling for Energy and Deadline-Constrained IoT Workflows in Edge-Cloud Continuum. IEEE Transactions on Services Computing, 18(5), 2892-2903
2025 (English). In: IEEE Transactions on Services Computing, E-ISSN 1939-1374, Vol. 18, no 5, p. 2892-2903. Article in journal (Refereed). Published.
Abstract [en]

Edge computing brings computation closer to Internet-of-Things (IoT) data sources, reducing latency but increasing energy consumption and susceptibility to node failures. The cloud platform provides extensive computational capabilities, but comes with significant costs and communication delays due to network congestion. The edge-cloud continuum strategically combines these approaches to mitigate their individual drawbacks. However, effectively scheduling IoT workflows to minimize costs while adhering to strict requirements for latency, energy efficiency, and reliability remains a major challenge in real-time IoT applications. To address these challenges, we propose the Reliable Energy-constrained Cost-aware Real-time (RECR) algorithm for optimizing IoT workflow scheduling across the edge-cloud continuum. RECR minimizes monetary costs and enhances reliability while adhering to strict energy and deadline constraints. We also introduce RECR-D, a fault-tolerant extension that employs adaptive task duplication to manage transient and permanent failures, with reliability rigorously modeled using Continuous-Time Markov Chains (CTMCs) to integrate dynamic failure behavior. Extensive simulations demonstrate that RECR reduces workflow monetary costs by approximately 21% and improves deadline adherence by 37% compared to state-of-the-art algorithms. Furthermore, RECR-D improves compliance with reliability and energy constraints by 27% and by up to 208%, respectively, highlighting its robust performance in dynamic, failure-prone environments. These contributions significantly advance workflow management for IoT applications, proving crucial for real-time traffic control and video analytics in smart cities, ensuring timely processing and lower costs. They are also vital for remote patient monitoring and medical imaging analysis in healthcare, improving reliability and meeting deadlines for patient safety.
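The simplest instance of the CTMC failure modelling mentioned above is a two-state (up/down) chain with failure rate λ and repair rate μ; the rates below are illustrative only, and RECR-D's actual model is richer (it distinguishes transient from permanent failures):

```python
import math

def steady_state_availability(fail_rate: float, repair_rate: float) -> float:
    """Long-run fraction of time the node is up: mu / (lambda + mu)."""
    return repair_rate / (fail_rate + repair_rate)

def reliability(fail_rate: float, t: float) -> float:
    """Probability of surviving to time t with no failure: exp(-lambda * t)."""
    return math.exp(-fail_rate * t)

# Illustrative rates: one failure per 1000 h on average, 10 h mean repair.
A = steady_state_availability(1 / 1000, 1 / 10)
```

A scheduler can use such quantities to decide where (and whether) to duplicate a task: duplicating on a second node with independent failures raises the success probability from p to 1 - (1 - p)^2, at the cost of extra energy and money.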

Place, publisher, year, edition, pages
IEEE, 2025
Keywords
Computational cost, Constrained optimization, Continuous time systems, Costs, Edge computing, Fault tolerance, Green computing, Markov processes, Medical imaging, Scheduling algorithms, Traffic congestion, Workflow management, Cost-aware, Edge, Edge clouds, Energy-constrained, Fault-tolerant, Latency, Real- time, Reliable energy, Work-flows, Workflow scheduling, Clouds
National Category
Computer Sciences Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-106747 (URN); 10.1109/TSC.2025.3599497 (DOI); 001591693600029 (); 2-s2.0-105013296423 (Scopus ID)
Available from: 2025-09-03. Created: 2025-09-03. Last updated: 2025-11-28. Bibliographically approved.
Mahmoudi, A., Farzinvash, L. & Taheri, J. (2025). GPTOR: Gridded GA and PSO-based task offloading and ordering in IoT-edge-cloud computing. Results in Engineering (RINENG), 25, Article ID 104196.
2025 (English). In: Results in Engineering (RINENG), ISSN 2590-1230, Vol. 25, article id 104196. Article in journal (Refereed). Published.
Abstract [en]

Edge computing is a key technology that provides computational resources close to IoT devices. One of the primary challenges in edge computing is determining whether to execute computation-intensive and time-sensitive tasks locally, or to offload them to edge and cloud computing resources, as well as to order them for execution according to their deadlines. Various offloading algorithms have been proposed for these systems, each with its own advantages and disadvantages. Several studies did not exploit all the IoT, edge, and cloud layers, whereas others only considered a few criteria for decision making on task offloading. Other approaches used greedy methods that could not provide high-quality solutions or employed standard optimization algorithms, which took a long time to converge. In this study, we propose an improved genetic algorithm for joint task offloading and ordering to distribute tasks across the IoT, edge, and cloud layers. It includes a novel population initialization scheme that uses various methods, including particle swarm optimization. To increase the convergence speed, the proposed algorithm (GPTOR) splits the solution space into several areas, which is called gridding. The simulation results illustrate that our algorithm outperforms previous schemes by 41.07%, 26.25%, and 28.33% in terms of average delay, monetary cost, and energy consumption, respectively. 
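The gridding idea (splitting the solution space into areas that are searched separately, so the population covers the whole space) can be illustrated with a toy one-dimensional example; the cell count and the naive per-cell random search below are invented for illustration and are not GPTOR's actual operators:

```python
import random

def gridded_search(objective, low, high, cells=4, samples_per_cell=50, seed=7):
    """Partition [low, high] into `cells` equal intervals and search each
    one independently, keeping the overall best point found."""
    rng = random.Random(seed)
    width = (high - low) / cells
    best_x, best_val = None, float("inf")
    for c in range(cells):
        lo, hi = low + c * width, low + (c + 1) * width
        for _ in range(samples_per_cell):  # cheap random search within this cell
            x = rng.uniform(lo, hi)
            v = objective(x)
            if v < best_val:
                best_x, best_val = x, v
    return best_x, best_val

x, v = gridded_search(lambda x: (x - 3.2) ** 2, 0.0, 10.0)  # toy objective
```

In GPTOR the "points" are task-to-layer assignments and each grid area seeds part of the GA population (some of it via PSO), which is what speeds up convergence relative to a single undivided population.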

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Cloud platforms, Computation offloading, Mobile edge computing, Optimization algorithms, Particle swarm optimization (PSO), Cloud layers, Cloud-computing, Customized operator, Edge computing, Gridding, Particle swarm, Particle swarm optimization, Swarm optimization, Task offloading, Task orders, Genetic algorithms
National Category
Computer Sciences Communication Systems Robotics and automation
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-103403 (URN); 10.1016/j.rineng.2025.104196 (DOI); 001423833600001 (); 2-s2.0-85216897603 (Scopus ID)
Available from: 2025-02-25. Created: 2025-02-25. Last updated: 2025-10-16. Bibliographically approved.
Molaei, S., Sabaei, M. & Taheri, J. (2025). MRM-PSO: An enhanced particle swarm optimization technique for resource management in highly dynamic edge computing environments. Ad hoc networks, 178, Article ID 103952.
2025 (English). In: Ad hoc networks, ISSN 1570-8705, E-ISSN 1570-8713, Vol. 178, article id 103952. Article in journal (Refereed). Published.
Abstract [en]

The resource constraints of Internet of Things (IoT) devices pose significant hurdles to delay-sensitive applications that operate in dynamic and wireless settings. Since offloading tasks to cloud servers can be hindered by security concerns and latency issues, edge and fog computing bring computation closer to data sources. Given their inherently distributed and resource-constrained nature, edge/fog-enabled platforms require more advanced resource-management solutions to address the numerous constraints encountered in dynamic and wireless environments. This study introduces an innovative resource management algorithm designed for dynamic edge/fog computing environments, tailored to real-world applications, with the objective of enhancing delay performance through optimal container placement. The resource management problem incorporates mobility patterns in wireless settings to reduce migration delay and the processing history of edge/fog nodes to provide a novel method for computing processing delay, resulting in a combined optimization problem expressed in an integer linear programming (ILP) format. To address the formulated NP-Hard problem, we developed a low-complexity Metaheuristic Resource Management algorithm based on Particle Swarm Optimization (MRM-PSO) with effective particle modelling. Our experimental findings demonstrate that greedy heuristics and genetic algorithms (GA) are inadequate for efficiently solving the given problem, whereas our proposed MRM-PSO algorithm efficiently locates near-optimal solutions within reasonable execution times when compared to exact solvers. MRM-PSO reduces execution time by up to 663.82% in the worst case and 2307.5% in the best case. Furthermore, it attains a delay that is just 0.98% higher in the best case and 5.54% higher in the worst case compared to the optimal solution.

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Edge/fog computing, resource management, container placement, optimization, Particle Swarm Optimization (PSO)
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-105895 (URN); 10.1016/j.adhoc.2025.103952 (DOI); 001511785200002 (); 2-s2.0-105007971256 (Scopus ID)
Available from: 2025-06-26. Created: 2025-06-26. Last updated: 2025-10-16. Bibliographically approved.
Jagannathan, S., Sharma, Y. & Taheri, J. (2025). Towards Generic Failure-Prediction Models in Large-Scale Distributed Computing Systems. Electronics, 14(17), Article ID 3386.
2025 (English). In: Electronics, E-ISSN 2079-9292, Vol. 14, no 17, article id 3386. Article in journal (Refereed). Published.
Abstract [en]

The increasing complexity of Distributed Computing (DC) systems requires advanced failure-prediction models to enhance reliability and efficiency. This study proposes a comprehensive methodology for developing generic machine learning (ML) models capable of cross-layer and cross-platform failure prediction without requiring platform-specific retraining. Using the Grid5000 failure dataset from the Failure Trace Archive (FTA), we explored Linear and Logistic Regression, Random Forest, and XGBoost to predict three critical metrics: Time Between Failures (TBF), Time to Return/Repair (TTR), and Failing Node Identification (FNI). Our approach involved extensive exploratory data analysis (EDA), statistical examination of failure patterns, and model evaluation across the cluster, site, and system levels. The results demonstrate that XGBoost consistently outperforms the other models, achieving near-perfect accuracy (approaching 100%) for TBF and FNI, with robust generalisability across diverse DC environments. In addition, we introduce a hierarchical DC architecture that integrates these failure-prediction models. In the form of a use case, we also demonstrate how service providers can use these prediction models to balance service reliability and cost.

Place, publisher, year, edition, pages
MDPI, 2025
Keywords
distributed computing, fault detection, machine learning algorithms, prediction algorithms, performance evaluation
National Category
Computer Sciences Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-107038 (URN); 10.3390/electronics14173386 (DOI); 001569639800001 (); 2-s2.0-105016627350 (Scopus ID)
Funder
Knowledge Foundation
Available from: 2025-09-26. Created: 2025-09-26. Last updated: 2025-11-03. Bibliographically approved.
Xiang, Z., Zheng, Y., Wang, D., Taheri, J., Zheng, Z. & Guo, M. (2024). Cost-Effective and Robust Service Provisioning in Multi-Access Edge Computing. IEEE Transactions on Parallel and Distributed Systems, 35(10), 1765-1779
2024 (English). In: IEEE Transactions on Parallel and Distributed Systems, ISSN 1045-9219, E-ISSN 1558-2183, Vol. 35, no 10, p. 1765-1779. Article in journal (Refereed). Published.
Abstract [en]

With the development of multiaccess edge computing (MEC) technology, an increasing number of researchers and developers are deploying their computation-intensive and IO-intensive services (especially AI services) on edge devices. These devices, being close to end users, provide better performance in mobile environments. By constructing a service provisioning system at the network edge, latency is significantly reduced due to short-distance communication with edge servers. However, since the MEC-based service provisioning system is resource-sensitive and the network may be unstable, careful resource allocation and traffic scheduling strategies are essential. This paper investigates and quantifies the cost-effectiveness and robustness of the MEC-based service provisioning system with the applied resource allocation and traffic scheduling strategies. Based on this analysis, a cost-effective and robust service provisioning algorithm, termed CERA, is proposed to minimize deployment costs while maintaining system robustness. Extensive experiments are conducted to compare the proposed approach with well-known baseline algorithms and evaluate factors impacting the results. The findings demonstrate that CERA achieves at least 15.9% better performance than other baseline algorithms across various instances.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Servers, Resource management, Costs, Robustness, Artificial intelligence, Power system protection, Power system faults, Edge computing, resource allocation, service computing, traffic scheduling
National Category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-101622 (URN); 10.1109/TPDS.2024.3435929 (DOI); 001291895800002 (); 2-s2.0-85200245270 (Scopus ID)
Funder
Knowledge Foundation
Available from: 2024-09-13. Created: 2024-09-13. Last updated: 2025-10-16. Bibliographically approved.
Gokan Khan, M., Taheri, J., Kassler, A. & Boodaghian Asl, A. (2024). Graph Attention Networks and Deep Q-Learning for Service Mesh Optimization: A Digital Twinning Approach. In: Valenti M., Reed D., Torres M. (Ed.), Proceedings - IEEE International Conference on Communications. Paper presented at IEEE International Conference on Communications (ICC), Denver, USA, June 9-13, 2024 (pp. 2913-2918). Institute of Electrical and Electronics Engineers (IEEE)
2024 (English). In: Proceedings - IEEE International Conference on Communications / [ed] Valenti M., Reed D., Torres M., Institute of Electrical and Electronics Engineers (IEEE), 2024, p. 2913-2918. Conference paper, Published paper (Refereed).
Abstract [en]

In the realm of cloud native environments, Kubernetes has emerged as the de facto orchestration system for containers, and the service mesh architecture, with its interconnected microservices, has become increasingly prominent. Efficient scheduling and resource allocation for these microservices play a pivotal role in achieving high performance and maintaining system reliability. In this paper, we introduce a novel approach for container scheduling within Kubernetes clusters, leveraging Graph Attention Networks (GATs) for representation learning. Our proposed method captures the intricate dependencies among containers and services by constructing a representation graph. The deep Q-learning algorithm is then employed to optimize scheduling decisions, focusing on container-to-node placements, CPU request-response allocation, and adherence to node affinity and anti-affinity rules. Our experiments demonstrate that our GATs-based method outperforms traditional scheduling strategies, leading to enhanced resource utilization, reduced service latency, and improved overall system throughput. The insights gleaned from this study pave the way for a new frontier in cloud native performance optimization and offer tangible benefits to industries adopting microservice-based architectures.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
component, formatting, insert, style, styling
National Category
Computer Sciences Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-97430 (URN); 10.1109/ICC51166.2024.10622616 (DOI); 2-s2.0-85202817543 (Scopus ID); 978-1-7281-9055-6 (ISBN); 978-1-7281-9054-9 (ISBN)
Conference
IEEE International Conference on Communications (ICC), Denver, USA, June 9-13, 2024.
Note

This article was included as a manuscript in the doctoral thesis entitled "Unchaining Microservice Chains: Machine Learning Driven Optimization in Cloud Native Systems" KUS 2023:35.

Available from: 2023-11-20. Created: 2023-11-20. Last updated: 2025-10-16. Bibliographically approved.
Galletta, A., Taheri, J., Celesti, A., Fazio, M. & Villari, M. (2024). Investigating the Applicability of Nested Secret Share for Drone Fleet Photo Storage. IEEE Transactions on Mobile Computing, 23(4), 2671-2683
2024 (English). In: IEEE Transactions on Mobile Computing, ISSN 1536-1233, E-ISSN 1558-0660, Vol. 23, no 4, p. 2671-2683. Article in journal (Refereed). Published.
Abstract [en]

Military drones can be used for surveillance or spying on enemies. They, however, can be either destroyed or captured, and therefore the photos contained inside them can be lost or revealed to the attacker. A possible solution to this problem is to adopt Secret Share (SS) techniques to split photos into several sections/chunks and distribute them among a fleet of drones. The advantages of using such a technique are twofold. First, no single drone contains any photo in its entirety; thus even when a drone is captured, the attacker cannot discover any photos. Second, the storage requirements of drones can be simplified, and thus cheaper drones can be produced for such missions. In this scenario, a fleet of drones consists of t+r drones, where t (threshold) is the minimum number of drones required to reconstruct the photos, and r (redundancy) is the maximum number of lost drones the system can tolerate. Finding the optimal configuration of t+r is a formidable task. This configuration is typically rigid and hard to modify in order to fit the requirements of specific missions. In this work, we addressed such an issue and proposed the adoption of a flexible Nested Secret Share (NSS) technique. In our experiments, we compared two of the major SS algorithms (Shamir's schema and the Redundant Residue Number System (RRNS)) with their Two-Level NSS (2NSS) variants to store/retrieve photos. Results showed that Redundant Residue Number System (RRNS) is more suitable for a drone fleet scenario.
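A minimal (t, n) Shamir scheme over a prime field shows the split/reconstruct mechanics the paper builds on; the prime, parameters, and treatment of the secret as a single integer are simplifications (the drone fleet shares photo chunks, and the paper also evaluates RRNS and nested variants):

```python
import random

P = 2**61 - 1  # a Mersenne prime, large enough for this demo

def split(secret: int, t: int, n: int, rng=random.Random(42)):
    """Encode `secret` as n points on a random degree-(t-1) polynomial
    with constant term `secret`; any t points recover it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0, needing any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, t=3, n=5)  # any 3 of 5 drones suffice
assert reconstruct(shares[:3]) == 123456789
```

With t = 3 and n = 5, the fleet tolerates r = 2 lost drones, and an attacker capturing fewer than 3 drones learns nothing about the secret; the NSS idea in the paper nests such schemes to make the t/r configuration more flexible.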

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Drones, Cryptography, Mobile computing, Cloud computing, Base stations, Task analysis, Storage management, secret share algorithms, nested secret share algorithms, redundant residue number system, Shamir schema
National Category
Computer and Information Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-99491 (URN); 10.1109/TMC.2023.3263115 (DOI); 001181480700030 ()
Available from: 2024-04-26. Created: 2024-04-26. Last updated: 2025-10-16. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-9194-010X
