A Mobile Ad Hoc Network (MANET) is a collection of wireless nodes that dynamically organize themselves to form a network without the need for any fixed infrastructure or centralized administration. Since nodes are free to move, the network topology changes frequently and in an unpredictable manner. Support for multicasting is essential in such an environment, as it is considered an efficient way to deliver information from source nodes to many client nodes. A problem with multicast routing algorithms is their efficiency: their forwarding structure determines the overall network resource consumption and can make them significantly less efficient than unicast routing algorithms. In this research, we improve the performance of the popular ODMRP multicast routing protocol by restricting the forwarding domain of join query packets that have been lost. This is achieved by augmenting the join query packets with minimal extra information (one field), which denotes the number of nodes visited since the previous forwarding group. Simulation results show that our mechanisms significantly reduce the control traffic and thus the overall latency and power consumption in the network.
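For illustration, a minimal Python sketch of the kind of scoped re-flooding hinted at above, assuming a hypothetical hops_from_fg field and scope limit (not the paper's actual packet format or parameter values):

```python
from dataclasses import dataclass, field

FG_SCOPE_LIMIT = 2  # assumed maximum hops beyond the previous forwarding group

@dataclass
class JoinQuery:
    seq: int
    source: str
    prev_hop: str
    hops_from_fg: int = 0  # hypothetical extra field counting hops since the last forwarding-group node

@dataclass
class Node:
    node_id: str
    is_forwarding_group: bool = False
    seen: set = field(default_factory=set)
    routes: dict = field(default_factory=dict)

    def handle_join_query(self, pkt: JoinQuery) -> bool:
        """Return True if the query is re-broadcast, False if suppressed."""
        if pkt.seq in self.seen:                  # duplicate suppression as in plain ODMRP
            return False
        self.seen.add(pkt.seq)
        self.routes[pkt.source] = pkt.prev_hop    # learn the reverse route towards the source
        if self.is_forwarding_group:
            pkt.hops_from_fg = 0                  # reset the counter at forwarding-group nodes
        else:
            pkt.hops_from_fg += 1
        if pkt.hops_from_fg > FG_SCOPE_LIMIT:     # outside the restricted domain: drop
            return False
        pkt.prev_hop = self.node_id               # re-broadcast within the restricted scope
        return True
```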
Cloud computing is gaining significant traction, and virtualized data centers are becoming popular as a cost-effective infrastructure in the telecommunication industry. Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) are being widely deployed and utilized by end users, including many private as well as public organizations. Despite its wide-spread acceptance, security is still the biggest threat in cloud computing environments. Users of cloud services are under constant fear of data loss, security breaches, information theft and availability issues. Recently, learning-based methods for security applications have been gaining popularity in the literature with the advances in machine learning (ML) techniques. In this work, we explore the applicability of two well-known machine learning approaches, namely Artificial Neural Networks (ANN) and Support Vector Machines (SVM), to detect intrusions or anomalous behavior in the cloud environment. We have developed ML models using ANN and SVM techniques and have compared their performances. We have used the UNSW-NB-15 dataset to train and test the models. In addition, we have performed feature engineering and parameter tuning to find an optimal set of features that maximizes accuracy while reducing the training time and complexity of the ML models. We observe that, with a proper feature set, the SVM and ANN techniques achieve anomaly detection accuracies of 91% and 92% respectively, which is higher than the accuracy reported in the literature, while requiring fewer features to train the models.
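As a rough illustration of such a pipeline (not the authors' exact feature engineering or hyper-parameters; the file name, label column and k=15 are assumptions):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("UNSW_NB15_training-set.csv")           # assumed file name
X = df.drop(columns=["label"]).select_dtypes("number")   # numeric features only
y = df["label"]                                           # assumed: 0 = normal, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)
selector = SelectKBest(f_classif, k=15).fit(scaler.transform(X_tr), y_tr)  # feature selection step
tr = selector.transform(scaler.transform(X_tr))
te = selector.transform(scaler.transform(X_te))

for name, model in [("SVM", SVC(kernel="rbf", C=10, gamma="scale")),
                    ("ANN", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300))]:
    model.fit(tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, model.predict(te)))
```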
With the advance of fifth generation (5G) networks, network density needs to grow significantly in order to meet the required capacity demands. A massive deployment of small cells may lead to a high cost for providing fiber connectivity to each node. Consequently, many small cells are expected to be connected through wireless links to the umbrella eNodeB, leading to a mesh backhaul topology. This backhaul solution will most probably be composed of high capacity point-to-point links, typically operating in the millimeter wave (mmWave) frequency band due to its massive bandwidth availability. In this paper, we propose a mathematical model that jointly solves the user association and backhaul routing problem in the aforementioned context, aiming at the energy efficiency maximization of the network. Our study considers the energy consumption of both the access and backhaul links, while taking into account the capacity constraints of all the nodes as well as the fulfillment of the service-level agreements (SLAs). Due to the high complexity of the optimal solution, we also propose an energy efficient heuristic algorithm (Joint), which solves the discussed joint problem, while inducing low complexity in the system. We numerically evaluate the algorithm performance by comparing it not only with the optimal solution but also with reference approaches under different traffic load scenarios and backhaul parameters. Our results demonstrate that Joint outperforms the state-of-the-art, while being able to find good solutions, close to optimal, in a short time.
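One possible shape of such an energy-efficiency objective, written in our own notation rather than the paper's model:

```latex
\max_{x,\,r}\quad
\eta_{\mathrm{EE}}(x,r) =
  \frac{\sum_{u\in\mathcal{U}}\sum_{b\in\mathcal{B}} x_{u,b}\,R_{u,b}}
       {P_{\mathrm{access}}(x) + P_{\mathrm{backhaul}}(r)}
\qquad \text{s.t.}\quad
\sum_{b\in\mathcal{B}} x_{u,b} = 1 \;\;\forall u,
\qquad
\sum_{u\in\mathcal{U}} x_{u,b}\,R_{u,b} \le C_b \;\;\forall b
```

Here $x_{u,b}$ associates user $u$ with access node $b$, $R_{u,b}$ is the achievable rate, $C_b$ the node capacity, and the routing decision $r$ must additionally carry each small cell's aggregate traffic towards the umbrella eNodeB within backhaul link capacities and SLA rate guarantees.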
Cloud computing offers a wide range of services through a pool of heterogeneous Physical Machines (PMs) hosted in cloud data centers, where each PM can host several Virtual Machines (VMs). Resource sharing among VMs comes with major benefits, but it can create technical challenges that have a detrimental effect on performance. To ensure a specific service level requested by cloud-based applications, there is a need for an approach that assigns adequate resources to each VM. To this end, we present our novel Multi-Loop Control approach, called MultiScaler, to allocate resources to VMs based on the Service Level Agreement (SLA) requirements and the run-time conditions. MultiScaler is mainly composed of three different levels working closely with each other to achieve an optimal resource allocation. We propose a set of tailor-made controllers to monitor VMs and take actions accordingly to regulate contention among collocated VMs, to reallocate resources if required, and to migrate VMs from one PM to another. The evaluation in a VMware cluster has shown that the MultiScaler approach can meet application performance goals and guarantee the SLA by assigning exactly the resources that the applications require. Compared with sophisticated baselines, MultiScaler reacts significantly better to changes in workloads, even in the presence of noisy neighbors.
Utilizing multiple access technologies such as 5G, 4G, and Wi-Fi within a coherent framework is currently standardized by 3GPP within 5G ATSSS. Indeed, distributing packets over multiple networks can lead to increased robustness, resiliency and capacity. A key part of such a framework is the multi-access proxy, which transparently distributes packets over multiple paths. As the proxy needs to serve thousands of customers, scalability and performance are crucial for operator deployments. In this paper, we leverage recent advancements in data plane programming, implement a multi-access proxy based on the MP-DCCP tunneling approach in P4 and hardware-accelerate it by deploying the pipeline on a smartNIC. This is challenging due to the complex scheduling and congestion control operations involved. We present our pipeline and data structure design for congestion control and packet scheduling state management. Initial measurements in our testbed show that packet latency is in the range of 25 μs, demonstrating the feasibility of our approach.
Future network environments will be heterogeneous and mobile terminals will have the opportunity to dynamically select among many different access technologies. Therefore, it is important to provide service continuity in case of vertical handovers, when terminals change the access technology. Two important wireless access technologies are WLAN (Wireless Local Area Network) and WMAN (Wireless Metropolitan Area Network) networks. In this paper, we address several challenges related to a seamless integration of those technologies. We highlight important aspects for designing a WLAN/WMAN interworking architecture and we address important Quality of Service (QoS) issues for such interworked systems, like the degree of QoS support provided by the technologies, QoS mapping and signalling for vertical handover. By formulating several interworking scenarios, where WLAN users with ongoing voice, video and data sessions hand over to WMAN, we study QoS and performance issues and analyse the feasibility of seamless session continuity through simulations.
With the emergence of millimeter-wave (mmWave) communication technology, the capacity of mobile backhaul networks can be significantly increased. On the other hand, Mobile Edge Computing (MEC) provides an appropriate infrastructure to offload latency-sensitive tasks. However, the amount of resources in MEC servers is typically limited. Therefore, it is important to intelligently manage MEC task offloading by optimizing the backhaul bandwidth and edge server resource allocation in order to decrease the overall latency of the offloaded tasks. This paper investigates the task allocation problem in an MEC environment where mmWave technology is used in the backhaul network. We formulate a Mixed Integer Nonlinear Programming (MINLP) problem with the goal of minimizing the total task serving time. Its objective is to determine an optimized network topology, identify which server is used to process a given offloaded task, find the path of each user task, and determine the bandwidth allocated to each task on the mmWave backhaul links. Because the problem is difficult to solve, we develop a two-step approach. First, a Mixed Integer Linear Program (MILP) determining the network topology and the routing paths is solved optimally. Then, the fractions of bandwidth allocated to each user task are optimized by solving a quasi-convex problem. Numerical results illustrate the obtained topology and routing paths for selected scenarios and show that optimizing the bandwidth allocation significantly improves the total serving time, particularly for bandwidth-intensive tasks.
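As an illustration of the bandwidth-allocation step, the following simplified Python sketch (not the paper's exact model) considers a min-max variant: the worst-case serving time of tasks sharing one mmWave link is quasi-convex in the target time and can be minimized by bisection with a simple feasibility test. The serving-time model t_i = d_i/(w_i*B) + c_i and all inputs are assumptions.

```python
def min_makespan_bandwidth(d, c, B, tol=1e-6):
    """d: task sizes (bits), c: compute times (s), B: shared link rate (bit/s)."""
    lo, hi = max(c), max(c) + sum(d) / B           # bounds on the achievable makespan
    while hi - lo > tol:
        T = (lo + hi) / 2
        # minimum bandwidth fraction each task needs to finish by T
        need = [di / (B * (T - ci)) if T > ci else float("inf")
                for di, ci in zip(d, c)]
        if sum(need) <= 1.0:                       # feasible: all fractions fit on the link
            hi = T
        else:
            lo = T
    T = hi
    return T, [di / (B * (T - ci)) for di, ci in zip(d, c)]

# toy usage: three offloaded tasks sharing one 1 Gbit/s backhaul link
T, w = min_makespan_bandwidth(d=[2e8, 5e8, 1e8], c=[0.05, 0.02, 0.10], B=1e9)
print(round(T, 4), [round(x, 3) for x in w])
```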
Layer 2 Virtual Private Network (L2VPN) technology is widely deployed in both service provider networks and enterprises. However, legacy L2VPN solutions have scalability limitations in the context of Data Center (DC) interconnection and networking, which requires new approaches that address the requirements of service providers for virtual private cloud services. Recently, Ethernet VPN (EVPN) has been proposed to address many of those concerns, and vendors have started to deploy EVPN-based solutions in DC edge routers. However, manual configuration is time-consuming and error-prone and leads to high operational costs. Automating the EVPN deployment from cloud platforms such as OpenStack enhances both the deployment and the flexibility of EVPN Instances (EVIs). This paper proposes a Software Defined Network (SDN) based framework that automates EVPN deployment and management inside SDN-based DCs using OpenStack and OpenDaylight (ODL). We implemented and extended several modules inside the ODL controller to manage and interact with EVIs, as well as an interface to OpenStack that allows the deployment and configuration of EVIs. We conclude with a scalability analysis of our solution.
Ethernet Virtual Private Network (EVPN) is an emerging technology that addresses the networking challenges presented by geo-distributed Data Centers (DCs). One of the major advantages of EVPN over legacy layer 2 VPN solutions is providing All-Active (A-A) mode of operation so that the traffic can truly be multi-homed on Provider Edge (PE) routers. However, A-A mode of operation introduces new challenges. In the case where the Customer Edge (CE) router is multi-homed to one or more PE routers, it is necessary that only one of the PE routers should forward Broadcast, Unknown unicast, and Multicast (BUM) traffic into the DC. The PE router that assumes the primary role for forwarding BUM traffic to the CE device is called the Designated Forwarder (DF). The proposed solution to select the DF in the EVPN standard is based on a distributed algorithm which has a number of drawbacks such as unfairness and intermittent behavior. In this paper, we introduce a Software-Defined Networking (SDN) based architecture for EVPN support, where the SDN controller interacts with EVPN control plane. We demonstrate how our solution mitigates existing problems for DF selection which leads to improved EVPN performance.
Live Virtual Machine (VM) migration has significantly improved the flexibility of modern Data Centers (DC). However, seamless live migration of a VM between geo-distributed DCs faces several challenges due to difficulties in preserving the network configuration after the migration paired with a large network convergence time. Although SDN-based approaches can speed up network convergence time, these techniques have two limitations. First, they typically react to the new topology by installing new flow rules once the migration is finished. Second, because the WAN is typically not under SDN control, they result in sub-optimal routing thus severely degrading the network performance once the VM is attached at the new location.
In this paper, we identify networking challenges for VM migration across geo-distributed DCs. Based on those observations, we design a novel long-haul VM migration scheme that overcomes those limitations. First, instead of reactively restoring connectivity after the migration, our SDN-based approach proactively restores flows across the WAN towards the new location with the help of EVPN and VXLAN overlay technologies. Second, the SDN controller accelerates the network convergence by announcing the migration to other controllers using MP-BGP control plane messages. Finally, the SDN controller resolves the sub-optimal routing problem that arises as a result of the migration by implementing a distributed anycast gateway. We implement our approach as extensions to the OpenDaylight controller. Our evaluation shows that our approach outperforms existing approaches, reducing the downtime by 400 ms and increasing the application performance by up to 12 times.
Optimal placement of Virtual Network Functions (VNFs) in virtualized data centers enhances the overall performance of Service Function Chains (SFCs) and decreases the operational costs for mobile network operators. Maintaining an optimal placement of VNFs under changing load requires a dynamic reconfiguration that includes adding or removing VNF instances, changing the resource allocation of VNFs, and re-routing corresponding service flows. However, such reconfiguration may lead to notable service disruptions and impose additional overhead on the VNF infrastructure, especially when reconfiguration entails state or VNF migration. On the other hand, not changing the existing placement may lead to high operational costs. In this paper, we investigate the trade-off between the reconfiguration of SFCs and the optimality of the resulting placement and service flow (re)routing. We model different reconfiguration costs related to the migration of stateful VNFs and solve a joint optimization problem that aims to minimize both the total cost of the VNF placement and the reconfiguration cost necessary for repairing a suboptimal placement. Numerical results show that a small number of reconfiguration operations can significantly reduce the operational cost of the VNF infrastructure; however, too much reconfiguration may not pay off should heavy costs be involved.
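A hedged sketch of the kind of joint objective described here, in our notation rather than the paper's model:

```latex
\min_{x}\;\;
C_{\mathrm{placement}}(x) \;+\;
\underbrace{\sum_{v\in\mathcal{V}} \kappa_v\,
            \mathbf{1}\!\left[p_v(x)\neq p_v(x^{\mathrm{old}})\right]
          \;+\; \sum_{f\in\mathcal{F}} \mu_f\,
            \mathbf{1}\!\left[r_f(x)\neq r_f(x^{\mathrm{old}})\right]}_{C_{\mathrm{reconf}}(x,\,x^{\mathrm{old}})}
```

where $p_v(x)$ is the host of VNF $v$ under placement $x$, $r_f(x)$ the route of service flow $f$, $\kappa_v$ the (possibly state-dependent) migration cost of VNF $v$, and $\mu_f$ the rerouting cost of flow $f$.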
The optimal placement of virtual network functions (VNFs) improves the overall performance of service function chains (SFCs) and decreases the operational costs for mobile network operators. To cope with changes in demands, VNF instances may be added or removed dynamically, resource allocations may be adjusted, and servers may be consolidated. To maintain an optimal placement of SFCs when conditions change, SFC reconfiguration is required, including the migration of VNFs and the rerouting of service flows. However, such reconfigurations may lead to stress on the VNF infrastructure, which may cause service degradation. On the other hand, not changing the placement may lead to suboptimal operation, and servers and links may become congested or underutilized, leading to high operational costs. In this paper, we investigate the trade-off between the reconfiguration of SFCs and the optimality of their new placement and service-flow routing. We develop a multi-objective genetic algorithm that explores the Pareto front by balancing the optimality of the new placement and the cost to achieve it. Our numerical evaluations show that a small number of reconfigurations can significantly reduce the operational cost of the VNF infrastructure. In contrast, too much reconfiguration may not pay off due to high costs. We believe that our work provides an important tool that helps network providers to plan a good reconfiguration strategy for their service chains.
Optimal placement of Virtual Network Functions (VNFs) in data centers enhances the overall performance of Service Function Chains (SFCs) and decreases the operational costs for mobile network operators. In order to cope with changes in demands, VNF instances may be added or removed dynamically, resource allocations may be adjusted, and servers may be consolidated. To maintain an optimal placement of SFCs under changing conditions, dynamic reconfiguration is required, including the migration of VNFs and the re-routing of service flows. However, such reconfiguration may lead to notable service disruptions and can be exacerbated when reconfiguration entails state or VNF migration, both imposing additional overhead on the VNF infrastructure. On the other hand, not changing the placement may lead to a suboptimal operation, and servers and links may become congested or underutilized, leading to high operational costs. In this paper, we investigate the trade-off between the reconfiguration of SFCs and the optimality of the resulting placement and service flow routing. We model reconfiguration costs related to the migration of stateful VNFs and solve a joint optimization problem that aims to minimize both the total cost of the new placement and the reconfiguration cost necessary to achieve it. We also develop a fast multi-objective genetic algorithm that finds near-optimal solutions for online decisions. Our numerical evaluations show that a small number of reconfiguration operations can significantly reduce the operational cost of the VNF infrastructure. In contrast, too much reconfiguration may not pay off due to high costs. We believe that our work is an important tool that helps network providers plan a good reconfiguration strategy for their service chains.
Mobile nodes are typically equipped with multiple radios and can connect to multiple radio access networks (e.g., WiFi, LTE and 5G). Consequently, it is important to design mechanisms that efficiently manage multiple network interfaces for aggregating capacity, steering traffic flows or switching flows among multiple interfaces. While such multi-access solutions have the potential to increase the overall traffic throughput and communication reliability, the variable latencies on different access links introduce packet delay variation, which has a negative effect on application quality of service and user quality of experience. In this paper, we present a new IP-compatible multipath framework for heterogeneous access networks. The framework uses the Multipath Datagram Congestion Control Protocol (MP-DCCP), a set of extensions to regular DCCP, to enable a transport connection to operate across multiple access networks simultaneously. We present the design of the new protocol framework and show simulation and experimental testbed results that (1) demonstrate the operation of the new framework, and (2) demonstrate the ability of our solution to manage significant packet delay variation caused by the asymmetry of network paths, by applying pluggable packet scheduling or reordering algorithms.
Networked systems have recently aimed to use multiple access networks in parallel to increase resiliency, availability and capacity. However, different paths may have different latency characteristics, which may lead to out-of-order packet delivery. This may severely impact both the end-to-end application performance and the capacity utilisation of multiaccess systems. In this paper, we show that in-network support for packet reordering for multiaccess systems that are based on multiple transport layer tunnels is beneficial for several application types. Our findings are applicable to TCP and QUIC traffic in the 3GPP ATSSS context, where we use the MP-DCCP tunneling framework with a buffer-based packet reordering approach that uses a dynamic timing threshold to cope with variation of path delays over time. We demonstrate achievable performance gains for a wide range of path latency differences and end-to-end round trip times when using different in-network reordering algorithms.
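For illustration, a minimal Python sketch of a buffer-based reordering stage with a dynamic timing threshold (our simplification, not the MP-DCCP implementation; the EWMA-based threshold is an assumption):

```python
import heapq, time

class Reorderer:
    def __init__(self, alpha=0.1, margin=1.5):
        self.expected = 0                 # next tunnel sequence number to release
        self.buf = []                     # min-heap of (seq, arrival_time, packet)
        self.delay_diff = 0.0             # EWMA of observed inter-path delay difference (s)
        self.alpha, self.margin = alpha, margin

    def update_delay_diff(self, sample):  # fed from per-path one-way delay measurements
        self.delay_diff = (1 - self.alpha) * self.delay_diff + self.alpha * sample

    def push(self, seq, pkt, now=None):
        now = now or time.monotonic()
        heapq.heappush(self.buf, (seq, now, pkt))
        return self._release(now)         # returns packets that can leave in order

    def _release(self, now):
        out, threshold = [], self.margin * self.delay_diff
        while self.buf:
            seq, arrived, pkt = self.buf[0]
            in_order = seq <= self.expected
            expired = (now - arrived) > threshold   # waited longer than the dynamic threshold
            if not (in_order or expired):
                break                               # head-of-line packet still worth waiting for
            heapq.heappop(self.buf)
            out.append(pkt)
            self.expected = max(self.expected, seq + 1)
        return out
```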
Prediction of solar power generation is important in order to optimize energy exchanges in future micro-grids that integrate a large amount of photovoltaics. However, accurate prediction is difficult due to the uncertainty of weather phenomena that impact the produced power. In this paper, we evaluate the impact of different clustering methods on the forecast accuracy when predicting hour-ahead solar power using machine learning based prediction approaches trained on weather and generated power features. In particular, we compare clustering based on the clearness index with K-means clustering, where we use both Euclidean distance and dynamic time warping. For evaluating prediction accuracy, we develop and compare different prediction models for each of the clusters using production data from a Swedish smart grid. We demonstrate that proper tuning of the thresholds for the clearness index improves prediction accuracy by 20.19%, but results in worse performance than using K-means with all weather features as input to the clustering.
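An illustrative sketch of the K-means-plus-per-cluster-model approach (not the paper's exact pipeline; the file name, feature names and k=4 are assumptions):

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("pv_weather.csv")                      # assumed file with weather + power columns
weather = ["temperature", "cloud_cover", "humidity"]    # assumed clustering features
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(df[weather])
df["cluster"] = km.labels_

models = {}
for c, grp in df.groupby("cluster"):                    # one prediction model per cluster
    X = grp[weather + ["power_lag_1h"]]                 # lagged production as extra feature (assumed)
    y = grp["power_next_hour"]
    models[c] = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# At prediction time: assign the new sample to its nearest cluster, then use that cluster's model.
sample = df.iloc[[0]]
c = km.predict(sample[weather])[0]
print(models[c].predict(sample[weather + ["power_lag_1h"]]))
```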
Mobile Edge Clouds (MECs) address the critical needs of bandwidth-intensive, latency-sensitive mobile applications by positioning computing and storage resources at the network’s edge in Edge Data Centers (EDCs). However, the diverse, dynamic nature of EDCs’ resource capacities and user mobility poses significant challenges for resource allocation and management. Efficient EDC operation requires accurate forecasting of computational load to ensure optimal scaling, service placement, and migration within the MEC infrastructure. This task is complicated by the temporal and spatial fluctuations of computational load. We develop a novel MEC computational demand forecasting method using Federated Learning (FL). Our approach leverages FL’s distributed processing to enhance data security and prediction accuracy within MEC infrastructure. By incorporating uncertainty bounds, we improve load scheduling robustness. Evaluations on a Tokyo dataset show significant improvements in forecast accuracy compared to traditional methods, with a 42.04% reduction in Mean Absolute Error (MAE) using LightGBM and a 34.93% improvement with CatBoost, while maintaining minimal networking overhead for model transmission.
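For illustration, a generic FedAvg-style aggregation step conveying the federated learning idea used here; this is a simplified parameter-vector view, not the paper's exact procedure for gradient-boosted models:

```python
import numpy as np

def fedavg(local_params, local_sizes):
    """local_params: list of 1-D parameter arrays, local_sizes: training samples per EDC."""
    weights = np.asarray(local_sizes, dtype=float)
    weights /= weights.sum()                       # weight each EDC by its local data volume
    return sum(w * p for w, p in zip(weights, local_params))

# toy usage: three EDCs with different amounts of local load data;
# only parameters travel to the aggregator, never the raw data.
params = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.1, 1.2])]
sizes = [5000, 20000, 10000]
print(fedavg(params, sizes))                       # global model after one aggregation round
```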
For efficient energy exchanges in smart energy grids under the presence of renewables, predictions of energy production and consumption are required. For robust energy scheduling, prediction of uncertainty bounds of Photovoltaic (PV) power production and consumption is essential. In this paper, we apply several Machine Learning (ML) models that can predict the power generation of PV and consumption of households in a smart energy grid, while also assessing the uncertainty of their predictions by providing quantile values as uncertainty bounds. We evaluate our algorithms on a dataset from Swedish households having PV installations and battery storage. Our findings reveal that a Mean Absolute Error (MAE) of 16.12 W for power production and 16.34 W for consumption for a residential installation can be achieved with uncertainty bounds having quantile loss values below 5 W. Furthermore, we show that the accuracy of the ML models can be affected by the characteristics of the household being studied. Different households may have different data distributions, which can cause prediction models to perform poorly when applied to untrained households. However, our study found that models built directly for individual homes, even when trained with smaller datasets, offer the best outcomes. This suggests that the development of personalized ML models may be a promising avenue for improving the accuracy of predictions in the future.
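A minimal sketch of how quantile values can serve as uncertainty bounds (our illustration using gradient boosting; the column names and the 10%/90% quantiles are assumptions):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("household_pv.csv")                    # assumed dataset layout
X = df[["hour", "temperature", "irradiance", "pv_lag_1h"]]
y = df["pv_power"]

models = {
    "lower": GradientBoostingRegressor(loss="quantile", alpha=0.10),  # lower bound
    "point": GradientBoostingRegressor(loss="squared_error"),         # point forecast
    "upper": GradientBoostingRegressor(loss="quantile", alpha=0.90),  # upper bound
}
for m in models.values():
    m.fit(X, y)

pred = {name: m.predict(X.tail(24)) for name, m in models.items()}    # e.g., next 24 hours
# pred["lower"] .. pred["upper"] forms the uncertainty band around pred["point"].
```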
We investigate the problem of optimally placing virtual network functions in 5G-based virtualized infrastructures according to a green paradigm that pursues energy-efficiency. This optimization problem can be modelled as an articulated 0-1 Linear Program based on a flow model. Since the problem can prove hard to solve with state-of-the-art optimization software, even for instances of moderate size, we propose a new fast matheuristic for its solution. Preliminary computational tests on a set of realistic instances return encouraging results, showing that our algorithm can find better solutions in considerably less time than a state-of-the-art solver.
802.11-based wireless mesh networks are seen as a means for providing last mile connections to next generation networks. Due to the low deployment cost and the mature technology used, they are scalable, easy to implement and robust. With an increasing coverage of wireless networks, VoIP becomes a cheaper alternative for traditional and cellular telephony. In this paper, we carry out a feasibility study of VoIP in a dual radio mesh environment. Heading towards 802.11s, we present the design of a mesh testbed and methodology for performing the measurements. Additionally, we address the problem that small voice packets introduce a high overhead leading to a low voice capacity of 802.11 based mesh networks. In order to alleviate this problem and increase the voice capacity, a novel packet aggregation mechanism is presented and evaluated using the ns-2 simulator.
The dynamicity of real-world systems poses a significant challenge to deployed predictive machine learning (ML) models. Changes in the system on which the ML model has been trained may lead to performance degradation during the system’s life cycle. Recent advances that study non-stationary environments have mainly focused on identifying and addressing such changes, caused by a phenomenon called concept drift. Different terms have been used in the literature to refer to the same type of concept drift, and the same term has been used for various types. This lack of unified terminology creates confusion when distinguishing between different concept drift variants. In this paper, we start by grouping concept drift types by their mathematical definitions and survey the different terms used in the literature to build a consolidated taxonomy of the field. We also review and classify performance-based concept drift detection methods proposed in the last decade. These methods utilize the predictive model’s performance degradation to signal substantial changes in the systems. The classification is outlined in a hierarchical diagram to provide an orderly navigation between the methods. We present a comprehensive analysis of the main attributes and strategies for tracking and evaluating the model’s performance in the predictive system. The paper concludes by discussing open research challenges and possible research directions.
Load forecasting is a crucial topic in energy management systems (EMS) due to its vital role in optimizing energy scheduling and enabling more flexible and intelligent power grid systems. As a result, these systems allow power utility companies to respond promptly to demands in the electricity market. Deep learning (DL) models have been commonly employed in load forecasting problems, supported by adaptation mechanisms to cope with the changing consumption patterns of customers, known as concept drift. Change detection methods used to identify drifts typically require a drift magnitude threshold to be defined. While the drift magnitude in load forecasting problems can vary significantly over time, the existing literature often assumes a fixed drift magnitude threshold, whereas it should be dynamically adjusted rather than fixed during system evolution. To address this gap, in this paper, we propose a dynamic drift-adaptive Long Short-Term Memory (DA-LSTM) framework that can improve the performance of load forecasting models without requiring a drift threshold setting. We integrate several strategies into the framework based on active and passive adaptation approaches. To evaluate DA-LSTM in real-life settings, we thoroughly analyze the proposed framework and deploy it in a real-world problem through a cloud-based environment. Efficiency is evaluated in terms of the prediction performance of each approach and its computational cost. The experiments show performance improvements on multiple evaluation metrics achieved by our framework compared to baseline methods from the literature. Finally, we present a trade-off analysis between prediction performance and computational costs.
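As a simplified illustration of a drift trigger that adapts to the error distribution instead of relying on a fixed magnitude threshold (our sketch, not the DA-LSTM implementation; the forecasting model is abstracted behind callables):

```python
import numpy as np
from collections import deque

def adaptive_monitor(stream, forecast, retrain, window=48, k=3.0):
    """stream yields (X_batch, y_batch); forecast/retrain wrap the deployed model."""
    recent = deque(maxlen=window)                 # rolling window of recent absolute errors
    for X_batch, y_batch in stream:
        err = np.mean(np.abs(y_batch - forecast(X_batch)))
        if len(recent) == recent.maxlen:
            mu, sigma = np.mean(recent), np.std(recent)
            if err > mu + k * sigma:              # threshold tracks the error history, not a fixed value
                retrain()                         # active adaptation on a detected drift
                recent.clear()
        recent.append(err)
```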
A new generation of Wireless Local Area Networks (WLANs) will make its appearance in the market in the forthcoming years based on the amendments to the IEEE 802.11 standards that have recently been approved or are under development. Examples of the most expected ones are IEEE 802.11aa (Robust Audio Video Transport Streaming), IEEE 802.11ac (Very-high throughput at < 6 GHz), IEEE 802.11af (TV White Spaces) and IEEE 802.11ah (Machine-to-Machine communications) specifications. The aim of this survey is to provide a comprehensive overview of these novel technical features and the related open technical challenges that will drive the future WLAN evolution. In contrast to other IEEE 802.11 surveys, this is a use case oriented study. Specifically, we first describe the three key scenarios in which next-generation WLANs will have to operate. We then review the most relevant amendments for each of these use cases focusing on the additional functionalities and the new technologies they include, such as multi-user MIMO techniques, groupcast communications, dynamic channel bonding, spectrum databases and channel sensing, enhanced power saving mechanisms and efficient small data transmissions. We also discuss the related work to highlight the key issues that must still be addressed. Finally, we review emerging trends that can influence the design of future WLANs, with special focus on software-defined MACs and the internetworking with cellular systems.
Managing and scaling virtual network function (VNF) service chains require the collection and analysis of network statistics and states in real time. Existing network function virtualization (NFV) monitoring frameworks either do not have the capabilities to express the range of telemetry items needed to perform management or do not scale to large traffic volumes and rates. We present IntOpt, a scalable and expressive telemetry system designed for flexible VNF service chain network monitoring using active probing. IntOpt allows specifying monitoring requirements for individual service chains, which are mapped to telemetry item collection jobs that fetch the required telemetry items from P4 (Programming Protocol-independent Packet Processors) programmable data plane elements. In our approach, the SDN controller creates the minimal number of monitoring flows to monitor the deployed service chains as per their telemetry demands in the network. We propose a simulated annealing based random greedy meta-heuristic (SARG) to minimize the overhead due to active probing and collection of telemetry items. Using P4-FPGA, we benchmark the overhead for telemetry collection and compare our simulated annealing based approach with a naïve approach while optimally deploying telemetry collection probes. Our numerical evaluation shows that the proposed approach can reduce the monitoring overhead by 39% and the total delays by 57%. Such optimization may also enable existing expressive monitoring frameworks to scale to larger real-time networks.
The emergence of Network Functions Virtualization (NFV) is being heralded as an enabler of recent technologies such as 5G/6G, IoT and heterogeneous networks. Existing NFV monitoring frameworks either do not have the capabilities to express the range of telemetry items needed to perform management or do not scale to large traffic volumes and rates. We present IntOpt, a scalable and expressive telemetry system designed for flexible NFV monitoring using active probing and P4. IntOpt allows us to specify monitoring requirements for individual service chains, which are mapped to telemetry item collection jobs that fetch the required telemetry items from P4 programmable data-plane elements. We propose a mixed integer linear program (MILP) as well as a simulated annealing based random greedy (SARG) meta-heuristic approach to minimize the overhead due to active probing and collection of telemetry items. Using P4-FPGA, we benchmark the overhead for telemetry collection. Our numerical evaluation shows that the proposed approach can reduce monitoring overheads by 39% and monitoring delays by 57%. Such optimization may also enable existing expressive monitoring frameworks to scale to larger real-time networks.
SDN aims to facilitate the management of increasingly complex, dynamic network environments and to optimize the use of the resources available therein with minimal operator intervention. To this end, SDN controllers maintain a global view of the network topology and its state. However, the extraction of information about network flows and other network metrics remains a non-trivial challenge. Network applications exhibit a wide range of properties, posing diverse, often conflicting, demands on the network. As these requirements are typically not known, controllers must rely on error-prone heuristics to extract them. In this work, we develop a framework which allows applications deployed in an SDN environment to explicitly express their requirements to the network. Conversely, it allows network controllers to deploy policies on end-hosts and to supply applications with information about network paths, salient servers and other relevant metrics. The proposed approach opens the door for fine-grained, application-aware resource optimization strategies in SDNs.
For delivering multimedia services over (wireless) networks, it is important that the mechanisms which negotiate and optimize content delivery under resource constraints take into account user perceived quality in order to improve user satisfaction. Within the scope of the IP Multimedia Subsystem (IMS) architecture, a novel application server may be added which handles multi-user multi-flow Quality of Experience (QoE) negotiation and adaptation for heterogeneous user sessions. Based on a mathematical model, which takes into account the characteristics of audio, video and data sessions for QoE optimization, we develop several optimization algorithms to be used by the application server to maximize overall user-defined QoE parameters for all ongoing multimedia sessions, subject to network resource constraints. Our results show that a greedy based approach provides a reasonable compromise in terms of run-time and sub-optimality for the overall QoE based resource allocations.
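For illustration, a minimal greedy allocation sketch in the spirit of such an approach (our simplification, not the paper's algorithm): bandwidth units go to the session with the highest marginal QoE gain until capacity is exhausted.

```python
import heapq, math

def greedy_allocate(qoe, capacity, step=1):
    """qoe: dict session -> utility function of allocated units; capacity in units."""
    alloc = {s: 0 for s in qoe}
    heap = [(-(u(step) - u(0)) / step, s) for s, u in qoe.items()]   # negated marginal gains
    heapq.heapify(heap)
    while capacity >= step and heap:
        gain, s = heapq.heappop(heap)          # session with the largest marginal gain
        if -gain <= 0:
            break                              # no session benefits from more bandwidth
        alloc[s] += step
        capacity -= step
        u, a = qoe[s], alloc[s]
        heapq.heappush(heap, (-(u(a + step) - u(a)) / step, s))      # recompute its next gain
    return alloc

# toy usage: concave video utility, step-like voice utility, linear data utility (assumed shapes)
sessions = {"video": lambda x: 4 * math.log1p(x),
            "voice": lambda x: 3.0 if x >= 1 else 0.0,
            "data":  lambda x: 0.5 * x}
print(greedy_allocate(sessions, capacity=20))
```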
Utilizing multiple access networks such as 5G, 4G, and Wi-Fi simultaneously can lead to increased robustness, resiliency, and capacity for mobile users. However, transparently implementing packet distribution over multiple paths within the core of the network faces multiple challenges, including scalability to a large number of customers, low latency, and high-capacity packet processing requirements. In this paper, we offload congestion-aware multipath packet scheduling to a smartNIC. However, such hardware acceleration faces multiple challenges due to programming language and platform limitations. We implement multipath schedulers of different complexity in P4 in order to cope with dynamically changing path capacities. Using testbed measurements, we show that our CMon scheduler, which monitors path congestion in the data plane and dynamically adjusts scheduling weights for the different paths based on path state information, can process more than 3.5 Mpps with a latency of 25 μs.
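A simplified control-plane view of the congestion-aware weight adjustment idea (our Python sketch, not the P4 data-plane code; the congestion signal and smoothing factor are assumptions):

```python
def update_weights(paths, alpha=0.8):
    """paths: dict name -> {'rtt': seconds, 'loss': fraction, 'weight': current share}."""
    scores = {}
    for name, p in paths.items():
        congestion = p["rtt"] * (1.0 + 10.0 * p["loss"])   # crude per-path congestion signal
        scores[name] = 1.0 / congestion                     # less congested paths score higher
    total = sum(scores.values())
    for name, p in paths.items():
        target = scores[name] / total
        p["weight"] = alpha * p["weight"] + (1 - alpha) * target   # smooth the weight change
    return paths

# toy usage: a weighted scheduler would then split packets according to the updated weights
paths = {"5G":   {"rtt": 0.020, "loss": 0.001, "weight": 0.5},
         "WiFi": {"rtt": 0.060, "loss": 0.020, "weight": 0.5}}
print(update_weights(paths))
```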
In multi-hop wireless mesh networks (WMNs), Voice over IP (VoIP) suffers from the large overhead created by the IP/UDP/RTP/MAC headers and from time lost due to collisions. As a result, the VoIP capacity of WMNs is small. As shown in [1], a 3-hop WMN at a data rate of 2 Mbit/s can only support 3 concurrent VoIP flows. [1], [2] and [3] propose the use of packet aggregation in order to reduce the overhead and thus increase the capacity. The idea behind packet aggregation is to combine multiple small packets into a single larger one, reducing overhead and collision probability in a multi-hop environment. This paper presents insights into an implementation of an aggregation scheme in the Linux kernel and its experimental evaluation.
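For illustration, a minimal sketch of the aggregation principle (not the kernel implementation; the size and hold-time limits are assumed values, and real aggregates carry per-packet sub-headers):

```python
import time

MAX_AGG_SIZE = 1400      # bytes fitting into one MAC frame (assumed MTU budget)
MAX_HOLD_TIME = 0.010    # seconds an aggregated voice packet may be delayed (assumed)

class Aggregator:
    def __init__(self, send):
        self.send = send                     # callable transmitting one aggregate frame
        self.queue, self.size, self.first_ts = [], 0, None

    def enqueue(self, pkt, now=None):
        now = now or time.monotonic()
        if self.size + len(pkt) > MAX_AGG_SIZE:
            self.flush()                     # would not fit: send what we have first
        if not self.queue:
            self.first_ts = now
        self.queue.append(pkt)
        self.size += len(pkt)
        if now - self.first_ts >= MAX_HOLD_TIME:
            self.flush()                     # delay budget of the oldest packet is used up

    def flush(self):
        if self.queue:
            self.send(b"".join(self.queue))  # one large frame instead of many small ones
            self.queue, self.size, self.first_ts = [], 0, None

agg = Aggregator(send=lambda frame: print("sending", len(frame), "bytes"))
for _ in range(30):
    agg.enqueue(b"\x00" * 60)                # 60-byte voice payloads
agg.flush()
```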
This document is the last deliverable of WPR.11 and presents an overview of the final activities carried out within the NEWCOM++ Workpackage WPR.11 during the last 18 months. We provide a description of the most consolidated Joint Research Activities (JRAs) and the main results obtained so far. We also present some considerations on the future activities which are expected to continue after the end of NEWCOM++.
Traditional networks are being transformed to enable full integration of heterogeneous hardware and software functions, that are configured at runtime, with minimal time to market, and are provided to their end users on an “as a service” principle. Therefore, a countless number of possibilities for further innovation and exploitation opens up. Network Function Virtualization (NFV) and Software-Defined Networking (SDN) are two key enablers for such a new flexible, scalable, and service-oriented network architecture. This chapter provides an overview of QoS-aware strategies that can be used over the levels of the network abstraction, aiming to fully exploit the new network opportunities. Specifically, we present three use cases of integrating SDN and NFV with QoS-aware service composition, ranging from the energy efficient placement of virtual network functions inside modern data centers, to the deployment of data stream processing applications using SDN to control the network paths, to exploiting SDN for context-aware service compositions.
Wireless mesh networks are an emerging paradigm for future broadband wireless access networks, with many application areas ranging from content distribution over community networking to providing backhaul networking for sensor devices. In wireless mesh networks, clients connect to wireless routers which are equipped with one or more wireless cards and relay the packets over the wireless links towards internet gateways or the destination. Peer-to-peer applications are an important class of applications which nowadays contributes the majority of internet traffic. Therefore, it is important to provide high capacity in the mesh network to support them. However, the capacity of wireless mesh networks depends on many factors such as network topology and size, traffic volume and pattern, interfaces per node and channel assignment scheme used, modulation schemes, routing approaches, etc. In this paper, we develop an analytical framework which allows estimating the achievable capacity of a wireless mesh network when peer-to-peer applications download many flows simultaneously. The model is based on the collision domain concept and incorporates various channel assignment, replication and peer selection strategies. We investigate the achievable capacity for various scenarios and study the impact of different parameters such as the number of channels or radios used.
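One common way the collision-domain bound underlying such a framework is written, in our own notation rather than the paper's model:

```latex
\lambda_{\max} \;\le\; \min_{D\in\mathcal{D}}
  \frac{B_{\mathrm{eff}}}
       {\sum_{\ell\in D}\sum_{f\in\mathcal{F}} t_{f,\ell}}
```

where every flow carries rate $\lambda$, $t_{f,\ell}=1$ if flow $f$ traverses link $\ell$ (0 otherwise), $B_{\mathrm{eff}}$ is the effective channel rate, and $D$ ranges over the collision domains $\mathcal{D}$: the busiest collision domain limits the achievable per-flow rate.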
In non-static multi-radio/multi-channel wireless mesh network architectures such as Net-X, mesh nodes need to switch channels in order to communicate with different neighbors. Existing channel schedulers do not consider the requirements of real-time traffic such as Voice over IP; thus, the resulting quality is low. We propose a novel channel scheduler for the Net-X platform that takes packet priorities into account. We evaluate the algorithm on the KAUMesh testbed. Our algorithm outperforms the standard round-robin scheduler both in terms of average delay and jitter.
Internet connected wireless multi-hop networks are an interesting alternative for providing broadband wireless access. In order for the network to be transparent, the same services need to be available as in standard infrastructure wireless deployments. However, there is a significant challenge in providing services such as authentication, name resolution and VoIP over multi-hop mesh networks, as dedicated servers implementing those services might not be available. Therefore, deploying overlay networks in the mesh to decentralize those services and move towards a Peer-to-Peer paradigm is an interesting approach. However, the multi-hop nature of wireless mesh networks and the restrictions in resource availability might cause problems when deploying overlay networks on top of such environments. In this paper, we investigate the overhead and trade-offs when deploying a structured overlay solution such as the Bamboo DHT over wireless multi-hop mesh networks. We provide various simulation results characterizing the overhead of management and control traffic and give recommendations for performance improvement.
Recently, Voice over IP (VoIP) has become an important service for the future internet. However, ubiquitous wireless VoIP services will require greater coverage, as promised by the advent of, e.g., 802.11 WLAN based wireless mesh networks. Unfortunately, the transmission of small (voice) packets imposes high overhead, which leads to low capacity for VoIP over 802.11 based multi-hop mesh networks. In this work, we present a novel packet aggregation mechanism that significantly enhances the capacity of VoIP in wireless mesh networks while still maintaining satisfactory voice quality. Extensive experiments using the ns-2 network simulator confirm that our packet aggregation algorithm can lead to a significant increase in the number of supported concurrent VoIP flows over a variety of hop counts while reducing MAC layer contention.
Opportunistic networks represent a new frontier for networking research, as due to node mobility the network might become disconnected. Such intermittent connectivity imposes challenges on protocol design, especially when information access might require the availability of updated information about resources shared by mobile nodes. An opportunistic network can be seen as a peer-to-peer network where resources should be located in a distributed way. Numerous solutions for P2P resource management have been proposed in recent years. Among the different approaches being considered, Distributed Hash Table (DHT) based schemes offer the advantages of a distributed approach which can be tuned to network scalability. In this paper, we consider two well-known P2P DHT-based solutions for wireless networks, denoted as Bamboo and Georoy, and compare their performance in a multi-hop wireless scenario. We evaluate scalability and key lookup behavior for different network sizes. The results allow us to gain insights into protocol behavior, which allows selecting the appropriate scheme for a given network configuration.
Opportunistic networking represents a promising paradigm for the support of communications, specifically in infrastructure-less scenarios such as remote area communications. In principle, in opportunistic environments we would like to make available all the applications designed for traditional wired and wireless networks, like file sharing and content distribution. In this paper, we present a delay tolerant scenario for file sharing applications in rural areas where an opportunistic approach is exploited. In order to support communications, we compare two peer-to-peer (P2P) schemes initially conceived for wireless networks and prove their applicability and usefulness in a DTN scenario where replication of resources can be used to improve the lookup performance and the network can be occasionally connected by means of a data mule. Simulation results show the suitability of the schemes and allow us to derive interesting design guidelines on the convenience and applicability of such approaches.
Mobile ad hoc networking has been considered one of the most important technologies to support future ubiquitous and pervasive computing scenarios, and internet connected MANETs will be an integral part of future wireless networks. For providing multimedia services and Voice over IP in such an environment, support for the Session Initiation Protocol (SIP) is essential. A MANET is a decentralized collection of autonomous nodes, but a SIP infrastructure requires centralized proxies and registrar servers. In this paper, we study the implications of using a standard SIP architecture in internet connected MANETs. We analyze performance limitations of SIP service scalability when centralized proxies/registrars located in the access network are used by MANET nodes. We also present and compare an alternative approach to provide SIP services in internet connected MANETs in order to minimize the impact of such performance limitations.
Recently, Mobile Ad-hoc Networks (MANETs) have gained a lot of attention because they are flexible, self-configurable and fast to deploy. Systems beyond 4G are likely to consist of a combination of heterogeneous wireless technologies and naturally might comprise MANETs as one component. In order to provide multimedia services such as Voice over IP in such an environment, support for the Session Initiation Protocol (SIP) is essential. A MANET is a decentralized collection of autonomous nodes, but a SIP infrastructure requires centralized proxies and registrar servers. In this paper, we first study the implications of using a standard SIP architecture in internet connected MANETs. We analyze limitations of SIP service scalability when centralized proxies/registrars located in the access network are used by MANET nodes. Finally, we present alternative approaches to provide SIP services in such an environment and to avoid these limitations.
Traditional Voice over IP (VoIP) systems are based on a client/server architecture, which is not applicable to Mobile Ad Hoc Networks (MANETs), as these are a decentralized collection of autonomous nodes. However, internet connectivity for MANETs is becoming important, as internet connected MANETs can serve as hot spot extensions in 4G scenarios. Here, MANET nodes can reach any wired node; thus, registering with SIP proxies in the fixed network becomes a viable solution. In order to study the implications of using VoIP systems in internet connected MANETs, we present in this paper simulation results of SIP service scalability when centralized proxies/registrars located in the access network are used by MANET nodes. Alternative approaches to provide SIP services in such an environment and to improve performance are also discussed.
In this paper, we study the interactions between peer selection, routing and channel assignment while deploying Peer-to-Peer (P2P) systems over multi-channel multi-radio Wireless Mesh Networks (WMNs). In particular, we propose Bestpeer, a novel peer selection algorithm for WMNs which incorporates a cross-layer path load metric and integrates multi-path load balancing capability throughout the P2P download process in order to achieve faster resource dissemination. By comparing Bestpeer with BitTorrent through extensive simulations, we show that we can reduce the time to disseminate a resource by up to 40%. The key mechanism used by Bestpeer to increase performance in WMNs is the effective exploitation of cross-layer information made available from the routing and channel assignment during the peer selection process in order to balance the load along high capacity paths.
In this paper, we address the impact of adjacent channel interference (ACI) on dynamic spectrum allocation in multi-radio systems. In particular, we present the benefits of dynamic spectrum allocation that adapts channel bandwidth and distance in order to mitigate ACI in multi-radio mesh networks. Based on our measurement campaign in an indoor mesh testbed, which we extended in order to use adaptive channel width, we evaluate the performance in terms of network throughput. We show that by using channel bandwidth adaptation and antenna separation, the impact of ACI can be significantly reduced. The performed experiments give important insights into ACI in multi-radio cognitive systems, which help to develop, e.g., better channel assignment algorithms or capacity estimation methods.
Wireless multi-hop networks such as mobile ad-hoc networks (MANETs) or wireless mesh networks (WMNs) have attracted significant research efforts during the last years, as they have huge potential in several areas such as military communications, fast infrastructure replacement during emergency operations, extension of hotspots, or as an alternative communication system. Due to various reasons, such as the characteristics of wireless links, multi-hop forwarding operation, and mobility of nodes, the performance of traditional peer-to-peer applications is rather low in such networks. In this book chapter, we provide a comprehensive and in-depth survey of recent research on various approaches to provide peer-to-peer services in wireless multi-hop networks. The causes and problems for the low performance of traditional approaches are discussed. Various representative alternative approaches to couple interactions between the peer-to-peer overlay and the network layer are examined and compared. Some open questions are discussed to stimulate further research in this area.
In TSN networks, proper end-station configuration is essential to ensure the timely and reliable delivery of time-sensitive data, meeting strict end-to-end Quality of Service (QoS) criteria. However, the complexity of the configuration process requires a significant manual effort, which makes real-time application development on standard Operating Systems such as Linux a challenge. In this paper, we propose a simple yet functional approach to automate the configuration of Linux-based TSN end-stations within TSN networks by adding a TSN layer on top of the networking system services and defining a configuration protocol tailored for the centralized network/distributed user configuration mode. Evaluation results demonstrate minimal overhead during stream addition, achieving hundreds-of-milliseconds-level configuration times and enabling a hassle-free plug-and-play mode of operation.