The basic idea of QoS is to provide mechanisms that can offer different service levels, which are expressed through well-defined parameters that are specified at run-time on the basis of need. Bit rate, throughput, delay, jitter, and packet loss rate are all examples of common QoS parameters suggested for packet networks. These parameters all aim to express (and guarantee) a certain service level with respect to reliability and/or performance. In this paper, we investigate how security can be treated as yet another QoS parameter through the use of tunable security services. The main idea of this work is to let users specify a trade-off between security and performance through the choice of available security configuration(s). The performance metric used is latency. The concept is illustrated using the IEEE 802.11i wireless local area networking standard.
Security should be thought of as a tunable system attribute that allows users to request a specific protection level as a service from the system. This approach will be suitable in future networking environments with heterogeneous devices that have varying computing resources. The approach is also appropriate for multimedia applications that require tuning of the protection level to maintain performance at levels that are acceptable to users. In this paper, we investigate data protection services for network transfers that are designed to offer variable protection levels and propose a taxonomy for such services. The taxonomy provides a unified terminology for dynamic data protection services and a framework in which they can systematically be inspected, evaluated, and compared. The taxonomy is also intended to provide a basis for the development and identification of current and future user and/or application needs. It comprises four dimensions: type of protection service, protection level, protection level specification, and adaptiveness. On the basis of the taxonomy, a survey and categorization of existing dynamic data protection services for network transfers are made.
Security should be thought of as a tunable system attribute that allows users to request a specific protection level as a service from the system. This approach will be suitable in future networking environments with heterogeneous devices that have varying computing resources. The approach is also appropriate for multimedia applications that require tuning the protection level to maintain performance at levels that are acceptable to users. In this paper, we survey a number of existing data protection services for network transfers that are designed to offer variable protection levels. The services are classified according to a taxonomy proposed in the paper.
Security is an increasingly important issue for networked services. However, since networked environments may exhibit varying networking behavior and contain heterogeneous devices with varying resources, tunable security services are needed. A tunable security service is a service that provides different security configurations that are selected, and possibly altered, at run-time. In this paper, we propose a conceptual model for analysis and design of tunable security services. The proposed model can be used to describe and compare existing tunable security services and to identify missing requirements. Five previously proposed services are analyzed in detail in the paper. The analysis illustrates the power of the model and highlights some key aspects in the design of tunable security services. Based on the conceptual model, we also present a high-level design methodology that can be used to identify the most appropriate security configurations for a particular scenario.
Data protection is an increasingly important issue in today's communication networks. Traditional solutions for protecting data when transferred over a network are almost exclusively based on cryptography. As a complement, we propose the use of multiple physically separate paths to accomplish data protection. A general concept for providing physical separation of data streams together with a threat model is presented. The main target is delay-sensitive applications such as telephony signaling, live TV, and radio broadcasts that require only lightweight security. The threat considered is malicious interception of network transfers through so-called eavesdropping attacks. Application scenarios and techniques to provide physically separate paths are discussed.
Network security is an increasingly important issue. Traditional solutions for protecting data when transferred over the network are almost exclusively based on cryptography. As a complement, we propose the use of SCTP and its support for physically separate paths to accomplish protection against eavesdropping attacks near the end points.
To achieve an appropriate tradeoff between security and performance for wireless applications, a tunable and differential treatment of security is required.
In this paper, we present a tunable encryption service designed as a middleware that is based on a selective encryption paradigm. The core component of the middleware provides block-based selective encryption. Although the selection of which data to encrypt is made by the sending application and is typically content-dependent, the representation used by the core component is application- and content-independent. This frees the selective decryption module at the receiver from the need for application- or content-specific knowledge. The sending application specifies the data to encrypt either directly or through a set of high-level application interfaces. A prototype implementation of the middleware is described along with an initial performance evaluation. The experimental results demonstrate that the generic middleware service offers a high degree of security adaptiveness at a low cost.
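The block-based scheme above can be illustrated with a minimal sketch (our own illustration, not the middleware's actual API; `encrypt` is a placeholder for any real block cipher): each block carries a flag indicating whether it was encrypted, which is exactly what lets the receiver decrypt without application- or content-specific knowledge.

```python
def selective_encrypt(blocks, selected, encrypt):
    """Encrypt only the blocks whose indices are in `selected`.
    Each output pair carries an encrypted-flag, so the receiver needs
    no application- or content-specific knowledge to decrypt."""
    return [(i in selected, encrypt(b) if i in selected else b)
            for i, b in enumerate(blocks)]

def selective_decrypt(pairs, decrypt):
    """Invert selective_encrypt using only the per-block flags."""
    return [decrypt(b) if flag else b for flag, b in pairs]

# Toy demonstration with a reversible placeholder "cipher"
enc = lambda b: b[::-1]
pairs = selective_encrypt([b"hdr", b"secret", b"tail"], {1}, enc)
print(selective_decrypt(pairs, enc))  # original blocks restored
```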
In this paper, we investigate the tunable privacy features provided by Internet Explorer version 6 (IE6), Mix Net and Crowds, by using a conceptual model for tunable security services. A tunable security service is defined as a service that has been explicitly designed to offer various security configurations that can be selected at run-time. Normally, Mix Net and Crowds are considered to be static anonymity services, since they were not explicitly designed to provide tunability. However, as discussed in this paper, they both contain dynamic elements that can be used to utilize the trade-off between anonymity and performance. IE6, on the other hand, was indeed designed to allow end users to tune the level of privacy when browsing the Internet.
In this paper, we investigate the tunable features provided by Mix-Nets and Crowds using a conceptual model for tunable security services. A tunable security service is defined as a service that has been explicitly designed to offer various security levels that can be selected at run-time. Normally, Mix-Nets and Crowds are considered to be static anonymity services, since they were not explicitly designed to provide tunability. However, as discussed in this paper, they both contain dynamic elements that can be used to achieve a trade-off between anonymity and performance.
In this paper, we start to investigate the security implications of selective encryption. We do this by using the measure guesswork, which gives us the expected number of guesses that an attacker must perform in an optimal brute-force attack to reveal an encrypted message. The characteristics of the proposed measure are investigated for zero-order languages. We also introduce the concept of reduction chains to describe how the message (or rather search) space changes for an attacker with different levels of encryption.
In this paper, we start to investigate the security implications of selective encryption. We do this by using the measure guesswork, which gives us the expected number of guesses that an attacker performs in an optimal brute-force attack to reveal an encrypted message. The characteristics of the proposed measure are only investigated for zero-order languages, and we give some basic initial results. This work is in progress, and later papers will examine higher-order languages.
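Guesswork has a simple closed form when the message distribution is known: order the messages by decreasing probability and compute the expected rank of the correct guess. A minimal illustration (our own sketch, not code from the papers):

```python
def guesswork(probs):
    """Expected number of guesses in an optimal brute-force attack:
    the attacker tries messages in order of decreasing probability,
    so guesswork is the expected rank of the correct message."""
    ordered = sorted(probs, reverse=True)
    return sum(rank * p for rank, p in enumerate(ordered, start=1))

# Uniform distribution over 4 messages: (1+2+3+4)/4 = 2.5 guesses
print(guesswork([0.25, 0.25, 0.25, 0.25]))  # 2.5
# A skewed distribution is easier to attack (approximately 1.6 guesses)
print(guesswork([0.7, 0.1, 0.1, 0.1]))
```

Encrypting more of the message flattens the attacker's effective distribution, driving guesswork toward its maximum of (N+1)/2 for N equiprobable messages.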
"Roam like Home" is the initiative of the European Commission to end the levy of extra charges when roaming within the European region. As a result, people can use data services more freely across Europe. However, the implications of roaming solutions on network performance have not yet been carefully examined. This paper provides an in-depth characterization of the implications of international data roaming within Europe. We build a unique roaming measurement platform using 16 different mobile networks deployed in 6 countries across Europe. Using this platform, we measure different aspects of international roaming in 4G networks in Europe, including mobile network configuration, performance characteristics, and quality of experience. We find that operators adopt a common approach to implement roaming called Home-routed roaming. This results in additional latency penalties of 60 ms or more, depending on geographical distance. This leads to worse browsing performance, increasing user Quality of Experience (QoE) metrics (Page Load Time and Speed Index) by 15-20%. We further analyze the impact of latency on QoE metrics in isolation and find that the penalty imposed by Home Routing leads to a degradation of QoE metrics of up to 150% in the case of intercontinental roaming. We make our dataset public to allow reproducing the results.
The Domain Name System (DNS) is a critical Internet infrastructure that translates human-readable domain names to IP addresses. It was originally designed over 35 years ago and multiple enhancements have since been made, in particular to make DNS lookups more secure and privacy preserving. Query name minimization (qmin) was initially introduced in 2016 to limit the exposure of queries sent across DNS and thereby enhance privacy. In this paper, we take a look at the adoption of qmin, building upon and extending measurements made by De Vries et al. in 2018. We analyze qmin adoption on the Internet using active measurements both on resolvers used by RIPE Atlas probes and on open resolvers. Aside from adding more vantage points when measuring qmin adoption on open resolvers, we also increase the number of repetitions, which reveals conflicting resolvers – resolvers that support qmin for some queries but not for others. For the passive measurements at root and Top-Level Domain (TLD) name servers, we extend the analysis over a longer period of time, introduce additional sources, and filter out non-valid queries. Furthermore, our controlled experiments measure performance and result quality of newer versions of the qmin-enabled open source resolvers used in the previous study, with the addition of PowerDNS. Our results, using extended methods from previous work, show that the adoption of qmin has significantly increased since 2018. New controlled experiments also show a trend toward a higher number of packets used by resolvers and lower error rates in the DNS queries. Since qmin is a balance between performance and privacy, we further discuss the depth limit of minimizing labels and propose the use of a public suffix list for setting this limit.
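The idea behind qmin can be sketched in a few lines (a simplified illustration that ignores caching and the depth-limit discussion above): instead of sending the full query name to every name server along the delegation chain, the resolver reveals only one additional label per zone cut.

```python
def qmin_query_names(fqdn):
    """Return the successively longer names a qmin-enabled resolver
    exposes while walking the delegation chain, least-specific first,
    so each name server sees only one more label than its own zone."""
    labels = fqdn.rstrip(".").split(".")
    return [".".join(labels[-i:]) + "." for i in range(1, len(labels) + 1)]

print(qmin_query_names("www.example.com"))
# ['com.', 'example.com.', 'www.example.com.']
```

Without qmin, the root and TLD servers would both see the full name `www.example.com.`, which is exactly the exposure the mechanism limits.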
In this article we present a selection from a vast range of experiments run with MONROE, our open Experiment-as-a-Service (EaaS) platform for measurements and experimentation in Mobile Broadband Networks. We show that the platform can be used to benchmark network performance in a repeatable and controlled manner thanks to the collection of a rich set of geotagged metadata and the execution of discretionary user experiments. Indeed, with the sheer amount of data collected from 12 commercial mobile operators across Europe, MONROE offers an unprecedented opportunity to monitor, analyze and ultimately improve the status of current and future mobile broadband networks. We also show how flexibly the platform allows combining metadata and experimental data series, either during the experiments or by means of post-processing, and present results produced by our own experiments as well as comment on results obtained by external research groups and developers that have been granted access to our platform.
"Roam like Home" is the initiative of the European Commission (EC) to end the levy of extra charges when roaming within the European region. As a result, people are able to use data services more freely across Europe. However, the implications roaming solutions have on performance have not been carefully examined. This paper provides an in-depth characterization of the implications of international data roaming within Europe. We build a unique roaming measurement platform using 16 different mobile networks deployed in six countries across Europe. Using this platform, we measure different aspects of international roaming in 3G and 4G networks, including mobile network configuration, performance characteristics, and content discrimination. We find that operators adopt common approaches to implementing roaming, resulting in additional latency penalties of ∼60 ms or more, depending on geographical distance. Considering content accessibility, roaming poses additional constraints that lead to only minimal deviations when accessing content in the original country. However, geographical restrictions in the visited country make the picture more complicated and less intuitive.
This paper explores the design trade-offs required for an Internet transport protocol to effectively support web access. It identifies a set of distinct transport mechanisms and explores their use with a focus on multistreaming. The mechanisms are studied using a practical methodology that utilises the range of transport features provided by TCP and SCTP. The results demonstrate the relative benefit of key transport mechanisms and analyse how these impact web access performance. Our conclusions help identify the root causes of performance impairments and suggest appropriate choices guiding the design of a web transport protocol. Performing this analysis at the level of component transport mechanisms enables the results to be utilised in the design of new transport protocols, such as IETF QUIC.
Providing multi-connectivity services is an important goal for next generation wireless networks, where multiple access networks are available and need to be integrated into a coherent solution that efficiently supports both reliable and unreliable traffic. Based on virtual network interfaces and per-path congestion-controlled tunnels, the MP-DCCP-based multi-access aggregation framework presents a novel solution that flexibly supports different path schedulers and congestion control algorithms as well as reordering modules. The framework has been implemented in the Linux kernel and has been tested over different prototypes. Experimental results have shown that the overall performance strongly depends upon the congestion control algorithm used on the individual DCCP tunnels, denoted as CCID. In this paper, we present an implementation of the BBR (Bottleneck Bandwidth and Round-trip propagation time) congestion control algorithm for DCCP in the Linux kernel. We show how BBR is integrated into the MP-DCCP multi-access framework and evaluate its performance over both single- and multi-path environments. Our evaluation results show that BBR improves the performance compared to CCID2 (TCP-like Congestion Control) for multi-path scenarios due to its faster response to changes in the available bandwidth, which reduces latency and increases performance, especially for unreliable traffic. The MP-DCCP framework code, including the new CCID5, is available as open source.
Currently, improving the performance of Big Data in general, and velocity in particular, is challenging due to the inefficiency of current network management and the lack of coordination between the application layer and the network layer when making scheduling decisions. In this chapter, we discuss the role of the recently emerged software-defined networking (SDN) technology in helping the velocity dimension of Big Data. We start the chapter with a brief introduction to Big Data velocity, its characteristics, and the different modes of Big Data processing, followed by a brief explanation of how SDN can overcome the challenges of Big Data velocity. In the second part of the chapter, we describe in detail some proposed solutions that have applied SDN to improve Big Data performance in terms of shortened processing time in different Big Data processing frameworks, ranging from batch-oriented, MapReduce-based frameworks to real-time and stream-processing frameworks such as Spark and Storm. Finally, we conclude the chapter with a discussion of some open issues.
The emergence of two new technologies, namely Software Defined Networking (SDN) and Network Function Virtualization (NFV), has radically changed the development of network functions and the evolution of network architectures. These two technologies bring to mobile operators the promises of reducing costs, enhancing network flexibility and scalability, and shortening the time-to-market of new applications and services. With the advent of SDN and NFV and their offered benefits, mobile operators are gradually changing the way they architect their mobile networks to cope with the ever-increasing growth of data traffic and the massive number of new devices and network accesses, and to pave the way towards the upcoming fifth generation (5G) networking. This paper aims at providing a comprehensive survey of state-of-the-art research work which leverages SDN and NFV into the most recent mobile packet core network architecture, the Evolved Packet Core (EPC). The research work is categorized into smaller groups according to a proposed four-dimensional taxonomy reflecting the (1) architectural approach, (2) technology adoption, (3) functional implementation, and (4) deployment strategy. Thereafter, the research work is exhaustively compared based on the proposed taxonomy and some added attributes and criteria. Finally, the paper identifies and discusses some major challenges and open issues, such as scalability and reliability, optimal resource scheduling and allocation, management and orchestration, and network sharing and slicing, that arise from the taxonomy and comparison tables and need to be further investigated and explored.
Protection and automation in a smart grid environment often have stringent real-time communication requirements between devices within a substation as well as between distantly located substations. The Generic Object Oriented Substation Event (GOOSE) messaging service has been proposed to achieve this goal as it allows time-critical information to be transferred within a few milliseconds. However, the transmission of GOOSE messages is often limited to a small Local Area Network (LAN). In this paper, we propose the use of the fifth generation of mobile networks (5G) as a means to transport GOOSE messages in a large-scale smart grid environment. The end-to-end delay is measured between GOOSE devices over a 5G network with the focus on the core network, using the Open5GCore platform in a lab environment. Although there is a lack of a real radio access network, the experimental results confirm that the delay within the rest of the 5G network is small enough for it to be feasible for inter-substation GOOSE transmissions.
Protection and automation in a smart grid environment often have stringent real-time communication requirements between devices within a substation, as well as between distantly located substations. The Generic Object Oriented Substation Event (GOOSE) protocol has been proposed to achieve this goal as it allows time-critical information to be transferred within a few milliseconds. However, the transmission of GOOSE messages is often limited to a small Local Area Network (LAN). An earlier work has proposed to use the fifth generation of mobile networks (5G) as a means to transport IP-based GOOSE messages in a large-scale smart grid environment. On the basis of this work, this paper designs and implements an alternative solution for Ethernet-based GOOSE communication over a virtualized 5G core network that does not require any modification of the existing network protocol stack and thus is much easier to deploy. Our experimental results show that the delay introduced by the core network is sub-millisecond, while the one-way delay without a real radio access network, and without background traffic, is less than 1 ms. Moreover, these delays can be significantly reduced with a container-based deployment rather than a virtual machine-based one. Assuming a 1-ms delay budget for a 5G radio access network, our evaluation confirms that it is indeed feasible to use 5G for GOOSE transmission in IEC 61850 substation automation systems.
Communication within substation automation systems in a smart grid environment is often time-critical. The time required for exchanging information between intelligent devices should be within a few milliseconds. The International Electrotechnical Commission (IEC) 61850 standard has proposed the Generic Object Oriented Substation Event (GOOSE) protocol to achieve this goal. However, the transmission of GOOSE messages is often limited to a small Local Area Network (LAN). In this demo, we demonstrate the feasibility of using 5G for GOOSE-based time-critical communication in a large-scale smart-grid environment, and present a deployable 5G core solution using container-based virtualization technology. The radio part of the demo is emulated. The demo also shows that the delay introduced by the core network is sub-millisecond, while the one-way delay without a real radio access network is less than 1 ms, which is well below the total delay budget for GOOSE.
For many years, the continuous proliferation of mobile devices and their applications has generated a surge of signaling traffic in the control plane of the mobile packet core network. As a consequence, the control plane will potentially become a bottleneck if not properly managed. We focus on the load balancing of a virtualized and distributed Mobility Management Entity (vMME), which is the key control-plane element in the 4G and 5G non-standalone cores. Most existing works use simple and static load-balancing approaches such as round-robin and consistent hashing, which do not work well in a heterogeneous environment. In this paper, we developed three adaptive algorithms in which the balancing decision takes into account the dynamics of the system, such as the vMME load, the completion time of a request served by a vMME, and the number of pending requests queued at a vMME. The evaluation of our three proposed load-balancing algorithms in an Open5GCore testbed suggests that the latency-aware scheme shortens the completion time of the signaling requests by up to a factor of five compared to the static and dynamic schemes in cases where the link delays between the load balancer and the vMMEs differ significantly.
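A latency-aware balancing decision of the kind described above can be sketched as follows (our own illustration, not the paper's exact algorithm; the instance names and the cost estimate are assumptions): route each new request to the vMME with the lowest estimated completion time given its current queue.

```python
def pick_vmme(stats):
    """stats: name -> (pending_requests, avg_completion_time_ms).
    Route to the instance estimated to finish a newly queued request
    soonest, i.e. minimize (queue length + 1) * average service time."""
    return min(stats, key=lambda name: (stats[name][0] + 1) * stats[name][1])

# vmme1: (4+1)*10 = 50 ms beats vmme2: (1+1)*30 = 60 ms
print(pick_vmme({"vmme1": (4, 10.0), "vmme2": (1, 30.0)}))  # vmme1
```

Unlike round-robin or consistent hashing, this decision automatically adapts when one vMME is slower or sits behind a longer link delay.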
5G, with its great capabilities, is believed to play a crucial role in different vertical sectors such as automotive, healthcare, and energy. In this paper, we present a study on the adoption of 5G into a smart grid environment, in particular, a power grid substation automation system. In such a system, communication between electrical devices must often be completed within a few milliseconds. To verify the proposed solution, we conducted a set of experiments using the Generic Object Oriented Substation Event (GOOSE) protocol, a standard protocol used in power grid substation automation systems, over a virtualized 5G network. Our experimental results show that the delay introduced by the core network is sub-millisecond, while the one-way delay without a real radio access network, and without background traffic, is less than 1 ms. Moreover, these delays can be significantly reduced with a container-based deployment rather than a virtual machine-based one.
In this paper, we aim at tackling the scalability problem of the Mobility Management Entity (MME), which plays a crucial role in handling control plane traffic in the current 4G Evolved Packet Core as well as the next generation mobile core, 5G. One of the solutions to this problem is to virtualize the MME by applying Network Function Virtualization principles and then deploy it as a cluster of multiple virtual MME instances (vMMEs) with a front-end load balancer. Although several designs have been proposed, most of them assume the use of simple algorithms such as random and round-robin to balance the incoming traffic without any performance assessment. To this end, we implemented a weighted round-robin algorithm which takes into account the heterogeneity of resources such as the capacity of vMMEs. We compare this algorithm with a random and a round-robin algorithm under two different system settings. Experimental results suggest that carefully selected load balancing algorithms can significantly reduce the control plane latency as compared to simple random or round-robin schemes.
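A weighted round-robin dispatcher of the kind described above can be sketched in a few lines (a minimal illustration, not the paper's implementation; integer weights stand in for vMME capacities):

```python
import itertools

def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs. Cycle through the
    instances so that each one receives requests in proportion
    to its weight, i.e. its relative capacity."""
    schedule = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(schedule)

dispatch = weighted_round_robin([("vmme1", 3), ("vmme2", 1)])
print([next(dispatch) for _ in range(8)])
# vmme1 serves three requests for every one served by vmme2
```

This naive expansion sends consecutive bursts to the heavier instance; a production balancer would typically use a smooth variant (as in nginx) that interleaves the picks.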
In the fifth generation (5G) mobile networks, the number of user-plane gateways has increased, and, in contrast to previous generations, they can be deployed in a decentralized way and auto-scaled independently from their control-plane functions. Moreover, the performance of the user-plane gateways can be boosted with the adoption of advanced acceleration techniques such as Vector Packet Processing (VPP). However, the increased number of user-plane gateways has also made load balancing a necessity, something we find has so far received little attention. Moreover, the introduction of VPP poses a challenge to the design of the auto-scaling of user-plane gateways. In this paper, we address these two challenges by proposing a novel performance indicator for making better auto-scaling decisions, and by proposing three new dynamic load-balancing algorithms for the user plane of a VPP-based, softwarized 5G network. The novel performance indicator is estimated based on the VPP vector rate and is used as a threshold for the auto-scaling process. The dynamic load-balancing algorithms take into account the number of bearers allocated for each user-plane gateway and their VPP vector rate. We validate and evaluate our proposed solution in a 5G testbed. Our experiment results show that the scaling helps to reduce the packet latency for the user-plane traffic, and that our proposed load-balancing algorithms can give a better distribution of traffic load as compared to traditional static algorithms.
In the fifth generation (5G) mobile networks, the number of user plane functions has increased, and, in contrast to previous generations, they can be deployed in a decentralized way and auto-scaled independently from their control plane functions. Moreover, the performance of the user plane functions can be boosted with the adoption of advanced acceleration techniques such as Vector Packet Processing (VPP). However, the increased number of user plane functions has also made load balancing a necessity, something we find has so far received little attention. Moreover, the introduction of VPP poses a challenge to the design of the auto-scaling of user-plane functions. In this paper, we address these two challenges by proposing a novel performance indicator for making better auto-scaling decisions, and by proposing three new dynamic load-balancing algorithms for the user plane of a VPP-based, softwarized 5G network. The novel performance indicator is estimated based on the VPP vector rate, and is used as a threshold for the auto-scaling process. The dynamic load-balancing algorithms take into account the number of bearers allocated for each user plane function and their VPP vector rate. We validated and evaluated our proposed solution in a 5G testbed. Our experimental results show that the scaling helps to reduce the packet latency for the user plane traffic, and our proposed load-balancing algorithms give a better distribution of traffic load as compared to traditional static algorithms.
When designing a software module or system, a systems engineer must consider and differentiate between how the system responds to external and internal errors. External errors cannot be eliminated and must be tolerated by the system, while the number of internal errors should be minimized and the faults they result in should be detected and removed. This paper presents a development strategy based on design contracts and a case study of an industrial project in which the strategy was successfully applied. The goal of the strategy is to minimize the number of internal errors during the development of a software system while accommodating external errors. A distinction is made between weak and strong contracts. These two types of contracts are applicable to external and internal errors, respectively. According to the strategy, strong contracts should be applied initially to promote the correctness of the system. Before release, the contracts governing external interfaces should be weakened and error management of external errors enabled. This transformation of a strong contract to a weak one is harmless to client modules.
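The weak/strong distinction can be sketched with a toy function (our own illustration of the general design-by-contract idea, not code from the case study): a strong contract makes the caller responsible for the precondition, while a weak contract checks the input and reports external errors gracefully.

```python
def sqrt_strong(x):
    """Strong contract: the caller guarantees x >= 0. A violation is
    an internal error, surfaced loudly so the fault is found and
    removed during development."""
    assert x >= 0, "precondition violated: x must be non-negative"
    return x ** 0.5

def sqrt_weak(x):
    """Weak contract: bad input is tolerated and signaled, which suits
    external interfaces whose callers cannot be controlled."""
    if x < 0:
        return None  # report the external error instead of failing
    return x ** 0.5

print(sqrt_weak(-1.0), sqrt_weak(4.0))  # None 2.0
```

Weakening `sqrt_strong` into `sqrt_weak` before release is harmless to clients: every call that satisfied the strong contract still behaves identically.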
This paper proposes that fairness in wireless networks should be measured using one of the following new measures: the deterministic unfairness bound called the wireless absolute fairness bound (WAFB) or the statistical unfairness bound called the 99-percentile wireless absolute fairness bound (WAFB99). Compared with previous fairness definitions, the new fairness measures are better suited for measuring fairness of scheduling disciplines that exploit multiuser diversity. A new scheduling discipline called opportunistic proportional fair scheduling is defined. Numerical results show that the new scheduling discipline has slightly higher throughput and slightly better fairness than proportional fair scheduling.
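For reference, the classic proportional fair scheduler against which the new opportunistic variant is compared selects, in each slot, the user maximizing the ratio of instantaneous rate to average throughput (a textbook sketch, not the paper's algorithm):

```python
def pf_select(inst_rates, avg_thruput):
    """Pick the user maximizing instantaneous rate / average throughput:
    this exploits multiuser diversity (serve users on channel peaks)
    while keeping long-run fairness."""
    return max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / avg_thruput[i])

def pf_update(avg_thruput, inst_rates, chosen, tc=100.0):
    """Exponentially weighted update of each user's average throughput
    over a time constant of tc slots; only the served user adds rate."""
    return [(1 - 1/tc) * a + (1/tc) * (inst_rates[i] if i == chosen else 0.0)
            for i, a in enumerate(avg_thruput)]

# User 1 has the worse channel (8 < 10) but has been served far less,
# so its PF metric 8/2 = 4 beats user 0's 10/5 = 2
print(pf_select([10.0, 8.0], [5.0, 2.0]))  # 1
```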
Virtualization is central to cloud computing systems. It abstracts computing resources to be shared among multiple virtual machines (VMs) that can be easily managed to run multiple applications and services. To benefit from the advantages of cloud computing, and to cope with increasing traffic demands, telecom operators have adopted cloud computing. Telecom services and applications are, however, characterized by real-time responsiveness, strict end-to-end latency, and high reliability. Due to the inherent overhead of virtualization, the network performance of applications and services can be degraded. To improve the performance of emerging applications and services that demand stringent end-to-end latency, and to understand the network performance bottleneck of virtualization, a comprehensive performance measurement and analysis is required. To this end, we conducted controlled and detailed experiments to understand the impact of virtualization on end-to-end latency and the performance of transport protocols in a virtualized environment. We also provide a packet delay breakdown in the virtualization layer which helps in the optimization of hypervisor components. Our experimental results indicate that the end-to-end latency and packet delay in the virtualization layer increase with co-located VMs.
Internet services such as virtual reality, interactive cloud applications, and online gaming have strict quality-of-service requirements (e.g., low latency). However, the current Internet is not able to satisfy the low-latency requirements of these applications. This is because standard TCP induces high queuing delays when used by capacity-seeking traffic, which in turn results in unpredictable latency. The Low Latency Low Loss Scalable throughput (L4S) architecture aims to address this problem by combining scalable congestion controls (e.g., DCTCP) with early congestion signaling from the network. For incremental deployment, the L4S architecture defines a Dual Queue Coupled AQM that enables the safe coexistence of scalable and classic (e.g., Reno, Cubic, etc.) flows on the global Internet. The DualPI2 AQM is a Linux kernel implementation of a Dual Queue Coupled AQM. In this paper, we benchmark the DualPI2 AQM to validate experimental results reported in previous works that demonstrate the coexistence of scalable and classic congestion controls, and its low-latency service. Our results validate the coexistence of scalable and classic flows using the DualPI2 single-queue AQM, while the result with the dual queue shows neither rate nor window fairness between the flows.
The strict low-latency requirements of applications such as virtual reality and online gaming cannot be satisfied by the current Internet. This is due to the characteristics of classic TCP variants such as Reno and Cubic, which induce high queuing delays when used for capacity-seeking traffic, which in turn results in unpredictable latency. The Low Latency, Low Loss, Scalable throughput (L4S) architecture addresses this problem by combining scalable congestion controls such as DCTCP and TCP Prague with early congestion signaling from the network. It defines a Dual Queue Coupled (DQC) AQM that isolates low-latency traffic from the queuing delay of classic traffic while ensuring the safe coexistence of scalable and classic flows on the global Internet. In this paper, we benchmark the DualPI2 scheduler, a reference implementation of the DQC AQM, to validate some of the experimental results reported in previous works that demonstrate the coexistence of scalable and classic congestion controls and its low-latency service. Our results validate the coexistence of scalable and classic flows using the DualPI2 single-queue (SingleQ) AQM, and the queue latency isolation of scalable flows using the DualPI2 dual-queue (DualQ) AQM. However, the rate or window fairness between DCTCP without fair-queuing (FQ) pacing and TCP Cubic using the DualPI2 DualQ AQM deviates from the original results. We attribute the difference between our results and the original results to the sensitivity of the L4S architecture to traffic bursts and the burst sending pattern of the Linux kernel.
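The coupling at the heart of the DQC AQM can be illustrated with a small sketch. Following the DualQ Coupled AQM design, a base probability p' from the PI controller is squared to obtain the classic drop probability and scaled by a coupling factor k to obtain the L4S ECN-marking probability; the function name and toy interface below are ours, not part of the DualPI2 implementation.

```python
def dualq_probabilities(p_prime: float, k: float = 2.0):
    """Given the base probability p' from the PI controller, return
    (classic drop probability, L4S marking probability) per the DualQ
    coupling law: p_C = p'^2 and p_CL = k * p'. Squaring compensates
    for classic TCP's rate ~ 1/sqrt(p_C), so classic and scalable
    flows sharing the coupled AQM see roughly equal rates."""
    p_c = p_prime ** 2             # classic queue: drop probability
    p_cl = min(k * p_prime, 1.0)   # L4S queue: coupled ECN-marking probability
    return p_c, p_cl
```

With p' = 0.1 and the default k = 2, classic flows see a 1% drop probability while scalable flows see a 20% marking probability, matching the much shallower per-mark response of DCTCP-like senders.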
Datacenter applications generate a mix of short and long flows, which often have contrasting network performance requirements. While short flows are typically sensitive to their completion time, long flows are more or less deadline agnostic but demand high throughput. Despite the availability of multiple, parallel high-capacity paths inside a datacenter network, the achievable transport-layer performance for both latency-sensitive and capacity-demanding applications is far from optimal. The reason is partly the inefficiency of the transport protocols deployed inside datacenters. Existing transport protocols are either not capable of utilizing the multiple paths offered by datacenter topologies, e.g., Datacenter TCP (DCTCP), or unsuitable for latency-sensitive applications, e.g., Multipath TCP (MPTCP), due to the employed congestion detection schemes. To address this problem, we have designed a coupled multipath congestion control algorithm called Multipath Datacenter TCP (MDTCP). MDTCP builds upon MPTCP and uses Explicit Congestion Notification (ECN) signals to detect and react to congestion before queues overflow, as in DCTCP, offering both reduced latency and higher network utilization. The MDTCP congestion control has been implemented in the Linux kernel and in a packet-level network simulator. We evaluate MDTCP's performance extensively, both in a programmable datacenter network testbed and in large-scale simulations. The obtained results show that MDTCP outperforms DCTCP by reducing the average Flow Completion Time (FCT) by more than 1.6× at high load, and achieves similar performance to DCTCP at moderate network load. Moreover, it outperforms MPTCP by always achieving a lower average FCT. MDTCP also improves network utilization by 7% and 12% compared to MPTCP and DCTCP, respectively.
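A highly simplified sketch of the two ingredients MDTCP combines: DCTCP-style ECN processing per subflow, and an MPTCP-style coupled window increase that keeps the aggregate no more aggressive than a single-path flow. The class and function names are illustrative and do not correspond to the paper's or the kernel's actual code.

```python
class SubflowState:
    def __init__(self):
        self.cwnd = 10.0   # congestion window (packets)
        self.alpha = 0.0   # EWMA of the fraction of ECN-marked packets

def on_ack_window(sf, acked, marked, g=1 / 16):
    """Per-window update in the spirit of DCTCP: estimate the fraction F
    of ECN-marked packets, keep an EWMA alpha with gain g (DCTCP's
    default is 1/16), and on congestion shrink the window in proportion
    to alpha instead of halving it."""
    f = marked / max(acked, 1)
    sf.alpha = (1 - g) * sf.alpha + g * f
    if marked > 0:
        sf.cwnd = max(sf.cwnd * (1 - sf.alpha / 2), 1.0)

def coupled_increase(subflows, sf, mss=1.0):
    """Simplified MPTCP-style coupled increase: cap per-subflow growth by
    the total window so the multipath aggregate stays friendly to
    single-path flows sharing a bottleneck."""
    total = sum(s.cwnd for s in subflows)
    sf.cwnd += min(1.0 / total, 1.0 / sf.cwnd) * mss
```

The combination is what gives the described behavior: the ECN path reacts before queues overflow (low latency), while the coupled increase shifts traffic toward less congested paths (utilization).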
Queuing latency is one of the limiting factors in achieving the latency targets of emerging latency-sensitive Internet applications (e.g., the interactive web, real-time online gaming). It occurs when large capacity-seeking traffic bloats router buffers configured to allow full link utilization by standard TCP congestion controllers (e.g., TCP Reno, Cubic). The Low Latency, Low Loss and Scalable Throughput (L4S) architecture proposes to overcome the problem by combining scalable congestion controllers (e.g., DCTCP, TCP Prague) with early congestion signaling from the network. L4S defines the Dual Queue Coupled (DQC) AQM as a transition mechanism enabling scalable senders to coexist with standard congestion controllers. This paper extends the L4S Internet service to the multipath domain by using MDTCP, a scalable multipath congestion control for Multipath TCP (MPTCP). We evaluate the performance of MDTCP in a controlled network environment mimicking the L4S Internet service architecture. Our results indicate that MDTCP and TCP Prague achieve similar flow completion times for short flows, and outperform the non-scalable single-path and multipath congestion controls. MDTCP also improves multipath capacity utilization compared to the existing MPTCP congestion controllers and outperforms both TCP Prague and TCP Cubic. Although MDTCP achieves a lower FCT for medium flows than TCP Prague, it does not improve upon the classic CCs, owing to how it exits the slow-start phase. With regard to bottleneck capacity sharing, MDTCP never caused starvation when sharing a bottleneck with a single-path TCP Prague flow, and it is not severely affected by competing TCP Prague flows.
Network Function Virtualization (NFV) is a promising solution for telecom operators and service providers to improve business agility, by enabling fast deployment of new services and by making it possible to cope with increasing traffic volumes and service demands. NFV enables the virtualization of network functions, which can be deployed as virtual machines on general-purpose server hardware in cloud environments, effectively reducing deployment and operational costs. To benefit from the advantages of NFV, virtual network functions (VNFs) need to be provisioned with sufficient resources and perform without impacting network quality of service (QoS). To this end, this paper proposes a model for VNF placement and provisioning optimization that guarantees the latency requirements of the service chains. Our goal is to optimize resource utilization in order to reduce cost while satisfying QoS requirements such as end-to-end latency. We extend a related VNF placement optimization with a fine-grained latency model that includes virtualization overhead. The model is evaluated on a simulated network, and it provides placement solutions that ensure the required QoS guarantees.
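To illustrate the kind of constraint such a model enforces, the toy first-fit heuristic below places a chain of VNFs on servers only if the accumulated per-VNF latency, including a fixed virtualization-overhead term, stays within the chain's end-to-end budget. All names and numbers are illustrative; the paper proposes an optimization model, not this heuristic.

```python
VIRT_OVERHEAD = 0.2  # ms, illustrative per-VNF virtualization cost

def place_chain(chain, servers, latency_budget):
    """Toy first-fit placement of a service chain (a list of CPU demands)
    onto servers (dicts with a 'cpu' capacity and a 'delay' per VNF),
    rejecting any placement whose summed per-VNF delay plus
    virtualization overhead exceeds the latency budget. Returns
    (placement, total_latency) or None if the chain is infeasible."""
    placement, total_latency = [], 0.0
    for demand in chain:
        for i, srv in enumerate(servers):
            if srv["cpu"] >= demand:
                hop_latency = srv["delay"] + VIRT_OVERHEAD
                if total_latency + hop_latency > latency_budget:
                    continue  # this server would blow the budget
                srv["cpu"] -= demand
                placement.append(i)
                total_latency += hop_latency
                break
        else:
            return None  # no feasible server for this VNF
    return placement, total_latency
```

The interesting part relative to plain bin packing is the latency check: a server with spare CPU can still be rejected, which is exactly the coupling between provisioning and QoS that the paper's model captures at a much finer granularity.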
A packet-switched wireless cellular system with wide-area coverage and high throughput is proposed. It is designed to be cost effective and to provide high spectral efficiency. The high performance is achieved by the use of long-term channel predictions, in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. Calculations of the spectral efficiency of the downlink, based on reasonable simplifying assumptions, indicate that a very high capacity can be attained for moderate numbers of users and terminal antennas. We also briefly discuss other means of performance improvement, such as alternatives to standard TCP, interlayer interaction/communication, and the use of positioning information.
A packet-switched wireless cellular system with wide-area coverage, high throughput, and high spectral efficiency is proposed. Smart antennas at both base stations and mobiles improve the antenna gain and the signal-to-interference ratio. The small-scale fading is predicted in both time and frequency, and a slotted OFDM radio interface is used, in which time-frequency bins are allocated adaptively to different mobile users based on their predicted channel quality. This enables efficient scheduling among sectors and users as well as fast adaptive modulation and power control. Here, we estimate the spectral efficiency of the suggested downlink. The resulting channel capacity grows with the number of simultaneous users and with the number of antenna elements in the terminals. A high efficiency, around 4 bits/s/Hz, is attained already for moderate numbers of users and terminal antennas. An outline is given of the research pursued within the PCC Wireless IP Project to improve and investigate this type of system.
The promise of multipath transport is to aggregate bandwidth and to improve resource utilisation and reliability. We demonstrate in this paper that the way multipath coupled congestion control is defined today (RFC 6356) leads to sub-optimal resource utilisation when network paths are mainly disjoint, i.e., when they do not share a bottleneck. With the growing interest in standardising Multipath QUIC (MPQUIC), we implement the practical shared bottleneck detection (SBD) algorithm from RFC 8382 in MPQUIC, namely MPQUIC-SBD. We evaluate MPQUIC-SBD through extensive emulation experiments in the context of video streaming. We show that MPQUIC-SBD correctly detects shared bottlenecks over 90% of the time as the video segments' size increases, depending on the Adaptive Bitrate (ABR) algorithm. In non-shared bottleneck scenarios, MPQUIC-SBD yields video throughput gains of more than 13% compared to MPQUIC, which directly translates into better video quality metrics.
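The intuition behind the SBD algorithm is that flows traversing the same bottleneck see correlated one-way-delay (OWD) behavior. The sketch below computes two of the summary statistics in the spirit of RFC 8382 (a skewness estimate and a variability estimate) and greedily groups flows whose statistics match; the real algorithm uses windowed, smoothed estimates plus an oscillation measure, so this is a deliberately reduced illustration with names of our choosing.

```python
from statistics import mean

def sbd_stats(owd_samples):
    """Simplified per-flow summary statistics: skew_est (balance of
    samples below vs. above the mean OWD) and var_est (mean absolute
    deviation of the OWD)."""
    m = mean(owd_samples)
    skew = sum(1 if s < m else -1 for s in owd_samples) / len(owd_samples)
    var = mean(abs(s - m) for s in owd_samples)
    return skew, var

def group_flows(flow_stats, skew_tol=0.1, var_tol=0.2):
    """Greedy grouping: flows whose skew and variability estimates fall
    within the tolerances of a group's first member are assumed to
    share a bottleneck."""
    groups = []
    for fid, (skew, var) in flow_stats.items():
        for g in groups:
            g_skew, g_var = flow_stats[g[0]]
            if abs(skew - g_skew) <= skew_tol and abs(var - g_var) <= var_tol:
                g.append(fid)
                break
        else:
            groups.append([fid])
    return groups
```

Once flows are grouped, coupled congestion control is applied only within a group, which is what lets MPQUIC-SBD run subflows on disjoint paths independently and recover the throughput that RFC 6356-style coupling gives away.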
Concerns have been raised in the past several years that introducing new transport protocols on the Internet has become increasingly difficult, not least because there is no agreed-upon way for a source end host to find out if a transport protocol is supported all the way to a destination peer. A solution to a similar problem, finding out support for IPv6, has been proposed and is currently being deployed: the Happy Eyeballs (HE) mechanism. HE has also been proposed as an efficient way for an application to select an appropriate transport protocol. Still, there are few, if any, performance evaluations of transport HE. This paper demonstrates that transport HE could indeed be a feasible solution to the transport support problem. The paper evaluates HE between TCP and SCTP using TLS-encrypted and unencrypted traffic, and shows that although there is indeed a cost in terms of CPU load to introduce HE, the cost is relatively small, especially in comparison with the cost of using TLS encryption. Moreover, our results suggest that HE has a marginal impact on memory usage. Finally, by introducing caching of previous connection attempts, the additional cost of transport HE could be significantly reduced.
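The racing idea can be sketched in a few lines. The version below staggers connection attempts so that a preferred transport gets a head start and returns the first attempt to complete; real transport HE would wrap actual TCP and SCTP connect() calls (SCTP is not in the Python standard library, so stubs stand in here) and, as suggested above, would cache the winner to cut the cost of later connections.

```python
import asyncio

async def happy_eyeballs(attempts, head_start=0.05):
    """Race transport connection attempts. `attempts` is an ordered list
    of (name, connect) pairs, most-preferred first; each later attempt
    is delayed by one more `head_start` so the preferred transport wins
    ties. Returns the name of the first attempt to complete; connection
    failures are not handled in this sketch."""
    tasks = []
    for i, (name, connect) in enumerate(attempts):
        async def attempt(name=name, connect=connect, delay=i * head_start):
            await asyncio.sleep(delay)  # stagger the race
            await connect()             # would be a real connect() call
            return name
        tasks.append(asyncio.ensure_future(attempt()))
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()                   # abandon the losing attempts
    return next(iter(done)).result()
```

The CPU cost measured in the paper comes from exactly this pattern: losing attempts consume handshake work before being abandoned, which is why caching previous outcomes reduces the overhead so effectively.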
It is widely recognized that the Internet transport layer has become ossified, to the point where further evolution is hard or even impossible. This is a direct consequence of the ubiquitous deployment of middleboxes that hamper the deployment of new transports, aggravated further by the limited flexibility of the application programming interface (API) typically presented to applications. To tackle this problem, a wide range of solutions have been proposed in the literature, each aiming to address a particular aspect. Yet, no single proposal has emerged that is able to enable the evolution of the transport layer. In this paper, after an overview of the main issues and reasons for transport-layer ossification, we survey proposed solutions and discuss their potential and limitations. The survey is divided into five parts, each covering a set of point solutions for a different facet of the problem space: (1) designing middlebox-proof transports; (2) signaling for facilitating middlebox traversal; (3) enhancing the API between applications and the transport layer; (4) discovering and exploiting end-to-end capabilities; and (5) enabling user-space protocol stacks. Based on this analysis, we identify further development needs toward an overall solution. We argue that the development of a comprehensive transport-layer framework, able to facilitate the integration and cooperation of specialized solutions in an application-independent and flexible way, is a necessary step toward making the Internet transport architecture truly evolvable. To this end, we identify the requirements for such a framework and provide insights for its development.
The unprecedented growth of user-generated content, driven by the proliferation of social networking applications, cellular-based video surveillance, and device-to-device (D2D) communication, makes cellular uplink communication an attractive research topic. In this paper, we conduct a systematic evaluation and measurement analysis to characterize cellular uplink traffic and compare its interplay with different TCP congestion control algorithms (CCAs), namely NewReno, Cubic, and BBR, in both stationary and mobility scenarios. The evaluation encompasses average throughput, average round-trip time (RTT), fairness among simultaneous flows, and packet retransmissions. The intended behavior of BBR has been observed in the LTE uplink, but some severe issues, such as a lack of fairness among simultaneous flows and massive on-device packet losses, have also been observed. We observe that the lack of fairness among simultaneous flows can unpredictably change the throughput of multi-flow applications.