Cellular Internet of Things (CIoT) is a Low-Power Wide-Area Network (LPWAN) technology. It aims for cheap, low-complexity IoT devices that enable large-scale deployments and wide-area coverage. Moreover, to make large-scale deployments of CIoT devices in remote and hard-to-access locations possible, a long device battery life is one of the main objectives of these devices. To this end, 3GPP has defined several energy-saving mechanisms for CIoT technologies, not least for the Narrow-Band Internet of Things (NB-IoT) technology, one of the major CIoT technologies. Examples of mechanisms defined include CONNECTED-mode DRX (cDRX), Release Assistance Indicator (RAI), and Power Saving Mode (PSM). This paper considers the impact of the essential energy-saving mechanisms, especially cDRX and RAI, on minimizing the energy consumption of NB-IoT devices. The paper uses a purpose-built NB-IoT simulator that has been tested in terms of its built-in energy-saving mechanisms and validated with real-world NB-IoT measurements. The simulated results show that it is possible to save 70%-90% in energy consumption by enabling cDRX and RAI. In fact, the results suggest that a battery life of 10 years is only achievable provided the cDRX, RAI, and PSM energy-saving mechanisms are correctly configured and used.
This paper studies the impact of tunable parameters in the NB-IoT stack on the energy consumption of a user equipment (UE), e.g., a wireless sensor. NB-IoT is designed to enable massive machine-type communications while providing a battery lifetime of up to 10 years. To save battery power, the UE is in a dormant, unreachable state most of the time. Still, during the CONNECTED and IDLE states, correct tuning of critical parameters, like Discontinuous Reception (DRX) and extended Discontinuous Reception (eDRX), respectively, is essential to save battery power. Moreover, the DRX and eDRX actions relate to various parameters which need to be tuned in order to achieve a required UE battery lifetime. The objective of this paper is to observe the influence of an appropriate tuning of these parameters to reduce the risk of an early battery drainage.
In this paper, we study the energy consumption of Narrowband IoT (NB-IoT) devices. The paper suggests that key to saving energy for NB-IoT devices is the usage of full Discontinuous Reception (DRX), including the use of connected-mode DRX (cDRX): In some cases, cDRX reduced the energy consumption over a 10-year period by as much as 50%. However, the paper also suggests that tunable parameters, such as the inactivity timer, do have a significant impact. On the basis of our findings, guidelines are provided on how to tune the NB-IoT device so that it meets the target of the 3GPP, i.e., a 5-Wh battery should last for at least 10 years. It is further evident from our results that the energy consumption is largely dependent on the intensity and burstiness of the traffic, and thus could be significantly reduced if data is sent in bursts with less intensity, irrespective of cDRX support.
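To put the 3GPP target just mentioned into perspective, the following back-of-the-envelope budget (our own arithmetic, not a result from the paper) shows how little average power a 5-Wh battery allows over 10 years:

```latex
% Average power budget implied by a 5 Wh battery lasting 10 years
% (one year is approximately 8766 hours):
\[
  P_{\mathrm{avg}} \le \frac{E_{\mathrm{batt}}}{T}
                  =   \frac{5\,\mathrm{Wh}}{10 \times 8766\,\mathrm{h}}
                  \approx 57\,\mu\mathrm{W}
\]
```

Any configuration that keeps the radio awake needlessly eats quickly into this budget, which is why mechanisms such as cDRX, RAI, and PSM are decisive for the 10-year target.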
The Cellular Internet of Things (CIoT), a new paradigm, paves the way for a large-scale deployment of IoT devices. CIoT promises enhanced coverage and massive deployment of low-cost IoT devices with an expected battery life of up to 10 years. However, such a long battery life can only be achieved provided the CIoT device is configured with energy efficiency in mind. This paper conducts a comprehensive survey on energy-saving solutions in 3GPP-based CIoT networks. In comparison to current studies, the contribution of this paper is the classification and an extensive analysis of existing energy-saving solutions for CIoT, e.g., the configuration of particular parameter values and software modifications of transport- or radio-layer protocols, while also stressing key parameters impacting the energy consumption such as the frequency of data reporting, discontinuous reception cycles (DRX), and Radio Resource Control (RRC) timers. In addition, we discuss shortcomings, limitations, and possible opportunities which can be investigated in the future to reduce the energy consumption of CIoT devices.
Connected vehicles can make road traffic safer and more efficient, but require the mobile networks to handle time-critical applications. Using the MONROE mobile broadband measurement testbed, we conduct a multi-access measurement study on buses. The objective is to understand what network performance connected vehicles can expect in today's mobile networks, in terms of transaction times and availability. The goal is also to understand to what extent access to several operators in parallel can improve communication performance. In our measurement experiments, we repeatedly transfer warning messages from moving buses to a stationary server. We triplicate the messages and always perform three transactions in parallel over three different cellular operators. This creates a dataset with which we can compare the operators in an objective way and with which we can study the potential for multi-access. In this paper, we use the triple-access dataset to evaluate single-access selection strategies, where one operator is chosen for each transaction. We show that if we have access to three operators and for each transaction choose the operator with the best access technology and best signal quality, then we can significantly improve availability and transaction times compared to the individual operators. The median transaction time improves by 6% compared to the best single operator and by 61% compared to the worst single operator. The 90th-percentile transaction time improves by 23% compared to the best single operator and by 65% compared to the worst single operator.
Encryption on the Internet is as pervasive as ever. This has protected communications and enhanced the privacy of users. Unfortunately, at the same time malware is also increasingly using encryption to hide its operation. The detection of such encrypted malware is crucial, but the traditional detection solutions assume access to payload data. To overcome this limitation, such solutions employ traffic decryption strategies that have severe drawbacks. This paper studies the usage of encryption for malicious and benign purposes using large datasets and proposes a machine learning based solution to detect malware using connection and TLS metadata without any decryption. The classification is shown to be highly accurate with high precision and recall rates by using a small number of features. Furthermore, we consider the deployment aspects of the solution and discuss different strategies to reduce the false positive rate.
Levenshtein distance is well known for its use in comparing two strings for similarity. However, the set of considered edit operations used when comparing can be reduced in a number of situations. In such cases, the application of the generic Levenshtein distance can result in degraded detection and computational performance. Other metrics in the literature enable limiting the considered edit operations to a smaller subset. However, the possibility where a difference can only result from deleted bytes is not yet explored. To this end, we propose an insert-only variation of the Levenshtein distance to enable comparison of two strings for the case in which differences occur only because of missing bytes. The proposed distance metric, named slice distance, is formally presented, and its computational complexity is discussed. We also provide a discussion of the potential security applications of the slice distance.
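The abstract only names the slice distance; as a rough illustration, here is a minimal Python sketch of an insert-only edit distance under the reading that one string may differ from the other solely by missing bytes. The function name and the exact semantics are our reconstruction from the abstract, not the paper's formal definition.

```python
import math

def insert_only_distance(s: bytes, t: bytes) -> float:
    """Minimum number of insertions that turn s into t.

    If s is a subsequence of t, every difference is explained by
    bytes missing from s, and the distance is len(t) - len(s).
    Otherwise no sequence of insertions suffices.
    """
    it = iter(t)
    # Greedy subsequence test: each byte of s must occur in t, in order.
    if all(b in it for b in s):
        return len(t) - len(s)
    return math.inf

# b"hlo" is b"hello" with two bytes sliced out, so the distance is 2.
assert insert_only_distance(b"hlo", b"hello") == 2
# Reordered bytes cannot be explained by deletions alone.
assert insert_only_distance(b"ohl", b"hello") == math.inf
```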
Traditional security mechanisms such as signature-based intrusion detection systems (IDSs) attempt to find a perfect match of a set of signatures in network traffic. Such IDSs depend on the availability of a complete application data stream. With emerging protocols such as Multipath TCP (MPTCP), this precondition cannot be ensured, resulting in false negatives and IDS evasion. On the other hand, if approximate signature matching is used instead in an IDS, a potentially high number of false positives make the detection impractical. In this paper, we show that, by using a specially tailored partial signature matcher and knowledge about MPTCP semantics, the Snort3 IDS can be empowered with partial signature detection. Additionally, we uncover the type of Snort3 rules suitable for the task of partial matching. Experimental results with these rules show a low false positive rate for benign traffic and high detection coverage for attack traffic.
Multipath TCP (MPTCP) is a proposed extension to TCP that enables a number of performance advantages that have not been offered before. While the protocol specification is close to being finalized, there still remain some unaddressed challenges regarding the deployment and security implications of the protocol. This work attempts to tackle some of these concerns by proposing and implementing MPTCP aware security services and deploying them inside a proof of concept MPTCP proxy. The aim is to enable hosts, even those without native MPTCP support, to securely benefit from the MPTCP performance advantages. Our evaluations show that the security services that are implemented enable proper intrusion detection and prevention to thwart potential attacks as well as threshold rules to prevent denial of service (DoS) attacks.
We present the latency-aware multipath scheduler ZQTRTT that takes advantage of the multipath opportunities in information-centric networking. The goal of the scheduler is to use the (single) lowest latency path for transaction-oriented flows, and use multiple paths for bulk data flows. A new estimator called zero queue time ratio is used for scheduling over multiple paths. The objective is to distribute the flow over the paths so that the zero queue time ratio is equal on the paths, that is, so that each path is 'pushed' equally hard by the flow without creating unwanted queueing. We make an initial evaluation using simulation that shows that the scheduler meets our objectives.
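As a rough illustration of the balancing idea described above, the following toy Python sketch shifts traffic shares toward paths whose zero queue time ratio is above the mean. The update rule and all names are our own simplification for exposition, not the actual ZQTRTT algorithm.

```python
def rebalance(weights, zqt_ratio, step=0.05):
    """One toy rebalancing step: paths with an above-average zero
    queue time ratio are 'pushed' less hard and receive more traffic;
    paths below the average shed traffic until the ratios even out."""
    mean = sum(zqt_ratio) / len(zqt_ratio)
    new = [max(w + step * (r - mean), 0.0)
           for w, r in zip(weights, zqt_ratio)]
    total = sum(new) or 1.0          # guard against an all-zero vector
    return [w / total for w in new]

# Path 0 shows queueing (low ratio), so its share shrinks slightly.
print(rebalance([0.5, 0.5], [0.2, 0.9]))  # -> [0.4825, 0.5175]
```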
Information-centric networking (ICN) has been introduced as a potential future networking architecture. ICN promises an architecture that makes information independent from location, application, storage, and transportation. Still, it is not without challenges. Notably, there are several outstanding issues regarding congestion control: Since ICN is more or less oblivious to the location of information, it opens up for a single application flow to have several sources, something which blurs the notion of transport flows, and makes it very difficult to employ traditional end-to-end congestion control schemes in these networks. Instead, ICN networks often make use of hop-by-hop congestion control schemes. However, these schemes are also tainted with problems, e.g., several of the proposed ICN congestion controls assume fixed link capacities that are known beforehand. Since this seldom is the case, this paper evaluates the consequences, in terms of latency, throughput, and link usage, that variable link capacities have on a hop-by-hop congestion control scheme, such as the one employed by the Multipath-aware ICN Rate-based Congestion Control (MIRCC). The evaluation was carried out in the OMNeT++ simulator, and demonstrates how seemingly small variations in link capacity significantly deteriorate both latency and throughput, and often result in inefficient network link usage.
Information-centric networking (ICN) with its design around named-based forwarding and in-network caching holds great promises to become a key architecture for the future Internet. Still, despite its attractiveness, there are many open questions that need to be answered before wireless ICN becomes a reality, not least about its congestion control: Many of the proposed hop-by-hop congestion control schemes assume a fixed and known link capacity, something that rarely – if ever – holds true for wireless links. As a first step, this paper demonstrates that although these congestion control schemes are able to fairly well utilise the available wireless link capacity, they greatly fail to keep the link delay down. In fact, they essentially offer the same link delay as in the case with no hop-by-hop, only end-to-end, congestion control. Secondly, the paper shows that by complementing these congestion control schemes with an easy-to-implement, packet-train link estimator, we reduce the link delay to a level significantly lower than what is obtained with only end-to-end congestion control, while still being able to keep the link utilisation at a high level.
This document describes the design and implementation of the 5GENESIS Monitoring & Analytics (M&A) framework in its Release B, developed within Task T3.3 of the project work plan. M&A Release B leverages and extends M&A Release A, which was documented in the previous Deliverable D3.5 [1]. In particular, we present new features and enhancements introduced in this new Release compared to Release A. We also report some examples of usage of the M&A framework, in order to showcase its integration in the 5GENESIS Reference Architecture.
This demo presents the MONROE distributed platform and how it can be used to implement measurement and assessment experiments with operational mobile broadband networks (MBBs). MONROE provides registered experimenters with open access to hundreds of nodes, distributed over several European countries and equipped with multiple MBB connections, and a backend system that collects the measurement results. Experiments are scheduled through a user-friendly web client, with no need to directly access the nodes. The platform further embeds tools for real-time traffic flow analysis and a powerful visualization tool.
Open experimentation with operational Mobile Broadband (MBB) networks in the wild is currently a fundamental requirement of the research community in its endeavor to address the need for innovative solutions for mobile communications. Even more, there is a strong need for objective data about the stability and performance of MBB (e.g., 3G/4G) networks, and for tools that rigorously and scientifically assess their status. In this paper, we introduce the MONROE measurement platform: an open access and flexible hardware-based platform for measurements and custom experimentation on operational MBB networks. The MONROE platform enables accurate, realistic and meaningful assessment of the performance and reliability of 11 MBB networks in Europe. We report on our experience designing, implementing and testing the solution we propose for the platform. We detail the challenges we overcame while building and testing the MONROE testbed and argue our design and implementation choices accordingly. We describe and exemplify the capabilities of the platform and the wide variety of experiments that external users already perform using the system.
Mobile broadband (MBB) networks underpin numerous vital operations of the society and are arguably becoming the most important piece of the communications infrastructure. In this demo paper, our goal is to showcase the potential of a novel multi-homed MBB platform for measuring, monitoring and assessing the performance of MBB services in an objective manner. Our platform, MONROE, is composed of hundreds of nodes scattered over four European countries and a backend system that collects the measurement results. Through a user-friendly web client, the experimenters can schedule and deploy their experiments. The platform further embeds traffic analysis tools for real-time traffic flow analysis and a powerful visualization tool.
This paper presents a technique to improve the performance of TCP and the utilization of wireless networks. Wireless links exhibit high rates of bit errors, compared to communication over wireline or fiber. Since TCP cannot separate packet losses due to bit errors versus congestion, all losses are treated as signs of congestion and congestion avoidance is initiated. This paper explores the possibility of accepting TCP packets with an erroneous checksum, to improve network performance for those applications that can tolerate bit errors. Since errors may be in the TCP header as well as the payload, the possibility of recovering the header is discussed. An algorithm for this recovery is also presented. Experiments with an implementation have been performed, which show that large improvements in throughput can be achieved, depending on link and error characteristics.
This paper presents a wireless link and network emulator, based upon the "Wireless IP" 4G system proposal from Uppsala University and partners. In wireless fading downlinks (base to terminals), link-level frames are scheduled and the transmission is adapted on a fast time scale. With fast link adaptation and fast link level retransmission, the fading properties of wireless links can to a large extent be counteracted at the physical and link layers. A purpose of the emulator is to investigate the resulting interaction with transport layer protocols. The emulator is built on Internet technologies, and is installed as a gateway between communicating hosts. The paper gives an overview of the emulator design, and presents preliminary experiments with three different TCP variants. The results illustrate the functionality of the emulator by showing the effect of changing link layer parameters on the different TCP variants.
This paper presents results from an experimental study of TCP in a wireless 4G evaluation system. Test-bed results on transport layer performance are presented and analyzed in relation to several link layer aspects. The aspects investigated are the impact of channel prediction errors, channel scheduling, delay, and adaptive modulation switch level on TCP performance. The paper contributes a cross-layer analysis of the interaction between symbol modulation levels, different scheduling strategies, channel prediction errors, and the effect of the resulting frame retransmissions on TCP. The paper also shows that highly persistent ARQ with fast link retransmissions does not interact negatively with the TCP retransmission timer, even for short round-trip delays.
This paper presents a wireless link and network emulator, along with experiments and validation against the "Wireless IP" 4G system proposal from Uppsala University and partners. In wireless fading downlinks (base to terminals) link-level frames are scheduled and the transmission is adapted on a fast time scale. With fast link adaptation and fast link level retransmission, the fading properties of wireless links can to a large extent be counteracted at the physical and link layers. The emulator has been used to experimentally investigate the resulting interaction between the transport layer and the link layer. The paper gives an overview of the emulator design, and presents experimental results with three different TCP variants in combination with various link layer characteristics.
The performance of applications in wireless networks is partly dependent upon the link configuration. Link characteristics vary with frame retransmission persistency, link frame retransmission delay, adaptive modulation strategies, coding, and more. The link configuration and channel conditions can lead to packet loss, delay and delay variations, which impact different applications in different ways. A bulk transfer application may tolerate delays to a large extent, while packet loss is undesirable. On the other hand, real-time interactive applications are sensitive to delay and delay variations, but may tolerate packet loss to a certain extent. This paper contributes a study of the effect of link frame retransmission persistency and delay on packet loss and latency for real-time interactive applications. The results indicate that a reliable retransmission mechanism with fast link retransmissions in the range of 2-8 ms is sufficient to provide an upper delay bound of 50 ms over the wireless link, which is well within the delay budget of voice over IP applications.
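One way to see why 2-8 ms link retransmissions fit the stated bound (our illustrative arithmetic, not taken from the paper): even several consecutive retransmissions at the slowest spacing stay within the 50 ms budget.

```latex
% Illustrative worst case: N consecutive link retransmissions
% spaced d apart (here N = 6, d = 8 ms):
\[
  D_{\mathrm{link}} \approx N \cdot d
  = 6 \times 8\,\mathrm{ms}
  = 48\,\mathrm{ms} \le 50\,\mathrm{ms}
\]
```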
This paper presents a wireless link and network emulator for the "Wireless IP" 4G system proposal from Uppsala University and partners. In wireless fading downlinks (base to terminals) link-level frames are scheduled and the transmission is adapted on a fast time scale. With fast link adaptation and fast link level retransmission, the fading properties of wireless links can to a large extent be counteracted at the physical and link layers. The emulator has been used to experimentally investigate the resulting interaction between the transport layer and the physical/link layer in such a downlink. The paper introduces the Wireless IP system, describes the emulator design and implementation, and presents experimental results with TCP in combination with various physical/link layer parameters. The impact of link layer ARQ persistency, adaptive modulation, prediction errors and simple scheduling are all considered.
The existence of excessively large and persistently full network buffers, known as bufferbloat, has recently gained attention as a major performance problem for delay-sensitive applications. One important network scenario where bufferbloat may occur is cellular networks.
This paper investigates the interaction between TCP congestion control and buffering in cellular networks. Extensive measurements have been performed in commercial 3G, 3.5G and 4G cellular networks, with a mix of long and short TCP flows using the CUBIC, NewReno and Westwood+ congestion control algorithms. The results show that the completion times of short flows increase significantly when concurrent long flow traffic is introduced. This is caused by increased buffer occupancy from the long flows. In addition, for 3G and 3.5G the completion times are shown to depend significantly on the congestion control algorithms used for the background flows, with CUBIC leading to significantly larger completion times.
The successful rollout of fifth-generation (5G) networks requires a full understanding of the behavior of the propagation channel, taking into account the signal formats and the frequencies standardized by the Third Generation Partnership Project (3GPP). In the past, channel characterization for 5G has been addressed mainly based on the measurements performed on dedicated links in experimental setups. This paper presents a state-of-the-art contribution to the characterization of the outdoor-to-indoor radio channel in the 3.5 GHz band, based on experimental data for commercial, deployed 5G networks, collected during a large-scale measurement campaign carried out in the city of Rome, Italy. The analysis presented in this work focuses on downlink, outdoor-to-indoor propagation for two operators adopting two different beamforming strategies, single wide-beam and multiple synchronization signal blocks (SSB) based beamforming; it is indeed the first contribution studying the impact of beamforming strategy in real 5G networks. The time and power-related channel characteristics, i.e., mean excess delay and Root Mean Square (RMS) delay spread, path loss, and K-factor are studied for the two operators in multiple measurement locations. The analysis of time and power-related parameters is supported and extended by a correlation analysis between each pair of parameters. The results show that beamforming strategy has a marked impact on propagation. A single wide-beam transmission leads, in fact, to lower RMS delay spread and lower mean excess delay compared to a multiple SSB-based transmission strategy. In addition, the single wide-beam transmission system is characterized by a smaller path loss and a higher K-factor, suggesting that the adoption of a multiple SSB-based transmission strategy may have a negative impact on downlink performance.
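For reference, the time-dispersion parameters named above have standard textbook definitions in terms of the power delay profile, with $P_k$ the power and $\tau_k$ the delay of the $k$-th multipath component (these formulas are general definitions, not reproduced from the paper):

```latex
% Mean excess delay and RMS delay spread from a power delay profile.
\[
  \bar{\tau} = \frac{\sum_k P_k \tau_k}{\sum_k P_k},
  \qquad
  \sigma_\tau = \sqrt{\overline{\tau^2} - \left(\bar{\tau}\right)^2},
  \quad \text{where} \quad
  \overline{\tau^2} = \frac{\sum_k P_k \tau_k^2}{\sum_k P_k}
\]
```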
Understanding radio propagation characteristics and developing channel models is fundamental to building and operating wireless communication systems. Among other uses, channel characterization and modeling can be used for coverage and performance analysis and prediction. Within this context, this paper describes a comprehensive dataset of channel measurements performed to analyze outdoor-to-indoor propagation characteristics in the mid-band spectrum identified for the operation of 5th Generation (5G) cellular systems. Previous efforts to analyze outdoor-to-indoor propagation characteristics in this band were made by using measurements collected on dedicated, mostly single-link setups. Hence, measurements performed on deployed and operational 5G networks are still lacking in the literature. To fill this gap, this paper presents a dataset of measurements performed over commercial 5G networks. In particular, the dataset includes measurements of channel power delay profiles from two 5G networks in Band n78, i.e., 3.3–3.8 GHz. Such measurements were collected at multiple locations in a large office building in the city of Rome, Italy by using the Rohde & Schwarz (R&S) TSMA6 network scanner during several weeks in 2020 and 2021. A primary goal of the dataset is to provide an opportunity for researchers to investigate a large set of 5G channel measurements, aiming at analyzing the corresponding propagation characteristics toward the definition and refinement of empirical channel propagation models.
Mobile nodes are typically equipped with multiple radios and can connect to multiple radio access networks (e.g. WiFi, LTE and 5G). Consequently, it is important to design mechanisms that efficiently manage multiple network interfaces for aggregating the capacity, steering of traffic flows or switching flows among multiple interfaces. While such multi-access solutions have the potential to increase the overall traffic throughput and communication reliability, the variable latencies on different access links introduce packet delay variation, which has a negative effect on the application quality of service and user quality of experience. In this paper, we present a new IP-compatible multipath framework for heterogeneous access networks. The framework uses Multipath Datagram Congestion Control Protocol (MP-DCCP) - a set of extensions to regular DCCP - to enable a transport connection to operate across multiple access networks simultaneously. We present the design of the new protocol framework and show simulation and experimental testbed results that (1) demonstrate the operation of the new framework, and (2) demonstrate the ability of our solution to manage significant packet delay variation caused by the asymmetry of network paths, by applying pluggable packet scheduling or reordering algorithms.
Networked systems have recently aimed to use multiple access networks in parallel to increase resiliency, availability and capacity. However, different paths may have different latency characteristics, which may lead to out-of-order packet delivery. This may severely impact both the end-to-end application performance and the capacity utilisation of multiaccess systems. In this paper, we show that in-network support for packet reordering for multiaccess systems that are based on multiple transport layer tunnels is beneficial for several application types. Our findings are applicable to TCP and QUIC traffic in the 3GPP ATSSS context, where we use the MP-DCCP tunneling framework with a buffer-based packet reordering approach that uses a dynamic timing threshold to cope with variation of path delays over time. We demonstrate achievable performance gains for a wide range of path latency differences and end-to-end round trip times when using different in-network reordering algorithms.
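As a rough sketch of a buffer-based reordering approach with a dynamic timing threshold, consider the following toy Python class. The EWMA threshold update and all names are our own simplification for illustration, not the mechanism evaluated in the paper.

```python
import time
import heapq

class ReorderBuffer:
    """Toy in-network reordering buffer: holds out-of-order packets
    until the sequence gap fills or a dynamic timeout expires."""

    def __init__(self, alpha=0.1):
        self.next_seq = 0
        self.heap = []           # entries: (seq, arrival_time, payload)
        self.threshold = 0.01    # seconds; adapted over time
        self.alpha = alpha

    def update_threshold(self, path_delay_diff):
        # EWMA so the timeout tracks slowly varying path asymmetry.
        self.threshold += self.alpha * (path_delay_diff - self.threshold)

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, time.monotonic(), payload))

    def pop_ready(self):
        """Release in-order packets, plus stale ones past the timeout."""
        out, now = [], time.monotonic()
        while self.heap:
            seq, arrived, payload = self.heap[0]
            if seq <= self.next_seq or now - arrived > self.threshold:
                heapq.heappop(self.heap)
                out.append(payload)
                self.next_seq = max(self.next_seq, seq + 1)
            else:
                break   # hold back: the gap may still be filled in time
        return out
```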
The fast growth of Internet traffic, the growing importance of cellular accesses and the escalating competition between content providers and network operators result in a growing interest in improving network performance and user experience. In terms of network transport, different solutions ranging from tuning TCP to installing middleboxes are applied. It turns out, however, that the practical results sometimes are disappointing and we believe that poor testing is one of the reasons for this. Indeed, many cases in the literature limit testing to the simple and rare use case of a single file download, while common and complex use cases like web browsing often are ignored or modelled only by considering smaller files. To facilitate better testing, we present a set of metrics by which the complexity around web pages can be characterised and the potential for different optimisations can be estimated. We also derive numerical values of these metrics for a small set of popular web pages and study similarities and differences between pages with the same kind of content (newspapers, e-commerce and video) and between pages designed for the same platform (computer and smartphone).
The ongoing deployment of applications that transmit multimedia data makes it important for the Internet to better accommodate the service requirements of this type of applications. One approach is to provide a partially reliable service, i.e. a service that does not insist on recovering all but only some of the packet losses, thus providing less delay than a reliable transport service. This technical report describes a transport protocol that provides a partially reliable service. The protocol, called PRTP, is especially aimed at applications with soft real-time requirements. The report also presents performance results from a number of different experiments investigating different aspects of PRTP. The results indicate that transfer times can be significantly decreased when using PRTP as opposed to TCP when packets are lost in the network.
Mobile wireless networks constitute an indispensable part of the global Internet, and with TCP the dominating transport protocol on the Internet, it is vital that TCP works equally well over these networks as over wired ones. This paper identifies the performance dependencies by analyzing the responsiveness of TCP NewReno and TCP CUBIC when subject to bandwidth variations related to movements in different directions. The presented evaluation complements previous studies on 4G mobile networks in two important ways: It primarily focuses on the behavior of the TCP congestion control in medium- to high-velocity mobility scenarios, and it not only considers the current 4G mobile networks, but also low latency configurations that move towards the overall potential delays in 5G networks. The paper suggests that while both CUBIC and NewReno give similar goodput in scenarios where the radio channel continuously degrades, CUBIC gives a significantly better goodput in scenarios where the radio channel quality continuously increases. This is due to CUBIC probing more aggressively for additional bandwidth. Important for the design of 5G networks, the obtained results also demonstrate that very low latencies are capable of equalizing the goodput performance of different congestion control algorithms. Only in low latency scenarios that combine both large fluctuations of available bandwidths and a mobility pattern in which the radio channel quality continuously increases can some performance differences be noticed.
Mobile internet usage has risen significantly over the last decade and is expected to grow to almost 4 billion users by 2020. Even after the great effort dedicated to improving performance, there still exist unresolved questions and problems regarding the interaction between TCP and mobile broadband technologies such as LTE. This chapter examines the behavior of distinct TCP implementations under various network conditions in different LTE deployments, including the extent to which the performance of TCP is capable of adapting to the rapid variability of mobile networks under different network loads, with distinct flow types, during the start-up phase, and in mobile scenarios at different speeds. Loss-based algorithms tend to completely fill the queue, creating huge standing queues and inducing packet losses under both stationary and mobile circumstances. On the other hand, delay-based variants are capable of limiting the standing queue size and decreasing the number of packets that are dropped in the eNodeB, but under some circumstances they are not able to reach the maximum capacity. Similarly, under mobility, in which the radio conditions are more challenging for TCP, the loss-based TCP implementations offer better throughput and are able to better utilize available resources than the delay-based variants do. Finally, CUBIC under highly variable circumstances usually enters the congestion avoidance phase prematurely due to its use of the Hybrid Slow-Start mechanism, provoking a slower and longer start-up phase. Therefore, CUBIC is unable to efficiently utilize radio resources during shorter transmission sessions.
Nowadays, more than two billion people use the mobile internet, a number expected to rise to almost 4 billion by 2020. Still, there is a gap in the understanding of how TCP and its many variants work over LTE. To this end, this paper evaluates the extent to which five common TCP variants, CUBIC, NewReno, Westwood+, Illinois, and CAIA Delay Gradient (CDG), are able to utilise available radio resources under hard conditions, such as during start-up and in mobile scenarios at different speeds. The paper suggests that CUBIC, due to its Hybrid Slow-Start mechanism, enters congestion avoidance prematurely, and thus experiences a prolonged start-up phase, and is unable to efficiently utilise radio resources during shorter transmission sessions. Still, CUBIC, Illinois and NewReno, i.e., the loss-based TCP implementations, offer better throughput, and are able to better utilise available resources during mobility, than Westwood+ and CDG – the delay-based variants – do.
TCP BBR (Bottleneck Bandwidth and Round-trip propagation time) is a new TCP variant developed at Google, which, as of this year, is fully deployed in Google's internal WANs and used by services such as Google.com and YouTube. In contrast to other commonly used TCP variants, TCP BBR is not loss-based but model-based: It builds a model of the network path between communicating nodes in terms of bottleneck bandwidth and minimum round-trip delay and tries to operate at the point where all available bandwidth is used and the round-trip delay is at its minimum. Although TCP BBR has indeed resulted in lower latency and a more efficient usage of bandwidth in fixed networks, its performance over cellular networks is less clear. This paper studies TCP BBR in live mobile networks and through emulations, and compares its performance with TCP NewReno and TCP CUBIC, two of the most commonly used TCP variants. The results from these studies suggest that in most cases TCP BBR outperforms both TCP NewReno and TCP CUBIC, however not when the available bandwidth is scarce. In these cases, TCP BBR provides longer file completion times than either of the other two studied TCP variants. Moreover, competing TCP BBR flows do not share the available bandwidth in a fair way, something which, for example, shows up when shorter TCP BBR flows struggle to get their fair share from longer ones.
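For readers unfamiliar with the model, BBR's operating point is commonly described in terms of the bandwidth-delay product: the amount of data in flight should match the estimated bottleneck bandwidth times the minimum round-trip propagation delay (a standard description of BBR in general, not a result of this paper):

```latex
% BBR aims to keep the data in flight near the path's
% bandwidth-delay product (BDP):
\[
  \mathrm{inflight} \approx \mathrm{BDP}
  = \mathrm{BtlBw} \times \mathrm{RT_{prop}}
\]
```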
A scalable, flexible and reliable Analytics service has become a requirement toward building efficient Fifth Generation (5G) experimental platforms that can support a suite of end-user experiments and verticals. Our paper presents the challenges that come with designing such a service-based Analytics component, and shows how we have used it in the context of open experimental platforms in the 5GENESIS project. Our Analytics service was designed both for enabling the efficient setup and configuration of the underlying platform, and also for ensuring that it provides useful insights into the experimentation Key Performance Indicators (KPIs) toward the end-user. Thus, Analytics proved to be a useful tool across several stages, starting from ensuring correct operation during the initial phases of the network setup and continuing into the normal day-to-day experimentation. Our experiments show how the tool was used in our setup and provide information on how to apply it to different environments. The Analytics component, designed as a set of microservices that serve several goals in the analytics workflow, is also provided as open source, being part of the Open5Genesis suite.
Reproducibility is one of the key characteristics of good science, but hard to achieve for experimental disciplines like Internet measurements and networked systems. This guide provides advice to researchers, particularly those new to the field, on designing experiments so that their work is more likely to be reproducible and to serve as a foundation for follow-on work by others.
We provide measured data collected from 97 trains completing over 7000 journeys in Sweden showing that the throughput over LTE is impacted by train velocity. In order to explain these observations, we assume that the underlying causes can be found in the implementation of the MIMO system in LTE Rel. 8 and the diffuse scattering of signals from ground reflections.
This paper presents an experimental evaluation carried out in an academic environment. The goal of the experiment was to compare how different methods of documenting semantic information affect software reuse. More specifically, the goal was to measure if there were any differences between the methods with regard to the time needed to implement changes to existing software. Four methods of documentation were used: executable contracts, non-executable contracts, Javadoc-style documentation and sequence diagrams. The results indicate that executable contracts demanded more time than the other three methods and that sequence diagrams and Javadoc demanded the least time.