Grinnemo, Karl-Johan (ORCID iD: orcid.org/0000-0003-4147-9487)
Publications (10 of 75)
Hurtig, P., Grinnemo, K.-J., Brunström, A., Ferlin, S., Alay, Ö. & Kuhn, N. (2019). Low-Latency Scheduling in MPTCP. IEEE/ACM Transactions on Networking, 1, 302-315, Article ID 8584135.
Low-Latency Scheduling in MPTCP
2019 (English) In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 1, p. 302-315, article id 8584135. Article in journal (Refereed), Published
Abstract [en]

The demand for mobile communication is continuously increasing, and mobile devices are now the communication device of choice for many people. To guarantee connectivity and performance, mobile devices are typically equipped with multiple interfaces. To this end, exploiting multiple available interfaces is also a crucial aspect of the upcoming 5G standard for reducing costs, easing network management, and providing a good user experience. Multi-path protocols, such as multi-path TCP (MPTCP), can be used to provide performance optimization through load-balancing and resilience to coverage drops and link failures; however, they do not automatically guarantee better performance. For instance, low-latency communication has proven hard to achieve when a device has network interfaces with asymmetric capacity and delay (e.g., LTE and WLAN). For multi-path communication, the data scheduler is vital to providing low latency, since it decides over which network interface to send individual data segments. In this paper, we focus on the MPTCP scheduler with the goal of providing a good user experience for latency-sensitive applications when interface quality is asymmetric. After an initial assessment of existing scheduling algorithms, we present two novel scheduling techniques: the block estimation (BLEST) scheduler and the shortest transmission time first (STTF) scheduler. BLEST and STTF are compared with existing schedulers in both emulated and real-world environments and are shown to reduce web object transmission times by up to 51% and provide 45% faster communication for interactive applications, compared with MPTCP's default scheduler.
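
As a rough illustration of the scheduling decision STTF makes, the sketch below picks, for each segment, the subflow with the shortest estimated completion time. The transfer-time model (queued bytes over estimated rate plus half the round-trip time) and all names are simplifying assumptions for illustration, not the schedulers' actual implementation in the MPTCP stack.

```python
# Illustrative sketch only: pick the subflow expected to deliver a segment
# soonest, in the spirit of the STTF scheduler. The transfer-time model
# (queued bytes over estimated rate, plus half the RTT) and every name here
# are simplifying assumptions, not the actual MPTCP scheduler implementation.
from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    rtt_ms: float        # smoothed round-trip time of the subflow
    rate_bps: float      # estimated throughput of the subflow
    queued_bytes: int    # data already queued but not yet acknowledged

def estimated_completion_ms(sf: Subflow, segment_bytes: int) -> float:
    serialization_ms = 8 * (sf.queued_bytes + segment_bytes) / sf.rate_bps * 1000
    return serialization_ms + sf.rtt_ms / 2

def pick_subflow(subflows, segment_bytes=1448):
    return min(subflows, key=lambda sf: estimated_completion_ms(sf, segment_bytes))

if __name__ == "__main__":
    wlan = Subflow("WLAN", rtt_ms=20, rate_bps=20e6, queued_bytes=100_000)
    lte = Subflow("LTE", rtt_ms=60, rate_bps=50e6, queued_bytes=0)
    # LTE wins here despite its higher RTT, because the WLAN queue dominates
    print(pick_subflow([wlan, lte]).name)
```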

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
asymmetric paths, low-latency, MPTCP, scheduling, Transport protocols, Mobile telecommunication systems, Scheduling algorithms, Wireless telecommunication systems, Interactive applications, Low latency, Low-latency communication, Performance optimizations, Real world environments, 5G mobile communication systems
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-71263 (URN)10.1109/TNET.2018.2884791 (DOI)000458851600022 ()2-s2.0-85058877698 (Scopus ID)
Available from: 2019-02-21 Created: 2019-02-21 Last updated: 2019-03-14. Bibliographically approved
Ahlgren, B., Hurtig, P., Abrahamsson, H., Grinnemo, K.-J. & Brunström, A. (2018). ICN Congestion Control for Wireless Links. In: IEEE (Ed.), IEEE WCNC 2018 Conference Proceedings. Paper presented at IEEE Wireless Communications and Networking Conference (WCNC) 2018, Barcelona, Spain, April 16-18, 2018. New York: IEEE
ICN Congestion Control for Wireless Links
2018 (English) In: IEEE WCNC 2018 Conference Proceedings / [ed] IEEE, New York: IEEE, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

Information-centric networking (ICN), with its design around name-based forwarding and in-network caching, holds great promise to become a key architecture for the future Internet. Still, despite its attractiveness, there are many open questions that need to be answered before wireless ICN becomes a reality, not least about its congestion control: many of the proposed hop-by-hop congestion control schemes assume a fixed and known link capacity, something that rarely, if ever, holds true for wireless links. As a first step, this paper demonstrates that although these congestion control schemes are able to utilise the available wireless link capacity fairly well, they largely fail to keep the link delay down. In fact, they essentially offer the same link delay as in the case with no hop-by-hop, only end-to-end, congestion control. Secondly, the paper shows that by complementing these congestion control schemes with an easy-to-implement, packet-train link estimator, we reduce the link delay to a level significantly lower than what is obtained with only end-to-end congestion control, while still being able to keep the link utilisation at a high level.
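
To illustrate the packet-train idea mentioned above, the sketch below estimates link capacity from the dispersion of a train of back-to-back packets and smooths the per-train samples. The dispersion formula is the standard one; the smoothing, names, and structure are assumptions for illustration, not the estimator proposed in the paper.

```python
# Illustrative sketch of the packet-train idea: approximate the wireless link
# capacity from the dispersion (arrival spacing) of a burst of back-to-back
# packets, and smooth per-train samples with an EWMA. The names, the EWMA
# factor, and the surrounding structure are assumptions for illustration,
# not the estimator proposed in the paper.
def train_capacity_bps(arrival_times_s, packet_size_bytes):
    """Capacity estimate from one packet train: (N-1) * L * 8 / dispersion."""
    if len(arrival_times_s) < 2:
        raise ValueError("a train needs at least two packets")
    dispersion_s = arrival_times_s[-1] - arrival_times_s[0]
    payload_bits = 8 * packet_size_bytes * (len(arrival_times_s) - 1)
    return payload_bits / dispersion_s

class SmoothedLinkEstimate:
    """EWMA over per-train samples, to ride out short-term wireless rate swings."""
    def __init__(self, alpha=0.25):
        self.alpha = alpha
        self.capacity_bps = None

    def update(self, sample_bps):
        if self.capacity_bps is None:
            self.capacity_bps = sample_bps
        else:
            self.capacity_bps += self.alpha * (sample_bps - self.capacity_bps)
        return self.capacity_bps
```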

Place, publisher, year, edition, pages
New York: IEEE, 2018
Keywords
ICN, information-centric networks, congestion control, wireless communication, wireless networks
National Category
Telecommunications
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-65420 (URN)10.1109/WCNC.2018.8377396 (DOI)000435542402117 ()
Conference
IEEE Wireless Communications and Networking Conference (WCNC) 2018, Barcelona, Spain, April 16-18, 2018
Projects
Research Environment for Advancing Low Latency Internet (READY)
Funder
Knowledge Foundation
Available from: 2017-12-17 Created: 2017-12-17 Last updated: 2018-10-18. Bibliographically approved
Oljira, D. B., Grinnemo, K.-J., Taheri, J. & Brunström, A. (2018). MDTCP: Towards a Practical Multipath Transport Protocol for Telco Cloud Datacenters. In: 9th International Conference on the Network of the Future (NOF). Paper presented at the 9th International Conference on the Network of the Future (NOF), November 19-21, 2018 (pp. 9-16). IEEE
MDTCP: Towards a Practical Multipath Transport Protocol for Telco Cloud Datacenters
2018 (English) In: 9th International Conference on the Network of the Future (NOF), IEEE, 2018, p. 9-16. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IEEE, 2018
Keywords
Network congestion, MPTCP, ECN, TCP, 5G, Telco cloud, NFV, latency, cloud, datacenter
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-67241 (URN)10.1109/NOF.2018.8598129 (DOI)000458801700002 ()978-1-5386-8503-7 (ISBN)
Conference
9th International Conference on the Network of the Future (NOF), November 19-21, 2018
Projects
HITS
Available from: 2018-04-30 Created: 2018-04-30 Last updated: 2019-03-14. Bibliographically approved
Nguyen, V.-G., Grinnemo, K.-J., Taheri, J. & Brunström, A. (2018). On Load Balancing for a Virtual and Distributed MME in the 5G Core. In: 2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). Paper presented at the 2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC).
On Load Balancing for a Virtual and Distributed MME in the 5G Core
2018 (English) In: 2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2018. Conference paper (Refereed)
Abstract [en]

In this paper, we aim to tackle the scalability problem of the Mobility Management Entity (MME), which plays a crucial role in handling control plane traffic in the current 4G Evolved Packet Core as well as the next-generation mobile core, 5G. One of the solutions to this problem is to virtualize the MME by applying Network Function Virtualization principles and then deploy it as a cluster of multiple virtual MME instances (vMMEs) with a front-end load balancer. Although several designs have been proposed, most of them assume the use of simple algorithms such as random and round-robin to balance the incoming traffic, without any performance assessment. To this end, we implemented a weighted round-robin algorithm which takes into account the heterogeneity of resources, such as the capacity of the vMMEs. We compare this algorithm with a random and a round-robin algorithm under two different system settings. Experimental results suggest that carefully selected load balancing algorithms can significantly reduce the control plane latency compared to simple random or round-robin schemes.
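
As a rough illustration of capacity-aware balancing, the sketch below implements a smooth weighted round-robin dispatcher where each vMME's weight reflects its relative capacity. The interface and weights are illustrative assumptions, not the implementation evaluated in the paper.

```python
# Illustrative sketch: a smooth weighted round-robin dispatcher that spreads
# attach requests over vMME instances in proportion to a capacity weight.
# The weights and the interface are assumptions, not the paper's implementation.
class WeightedRoundRobin:
    def __init__(self, weights):
        # weights: dict mapping vMME identifier -> relative capacity
        self.weights = dict(weights)
        self.current = {vmme: 0 for vmme in self.weights}

    def next_vmme(self):
        total = sum(self.weights.values())
        for vmme, weight in self.weights.items():
            self.current[vmme] += weight
        chosen = max(self.current, key=self.current.get)
        self.current[chosen] -= total
        return chosen

if __name__ == "__main__":
    lb = WeightedRoundRobin({"vmme-large": 3, "vmme-small": 1})
    print([lb.next_vmme() for _ in range(8)])
    # the larger instance receives roughly three of every four requests
```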

Keywords
5G, MME, Load Balancing, Scalability, Open5GCore
National Category
Computer Sciences, Telecommunications
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-67242 (URN)10.1109/PIMRC.2018.8580693 (DOI)000457761900011 ()978-1-5386-6010-2 (ISBN)978-1-5386-6009-6 (ISBN)
Conference
2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)
Projects
HITS, 4707
Funder
Knowledge Foundation
Available from: 2018-04-30 Created: 2018-04-30 Last updated: 2019-02-21
Atxutegi, E., Liberal, F., Haile, H. K., Grinnemo, K.-J., Brunström, A. & Arvidsson, Å. (2018). On the use of TCP BBR in cellular networks. IEEE Communications Magazine (3), 172-179
On the use of TCP BBR in cellular networks
2018 (English) In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, no. 3, p. 172-179. Article in journal (Refereed) Published
Abstract [en]

TCP BBR (Bottleneck Bandwidth and Round-trip propagation time) is a new TCP variant developed at Google, and which, as of this year, is fully deployed in Google's internal WANs and used by services such as Google.com and YouTube. In contrast to other commonly used TCP variants, TCP BBR is not loss-based but model-based: it builds a model of the network path between communicating nodes in terms of bottleneck bandwidth and minimum round-trip delay and tries to operate at the point where all available bandwidth is used and the round-trip delay is at its minimum. Although TCP BBR has indeed resulted in lower latency and more efficient usage of bandwidth in fixed networks, its performance over cellular networks is less clear. This paper studies TCP BBR in live mobile networks and through emulations, and compares its performance with TCP NewReno and TCP CUBIC, two of the most commonly used TCP variants. The results from these studies suggest that in most cases TCP BBR outperforms both TCP NewReno and TCP CUBIC; however, this does not hold when the available bandwidth is scarce. In these cases, TCP BBR provides longer file completion times than either of the other two studied TCP variants. Moreover, competing TCP BBR flows do not share the available bandwidth in a fair way, something which, for example, shows up when shorter TCP BBR flows struggle to get their fair share from longer ones.
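
As a rough illustration of the model TCP BBR builds, the sketch below tracks the maximum recent delivery rate and the minimum recent round-trip time and derives the bandwidth-delay product around which BBR tries to keep data in flight. Filter lengths and the gain are simplified assumptions, not the Linux TCP BBR implementation.

```python
# Illustrative sketch of the model BBR maintains: the maximum recent delivery
# rate approximates the bottleneck bandwidth, the minimum recent RTT the
# propagation delay, and their product gives the bandwidth-delay product (BDP)
# around which BBR paces. Filter lengths and the gain are simplified
# assumptions, not the Linux TCP BBR implementation.
from collections import deque

class BbrModel:
    def __init__(self, bw_window=10, rtt_window=50):
        self.bw_samples = deque(maxlen=bw_window)    # delivery-rate samples (bit/s)
        self.rtt_samples = deque(maxlen=rtt_window)  # round-trip samples (seconds)

    def on_ack(self, delivery_rate_bps, rtt_s):
        self.bw_samples.append(delivery_rate_bps)
        self.rtt_samples.append(rtt_s)

    def btl_bw_bps(self):
        return max(self.bw_samples) if self.bw_samples else 0.0

    def min_rtt_s(self):
        return min(self.rtt_samples) if self.rtt_samples else float("inf")

    def bdp_bytes(self):
        if not self.bw_samples or not self.rtt_samples:
            return 0.0
        return self.btl_bw_bps() * self.min_rtt_s() / 8

    def inflight_cap_bytes(self, gain=2.0):
        # BBR bounds the data in flight to a small multiple of the BDP
        return gain * self.bdp_bytes()

# Example: a 50 Mbit/s bottleneck with a 40 ms minimum RTT gives a BDP of
# 50e6 * 0.040 / 8 = 250 kB, so roughly 500 kB in flight with a gain of 2.
```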

Place, publisher, year, edition, pages
New York, USA: Institute of Electrical and Electronics Engineers (IEEE), 2018
Keywords
tcp, bbr, cubic, newreno, 4g, lte, mobile, congestion control, Bandwidth, Mobile telecommunication systems, Wireless networks, Available bandwidth, Bottleneck bandwidth, Cellular network, Fixed networks, Model-based OPC, Network condition, Network paths, Round trip delay, Transmission control protocol
National Category
Telecommunications
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-65339 (URN)10.1109/MCOM.2018.1700725 (DOI)2-s2.0-85044079664 (Scopus ID)
Projects
COST-IC1304
Funder
EU, Horizon 2020, IC1304
Available from: 2017-12-09 Created: 2017-12-09 Last updated: 2018-06-04. Bibliographically approved
Nguyen, V.-G., Brunström, A., Grinnemo, K.-J. & Taheri, J. (2018). SDN helps velocity in Big Data (1 ed.). In: Javid Taheri (Ed.), Big Data and Software Defined Networks (pp. 207-228). London: The Institution of Engineering and Technology
SDN helps velocity in Big Data
2018 (English) In: Big Data and Software Defined Networks / [ed] Javid Taheri, London: The Institution of Engineering and Technology, 2018, 1, p. 207-228. Chapter in book (Refereed)
Abstract [en]

Currently, improving the performance of Big Data in general, and velocity in particular, is challenging due to the inefficiency of current network management and the lack of coordination between the application layer and the network layer needed to achieve better scheduling decisions. In this chapter, we discuss the role of the recently emerged software-defined networking (SDN) technology in helping the velocity dimension of Big Data. We start the chapter by providing a brief introduction to Big Data velocity, its characteristics, and the different modes of Big Data processing, followed by a brief explanation of how SDN can overcome the challenges of Big Data velocity. In the second part of the chapter, we describe in detail some proposed solutions that have applied SDN to improve Big Data performance in terms of shortened processing time in different Big Data processing frameworks, ranging from batch-oriented, MapReduce-based frameworks to real-time and stream-processing frameworks such as Spark and Storm. Finally, we conclude the chapter with a discussion of some open issues.

Place, publisher, year, edition, pages
London: The Institution of Engineering and Technology, 2018, Edition: 1
Keywords
Big Data; telecommunication scheduling; parallel processing; software defined networking
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-67213 (URN)10.1049/PBPC015E_ch10 (DOI)978-1-78561-304-3 (ISBN)978-1-78561-305-0 (ISBN)
Available from: 2018-04-27 Created: 2018-04-27 Last updated: 2018-06-25. Bibliographically approved
Atxutegi, E., Liberal, F., Grinnemo, K.-J., Brunström, A. & Arvidsson, Å. (2018). TCP Performance over Current Cellular Access: A Comprehensive Analysis. In: Autonomous Control for a Reliable Internet of Services: Methods, Models, Approaches, Techniques, Algorithms and Tools (pp. 371-400). Springer
TCP Performance over Current Cellular Access: A Comprehensive Analysis
2018 (English) In: Autonomous Control for a Reliable Internet of Services: Methods, Models, Approaches, Techniques, Algorithms and Tools, Springer, 2018, p. 371-400. Chapter in book (Refereed)
Abstract [en]

Mobile Internet usage has risen significantly over the last decade and is expected to grow to almost 4 billion users by 2020. Even after the great effort dedicated to improving performance, there still exist unresolved questions and problems regarding the interaction between TCP and mobile broadband technologies such as LTE. This chapter collects the behavior of distinct TCP implementations under various network conditions in different LTE deployments, including the extent to which TCP is capable of adapting to the rapid variability of mobile networks under different network loads, with distinct flow types, during the start-up phase, and in mobile scenarios at different speeds. Loss-based algorithms tend to completely fill the queue, creating huge standing queues and inducing packet losses under both stationary and mobile conditions. On the other hand, delay-based variants are capable of limiting the standing queue size and decreasing the number of packets that are dropped at the eNodeB, but under some circumstances they are not able to reach the maximum capacity. Similarly, under mobility, where the radio conditions are more challenging for TCP, the loss-based TCP implementations offer better throughput and are able to better utilize the available resources than the delay-based variants do. Finally, CUBIC under highly variable circumstances usually enters the congestion avoidance phase prematurely, causing a slower and longer start-up phase due to its use of the Hybrid Slow-Start mechanism. Therefore, CUBIC is unable to efficiently utilize radio resources during shorter transmission sessions.
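
As a rough illustration of why Hybrid Slow-Start can end slow start early on a variable cellular link, the sketch below shows a delay-increase exit check of the kind HyStart uses. The threshold and sample count are simplified assumptions, not the actual CUBIC/HyStart code.

```python
# Illustrative sketch of the delay-increase heuristic in Hybrid Slow-Start:
# leave slow start once the RTT samples at the start of a round exceed the
# recent minimum RTT by more than a threshold. The threshold and sample count
# are simplified assumptions, not the CUBIC/HyStart code; under the highly
# variable delay of a cellular link this kind of check can fire early, which
# matches the premature exit described above.
def should_exit_slow_start(round_rtts_ms, base_rtt_ms, eta_ms=4.0, n_samples=8):
    """True if the first n_samples RTTs of the round all exceed base RTT + eta."""
    recent = round_rtts_ms[:n_samples]
    if len(recent) < n_samples:
        return False
    return min(recent) >= base_rtt_ms + eta_ms

# Example: with a 40 ms base RTT, LTE scheduling jitter that pushes early-round
# RTTs to 46-50 ms would end slow start even though no queue has built up.
```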

Place, publisher, year, edition, pages
Springer, 2018
Series
Lecture Notes in Computer Science
Keywords
TCP adaptability, LTE, flow size, Slow-Start, mobility
National Category
Telecommunications
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-64631 (URN)10.1007/978-3-319-90415-3_14 (DOI)978-3-319-90414-6 (ISBN)
Projects
COST-IC1304
Funder
EU, Horizon 2020, IC1304
Available from: 2017-10-09 Created: 2017-10-09 Last updated: 2018-06-25. Bibliographically approved
Eklund, J., Grinnemo, K.-J. & Brunström, A. (2018). Using multiple paths in SCTP to reduce latency for signaling traffic. Computer Communications, 129, 184-196
Using multiple paths in SCTP to reduce latency for signaling traffic
2018 (English) In: Computer Communications, ISSN 0140-3664, E-ISSN 1873-703X, Vol. 129, p. 184-196. Article in journal (Refereed) Published
Abstract [en]

The increase in traffic volumes as well as the heterogeneity of network infrastructure in the upcoming 5G cellular networks will lead to a dramatic increase in volumes of control traffic, i.e., signaling traffic, in the networks. Moreover, the increasing number of low-power devices with an on-off behavior to save energy will generate extra control traffic. These increased volumes of signaling traffic, often generated as bursts of messages, will challenge the timing requirements that signaling applications place on transmission. One of the major transport protocols deployed for signaling traffic in cellular networks is the Stream Control Transmission Protocol (SCTP), with support for multiple paths as well as for independent data flows. This paper evaluates transmission over several paths in SCTP to keep latency low despite increasing traffic volumes. We explore different transmission strategies and find that concurrent multipath transfer over several paths significantly reduces latency for transmission over network paths with the same or similar delay. Still, over heterogeneous paths, careful, continuous sender scheduling is crucial to keep latency low. To this end, we design and evaluate a sender scheduler that considers path characteristics as well as queuing status and data flows of different priority when making scheduling decisions. Our results indicate that with careful dynamic sender scheduling, concurrent multipath transfer can lead to reduced latency for signaling traffic irrespective of path or traffic characteristics.
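
As a rough illustration of path-aware sender scheduling for concurrent multipath transfer, the sketch below serves higher-priority signaling messages first and maps each message to the path expected to deliver it soonest. The scoring model and all names are assumptions for illustration, not the scheduler designed in the paper.

```python
# Illustrative sketch of a priority- and path-aware sender scheduler for
# concurrent multipath transfer: urgent signaling messages are served first,
# and each message is mapped to the path expected to deliver it soonest given
# its RTT and current queue. The scoring model and all names are assumptions
# for illustration, not the scheduler designed in the paper.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Message:
    priority: int                       # lower value = more urgent signaling message
    size_bytes: int = field(compare=False)

@dataclass
class Path:
    name: str
    rtt_ms: float
    rate_bps: float
    queued_bytes: int = 0

def expected_delivery_ms(path: Path, msg: Message) -> float:
    queueing_ms = 8 * (path.queued_bytes + msg.size_bytes) / path.rate_bps * 1000
    return path.rtt_ms / 2 + queueing_ms

def schedule(messages, paths):
    heapq.heapify(messages)             # highest-priority (lowest value) first
    plan = []
    while messages:
        msg = heapq.heappop(messages)
        best = min(paths, key=lambda p: expected_delivery_ms(p, msg))
        best.queued_bytes += msg.size_bytes   # account for the queue build-up
        plan.append((msg.priority, msg.size_bytes, best.name))
    return plan
```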

Place, publisher, year, edition, pages
Elsevier, 2018
National Category
Computer Engineering, Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-70258 (URN)10.1016/j.comcom.2018.07.016 (DOI)000446282200015 ()
Available from: 2018-11-22 Created: 2018-11-22 Last updated: 2019-01-31. Bibliographically approved
Oljira, D. B., Grinnemo, K.-J., Taheri, J. & Brunström, A. (2017). A Model for QoS-Aware VNF Placement and Provisioning. In: IEEE (Ed.), Network Function Virtualization and Software Defined Networks (NFV-SDN), 2017 IEEE Conference on. Paper presented at the IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Berlin, Germany, November 2017. IEEE
A Model for QoS-Aware VNF Placement and Provisioning
2017 (English) In: Network Function Virtualization and Software Defined Networks (NFV-SDN), 2017 IEEE Conference on / [ed] IEEE, IEEE, 2017. Conference paper, Published paper (Refereed)
Abstract [en]

Network Function Virtualization (NFV) is a promising solution for telecom operators and service providers to improve business agility by enabling fast deployment of new services and by making it possible for them to cope with the increasing traffic volume and service demand. NFV enables virtualization of network functions that can be deployed as virtual machines on general-purpose server hardware in cloud environments, effectively reducing deployment and operational costs. To benefit from the advantages of NFV, virtual network functions (VNFs) need to be provisioned with sufficient resources and perform without impacting network quality of service (QoS). To this end, this paper proposes a model for VNF placement and provisioning optimization while guaranteeing the latency requirements of the service chains. Our goal is to optimize resource utilization in order to reduce cost while satisfying QoS requirements such as end-to-end latency. We extend a related VNF placement optimization with a fine-grained latency model that includes virtualization overhead. The model is evaluated on a simulated network and provides placement solutions that ensure the required QoS guarantees.
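
As a rough illustration of the kind of constraint such a model enforces, the sketch below checks that a service chain's end-to-end latency (per-VNF processing plus a per-node virtualization overhead plus inter-node link latency) stays within its budget and picks the cheapest feasible placement by brute force. All names and figures are made-up assumptions, not the paper's optimization model.

```python
# Illustrative sketch of a latency-constrained placement check: the end-to-end
# latency of a service chain (per-VNF processing plus a per-node virtualization
# overhead, plus link latency between consecutive nodes) must stay within the
# chain's budget, and among the feasible placements the cheapest is chosen.
# All figures and names are made-up assumptions, not the paper's model.
from itertools import product

def chain_latency_ms(placement, vnf_proc_ms, virt_overhead_ms, link_ms):
    total = 0.0
    for i, node in enumerate(placement):
        total += vnf_proc_ms[i] + virt_overhead_ms[node]
        if i > 0 and node != placement[i - 1]:
            total += link_ms[frozenset((placement[i - 1], node))]
    return total

def cheapest_feasible_placement(n_vnfs, nodes, node_cost, budget_ms,
                                vnf_proc_ms, virt_overhead_ms, link_ms):
    best = None
    for placement in product(nodes, repeat=n_vnfs):   # brute force, small instances only
        latency = chain_latency_ms(placement, vnf_proc_ms, virt_overhead_ms, link_ms)
        if latency > budget_ms:
            continue
        cost = sum(node_cost[node] for node in placement)
        if best is None or cost < best[0]:
            best = (cost, placement, latency)
    return best

if __name__ == "__main__":
    best = cheapest_feasible_placement(
        n_vnfs=3, nodes=["n1", "n2"], node_cost={"n1": 2.0, "n2": 1.0}, budget_ms=10.0,
        vnf_proc_ms=[1.0, 2.0, 1.0], virt_overhead_ms={"n1": 0.5, "n2": 1.0},
        link_ms={frozenset(("n1", "n2")): 3.0})
    print(best)   # cheapest placement that meets the 10 ms chain budget
```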

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
NFV, QoS, VNF, placement, provisioning, virtualization, network, network function virtualization
National Category
Telecommunications
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-62556 (URN)10.1109/NFV-SDN.2017.8169829 (DOI)978-1-5386-3285-7 (ISBN)978-1-5386-3286-4 (ISBN)
Conference
IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Berlin, Germany, November 2017
Projects
High Quality Networked Services in a Mobile World (HITS)
Funder
Knowledge Foundation
Available from: 2017-07-29 Created: 2017-07-29 Last updated: 2018-06-04. Bibliographically approved
Oljira, D. B., Grinnemo, K.-J., Taheri, J. & Brunström, A. (2017). A Model for QoS-Aware VNF Placement and Provisioning. In: 2017 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN) (pp. 46-52). New York: IEEE
A Model for QoS-Aware VNF Placement and Provisioning
2017 (English) In: 2017 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), New York: IEEE, 2017, p. 46-52. Chapter in book (Other academic)
Abstract [en]

Network Function Virtualization (NFV) is a promising solution for telecom operators and service providers to improve business agility by enabling fast deployment of new services and by making it possible for them to cope with the increasing traffic volume and service demand. NFV enables virtualization of network functions that can be deployed as virtual machines on general-purpose server hardware in cloud environments, effectively reducing deployment and operational costs. To benefit from the advantages of NFV, virtual network functions (VNFs) need to be provisioned with sufficient resources and perform without impacting network quality of service (QoS). To this end, this paper proposes a model for VNF placement and provisioning optimization while guaranteeing the latency requirements of the service chains. Our goal is to optimize resource utilization in order to reduce cost while satisfying QoS requirements such as end-to-end latency. We extend a related VNF placement optimization with a fine-grained latency model that includes virtualization overhead. The model is evaluated on a simulated network and provides placement solutions that ensure the required QoS guarantees.

Place, publisher, year, edition, pages
New York: IEEE, 2017
Series
2017 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN)
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:kau:diva-66885 (URN)000426936400007 ()978-1-5386-3285-7 (ISBN)
Available from: 2018-03-29 Created: 2018-03-29 Last updated: 2018-06-26. Bibliographically approved