Reinforcement learning for autonomous vehicle movements in wireless multimedia applications
Hasso Plattner Institute, Germany.
Karlstad University, Faculty of Health, Science and Technology (starting 2013), Department of Mathematics and Computer Science (from 2013). ORCID iD: 0000-0001-7547-8111
Hasso Plattner Institute, Germany.
2023 (English). In: Pervasive and Mobile Computing, ISSN 1574-1192, E-ISSN 1873-1589, Vol. 92, article id 101799. Article in journal (Refereed). Published.
Abstract [en]

We develop a Deep Reinforcement Learning (DeepRL)-based, multi-agent algorithm to efficiently control autonomous vehicles that are typically used within the context of Wireless Sensor Networks (WSNs), in order to boost application performance. As an application example, we consider wireless acoustic sensor networks where a group of speakers moves inside a room. In a traditional setup, microphones cannot move autonomously and are, for example, placed at fixed positions. We claim that autonomously moving microphones improve the application performance. To control these movements, we compare simple greedy heuristics against a DeepRL solution and show that the latter achieves the best application performance. As the range of audio applications is broad and each has its own (subjective) performance metric, we replace those application metrics with two immediately observable ones: First, quality of information (QoI), which is used to measure the quality of sensed data (e.g., audio signal strength). Second, quality of service (QoS), which is used to measure the network's performance when forwarding data (e.g., delay). In this context, we propose two multi-agent solutions (where one agent controls one microphone) and show that they perform similarly to a single-agent solution (where one agent controls all microphones and has global knowledge). Moreover, we show via simulations and theoretical analysis how other parameters such as the number of microphones and their speed impact performance.
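
The record does not include implementation details, but the per-microphone multi-agent idea described in the abstract can be sketched briefly. Below is a minimal, illustrative Python version: each agent observes its own microphone's position, chooses a movement, and is rewarded by a weighted mix of a QoI proxy (closeness to the speaker, standing in for audio signal strength) and a QoS proxy (closeness to the sink, standing in for low forwarding delay). A linear Q-approximation stands in for the deep network, and the grid-world environment, reward weights, observation layout, and all names are assumptions made for illustration, not the authors' actual setup.

```python
import numpy as np

# Candidate movements for a microphone on a 2-D plane: up, down, right, left, stay.
ACTIONS = np.array([[0.0, 1.0], [0.0, -1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, 0.0]])

def reward(mic_pos, speaker_pos, sink_pos, w_qoi=0.7, w_qos=0.3):
    """Weighted mix of a QoI proxy and a QoS proxy (weights are assumed, not from the paper)."""
    qoi = -np.linalg.norm(mic_pos - speaker_pos)  # closer to the speaker ~ stronger audio signal
    qos = -np.linalg.norm(mic_pos - sink_pos)     # closer to the sink ~ lower forwarding delay
    return w_qoi * qoi + w_qos * qos

class LinearQAgent:
    """One agent per microphone; a linear Q-function stands in for the deep network."""
    def __init__(self, obs_dim, n_actions, lr=0.01, gamma=0.95, eps=0.1):
        self.w = np.zeros((n_actions, obs_dim))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def q(self, obs):
        return self.w @ obs                        # one Q-value per action

    def act(self, obs):
        if np.random.rand() < self.eps:            # epsilon-greedy exploration
            return np.random.randint(len(ACTIONS))
        return int(np.argmax(self.q(obs)))

    def update(self, obs, action, r, next_obs):
        target = r + self.gamma * np.max(self.q(next_obs))
        td_error = target - self.q(obs)[action]
        self.w[action] += self.lr * td_error * obs  # semi-gradient Q-learning step

def run_episode(agents, n_steps=200, room=10.0):
    """Each agent independently controls one microphone; the speaker position is re-drawn each step."""
    mics = np.random.uniform(0.0, room, size=(len(agents), 2))
    sink = np.array([room / 2, room / 2])          # data sink in the middle of the room
    total = 0.0
    for _ in range(n_steps):
        speaker = np.random.uniform(0.0, room, size=2)
        for i, agent in enumerate(agents):
            obs = np.concatenate([mics[i], speaker, sink])   # local observation only
            a = agent.act(obs)
            mics[i] = np.clip(mics[i] + ACTIONS[a], 0.0, room)
            r = reward(mics[i], speaker, sink)
            agent.update(obs, a, r, np.concatenate([mics[i], speaker, sink]))
            total += r
    return total

if __name__ == "__main__":
    agents = [LinearQAgent(obs_dim=6, n_actions=len(ACTIONS)) for _ in range(3)]
    for episode in range(50):
        print(episode, round(run_episode(agents), 1))
```

Swapping LinearQAgent for a small neural network and the random-walk speaker for a realistic acoustic and network simulator would bring the sketch closer to what the abstract describes; the independent per-agent structure mirrors the abstract's observation that one-agent-per-microphone control performs comparably to a single agent with global knowledge.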

Place, publisher, year, edition, pages
Elsevier, 2023. Vol. 92, article id 101799
Keywords [en]
Wireless sensor networks, Reinforcement learning, Quality of service, Quality of information, Unmanned vehicles
National Category
Signal Processing; Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kau:diva-96027
DOI: 10.1016/j.pmcj.2023.101799
ISI: 001010863600001
OAI: oai:DiVA.org:kau-96027
DiVA, id: diva2:1780590
Funder
German Research Foundation (DFG), 282835863
Available from: 2023-07-06 Created: 2023-07-06 Last updated: 2023-07-06
Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Authority records

Ramaswamy, Arunselvan
