We develop a Deep Reinforcement Learning (DeepRL)-based, multi-agent algorithm to efficiently control the autonomous vehicles that are typically used within the context of Wireless Sensor Networks (WSNs), in order to boost application performance. As an application example, we consider wireless acoustic sensor networks in which a group of speakers moves inside a room. In a traditional setup, microphones cannot move autonomously and are, for example, placed at fixed positions. We claim that autonomously moving microphones improve application performance. To control these movements, we compare simple greedy heuristics against a DeepRL solution and show that the latter achieves the best application performance. Since the range of audio applications is broad and each has its own (subjective) performance metric, we replace those application metrics with two immediately observable ones: first, quality of information (QoI), which measures the quality of the sensed data (e.g., audio signal strength); second, quality of service (QoS), which measures the network's performance when forwarding data (e.g., delay). In this context, we propose two multi-agent solutions (where each agent controls one microphone) and show that they perform similarly to a single-agent solution (where one agent controls all microphones and has global knowledge). Moreover, we show via simulations and theoretical analysis how other parameters, such as the number of microphones and their speed, impact performance.