Spark has emerged as one of the most widely and successfully used data analytics engines for large-scale enterprise workloads, mainly due to its unique characteristics that allow computations to be scaled out in a distributed environment. This paper addresses the performance degradation caused by resource contention among collocated analytical applications with different priorities and dissimilar intrinsic characteristics on a shared Spark platform. We propose Spark-Tuner, an auto-tuning strategy for computing resources in a distributed Spark platform that handles scenarios in which submitted analytical applications have different quality-of-service (QoS) requirements (e.g., latency constraints), while treating interference among computing resources as a key performance-limiting factor. We compared Spark-Tuner to two widely used resource-allocation heuristics in a large-scale Spark cluster through extensive experiments spanning several traffic patterns with uncertain rates and application types. Experimental results show that with Spark-Tuner, the Spark engine decreases the $p$-99 latency of high-priority applications by 43% during high-rate traffic periods, while maintaining the same level of CPU throughput across the cluster.