Over the last decades, cloud computing has enabled the centralization of computing, storage and network management in cloud data centres, cellular core networks and backbone IP networks. Virtually unlimited computing and storage resources can be used to provide elastic cloud services to a vast range of customers, from organizations operating hundreds of servers to resource-constrained end users. Despite its numerous advantages, cloud computing faces increasing limitations, such as considerable communication delay, jitter and network traffic, caused by the distance between data centres and end users. Edge computing has emerged to shift the traditional cloud model towards a decentralized paradigm that can meet the strict latency, mobility and localization requirements of new systems and applications (e.g. the Internet of Things (IoT)). Edge computing places content and resources close to end users to ensure a better user experience. Edge computing is still under development and encounters many challenges, such as network architecture design, fault tolerance, distributed service management, cloud-edge interoperability and resource allocation. In this chapter, we present an overview of existing resource allocation models proposed for and leveraged in edge computing. Specifically, we explore the key concepts of resource allocation in edge computing and investigate recent research conducted in this domain along with its optimization models. Finally, we discuss challenges and open issues.