This study explores the impact of Large Language Models (LLMs) on external audits and their associated ethical implications. A small-scale survey was conducted with auditors from non-Big Four firms to assess their general perceptions of LLMs, followed by a qualitative evaluation of external LLMs in audit-specific tasks. In the latter, ChatGPT's responses to audit-related scenarios were assessed by experienced audit partners, who rated and commented on the outputs without knowing their source. The findings indicate that while LLMs efficiently perform routine tasks such as generating human-like responses and preparing basic audit working papers and reports, external LLMs struggle to produce comprehensive, audit-specific reports. Non-Big Four auditors recognise LLMs' time-saving potential and relevance in audit planning; however, concerns persist regarding the comprehensiveness and contextual relevance of external LLM-generated risk assessments and interpretations of auditing standards. Moreover, limitations inherent in external LLMs, such as outdated information and hallucinations, necessitate auditor oversight. Ethical concerns identified include threats to auditor objectivity, confidentiality, privacy, accountability, and intellectual property rights. The study reinforces that while LLMs can enhance audit efficiency, they should complement rather than replace auditors. Their successful integration into external audits requires prompt engineering, regulatory guidance, and auditor oversight. These findings contribute to the growing body of research on LLMs in auditing and provide insights for audit firms considering their adoption.