OBJECTIVE: Artificial intelligence (AI) offers opportunities for managing the complexities of clinical care in the emergency department (ED), and Clinical Decision Support has been identified as a priority application. However, there is a lack of published guidance on how to rigorously develop and evaluate these tools. We sought to answer the question, "What methodological standards should be applied to the development of AI-based Clinical Decision Support tools in the ED?". METHODS: We conducted an iterative consensus-establishing activity involving a subcommittee with AI expertise, followed by surveys and a live facilitated discussion with participants of the 2024 Canadian Association of Emergency Physicians Research Symposium in Saskatoon. We augmented analysis of participant feedback with large language models. RESULTS: We established 11 recommendations for the development of AI-based Clinical Decision Support, including the selection of a relevant problem and team of experts, standards of data quality and quantity, novel AI-specific reporting guidelines, and adherence to principles of ethics and privacy. We removed the recommendation regarding model interpretability from the final list due to a lack of consensus. CONCLUSION: These 11 recommendations provide guiding principles and methodological standards for emergency medicine researchers to rigorously develop AI-based Clinical Decision Support tools and for clinicians to gain knowledge and trust in using them.