Digital subtraction angiography (DSA) is an essential diagnostic tool for analyzing and diagnosing vascular diseases. However, because DSA relies on subtraction, it is prone to artifacts arising from misalignments between mask and contrast images caused by inevitable patient movements, which hinder accurate vessel identification and surgical treatment. Various registration-based algorithms aim to correct these misalignments, but they often fall short in efficiency and effectiveness. Recent deep learning (DL)-based studies instead generate synthetic DSA images directly from contrast images, without subtraction. However, these methods typically require clean, motion-free training data, which are difficult to acquire in clinical settings; existing DSA images often contain motion artifacts, complicating the development of models that generate artifact-free images. In this work, we propose an innovative Artifact-aware DSA image generation method (AaDSA) that is trained solely on motion-affected data yet produces artifact-free DSA images without subtraction. Our method employs a Gradient Field Transformation (GFT)-based technique to create an artifact mask that identifies artifact regions in DSA images with minimal manual annotation. This artifact mask guides the training of the AaDSA model, allowing it to bypass the adverse effects of artifact regions during training. At inference time, the AaDSA model automatically generates artifact-free DSA images from single contrast images without any human intervention. Experimental results on a real head-and-neck DSA dataset show that our approach significantly outperforms state-of-the-art methods, highlighting its potential for clinical use.
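To make the mask-guided training idea concrete, the sketch below illustrates one common way an artifact mask can exclude corrupted regions from a reconstruction loss. It is a minimal illustration, not the paper's implementation: the function names, the masked-L1 formulation, and the network call are assumptions introduced only for exposition.

```python
import torch

def masked_l1_loss(pred, target, artifact_mask, eps=1e-8):
    """Illustrative masked reconstruction loss.

    pred, target:  (B, 1, H, W) synthetic and reference DSA images.
    artifact_mask: (B, 1, H, W) binary map, 1 = artifact-free pixel,
                   0 = motion-artifact pixel to be ignored.
    """
    valid = artifact_mask.float()
    # Only artifact-free pixels contribute, so the generator is not
    # penalized for disagreeing with corrupted reference pixels.
    return (valid * (pred - target).abs()).sum() / (valid.sum() + eps)

# Hypothetical training step (generator, data, and mask are assumed):
# dsa_pred = generator(contrast_image)           # single contrast frame in
# loss = masked_l1_loss(dsa_pred, dsa_reference, artifact_mask)
# loss.backward()
```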