This paper introduces Lightv8nPnP, a lightweight visual positioning algorithm that makes deep learning-based drone visual positioning more efficient. The core objective of this research is to develop an efficient visual positioning model that achieves accurate 3D positioning for drones. To enhance model performance, three optimizations are proposed. First, to reduce the complexity of the detection head, GhostConv is introduced into it, yielding the GDetect detection head module. Second, to address imbalanced sample difficulty and uneven pixel quality in our custom dataset, which degrade detection performance, Wise-IoU is adopted as the model's bounding-box regression loss function. Third, based on the characteristics of the drone aerial dataset, the YOLOv8n network structure is modified to prune redundant feature maps, producing the TrimYOLO network structure. Experimental results demonstrate that, compared with the benchmark algorithms, Lightv8nPnP reduces both parameter count and computational load, achieves a detection rate of 186 frames per second, and keeps the positioning error below 5.5 centimeters along each of the X, Y, and Z axes in three-dimensional space.
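To illustrate the bounding-box regression loss mentioned above, the following is a minimal pure-Python sketch of the Wise-IoU (v1) formulation, which scales the standard IoU loss by a distance-based attention factor computed from the smallest enclosing box. The box layout `(x1, y1, x2, y2)` and the function names are illustrative assumptions, not the paper's actual implementation; in real training the enclosing-box term in the exponent is detached from the gradient, which a plain-number sketch cannot express.

```python
import math

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def wise_iou_v1(pred, gt):
    """Wise-IoU v1 loss: R_WIoU * (1 - IoU), illustrative sketch."""
    # Centers of the predicted and ground-truth boxes.
    cxp, cyp = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cxg, cyg = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    # Width/height of the smallest box enclosing both boxes.
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    # Distance-based attention factor; the denominator is treated as a
    # constant (detached) during backpropagation in the original loss.
    r = math.exp(((cxp - cxg) ** 2 + (cyp - cyg) ** 2) / (wg ** 2 + hg ** 2))
    return r * (1.0 - iou(pred, gt))
```

A perfectly matched prediction gives a loss of zero, while hard (distant, low-overlap) samples receive a loss greater than the plain IoU loss, which is how the weighting mitigates imbalanced sample difficulty.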