This article concerns the secure containment control problem for multiple autonomous aerial vehicles. A cyber attacker can manipulate control commands, causing containment failure in the position loop. Within a zero-sum graphical game framework, the secure containment controllers and the malicious attackers are regarded as game players, and the attack-defense process is recast as a min-max optimization problem. Acquiring the optimal distributed secure control policies requires solving the game-related Hamilton-Jacobi-Isaacs (HJI) equations. A reinforcement learning (RL) method based on a critic-only neural network (NN) structure is employed to solve the coupled HJI equations. A fixed-time convergence technique is introduced to improve the convergence rate of the RL algorithm, and an experience replay mechanism is utilized to relax the persistence-of-excitation condition. Convergence of the NN weights and closed-loop stability are analyzed. In the attitude loop, the optimal feedback control law is obtained by solving the Hamilton-Jacobi-Bellman equations with the fixed-time convergent RL method. A simulation example and a quadrotor experiment demonstrate the effectiveness of the proposed scheme.
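To make the zero-sum game and critic-only RL ideas concrete, the following is a minimal, illustrative sketch, not the paper's multi-agent fixed-time algorithm. It uses an assumed scalar system `x_dot = a*x + b*u + k*w` with cost `r = q*x**2 + u**2 - gamma**2*w**2` (all parameter names are hypothetical), a critic `V(x) = W*x**2`, policies derived from the critic for both the controller (minimizer) and the attacker (maximizer), and a replay buffer of sampled states over which the HJI residual is driven to zero by gradient descent.

```python
import numpy as np

# Hypothetical scalar zero-sum game (for illustration only):
#   x_dot = a*x + b*u + k*w,  cost rate r = q*x^2 + u^2 - gamma^2 * w^2
a, b, k, q, gamma = -1.0, 1.0, 0.5, 1.0, 1.0

# Critic-only approximation: V(x) ~= W * x^2, so dV/dx = 2*W*x.
W = 0.1          # critic weight estimate
lr = 0.05        # learning rate
buffer = []      # experience-replay buffer of visited states

rng = np.random.default_rng(0)
for step in range(2000):
    buffer.append(rng.uniform(-2.0, 2.0))        # explore a new state
    batch = rng.choice(buffer, size=min(32, len(buffer)))
    for xs in batch:                             # replay stored states
        dV = 2.0 * W * xs
        u = -0.5 * b * dV                        # minimizing player (controller)
        w = k * dV / (2.0 * gamma**2)            # maximizing player (attacker)
        xdot = a * xs + b * u + k * w
        r = q * xs**2 + u**2 - gamma**2 * w**2
        e = dV * xdot + r                        # HJI (Hamiltonian) residual
        grad = 2.0 * xs * xdot                   # de/dW with policies held fixed
        W -= lr * e * grad / (1.0 + grad**2)     # normalized gradient step
```

In this scalar case the residual vanishes at the positive root of the quadratic HJI condition, so `W` settles near the analytic game value; the replay loop shows how revisiting stored states can substitute for a persistently exciting trajectory. The paper's fixed-time weight-update law, which this plain gradient step stands in for, is what accelerates convergence in the actual scheme.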