This article proposes inverse reinforcement learning (IRL) algorithms for tracking control of linear networked control systems subject to random state dropouts during wireless transmission. The controlled system aims to track the optimal trajectory of a target system even though the cost function governing the target's behavior is unknown. The problem is complicated by random state dropouts arising in two crucial channels: 1) the reception of the target's state and 2) the feedback of the controlled system's own state. Our approach enables the controlled system to infer the target's cost function and optimal control policy, thereby achieving effective tracking. Specifically, we first develop a model-based IRL algorithm that integrates the Smith predictor for state estimation. We then develop a state-dropout-aware inverse Q-learning algorithm that relies solely on accessible system data, eliminating the need for system models. The theoretical validity of the proposed algorithms is rigorously established, and their practical effectiveness is demonstrated through numerical simulations.
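As a rough illustration of the dropout-compensation idea (a minimal sketch, not the paper's algorithm), the following Python snippet assumes hypothetical linear target dynamics and a Bernoulli packet-loss model; the names A, p_drop, and x_hat are illustrative assumptions rather than the paper's notation. When a transmission drops, the receiver propagates its last estimate through the nominal model, in the spirit of the Smith-predictor-based state estimation described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical autonomous target dynamics x_{k+1} = A x_k;
# A and p_drop are illustrative assumptions, not values from the paper.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
p_drop = 0.3   # assumed Bernoulli dropout probability of the wireless link
T = 50

x = np.array([1.0, -1.0])   # true target state
x_hat = x.copy()            # receiver-side estimate of the target state

for k in range(T):
    x = A @ x                        # target evolves
    received = rng.random() > p_drop # Bernoulli packet-loss model
    if received:
        x_hat = x.copy()             # packet arrives: use the fresh state
    else:
        x_hat = A @ x_hat            # dropout: model-based one-step
                                     # prediction from the last estimate
```

The same prediction-on-dropout pattern applies to the feedback channel of the controlled system's own state; the data-driven inverse Q-learning variant would replace the explicit model A with quantities identified from measured trajectories.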