Caching content at unmanned aerial vehicles (UAVs) can significantly improve the content-fetching performance of request users (RUs). In this paper, we study the joint UAV trajectory design, content fetching, power allocation, and content placement problem in multi-UAV-aided networks, where multiple UAVs transmit contents to their assigned RUs. To minimize the energy consumption of the system, we formulate a constrained optimization problem that jointly designs UAV trajectories, power allocation, content fetching, and content placement. Since the original minimization problem is a mixed-integer nonlinear programming (MINLP) problem that is difficult to solve, we first transform it into a semi-Markov decision process (SMDP). We then develop a new technique, option-based hierarchical deep reinforcement learning (OHDRL), to solve the joint optimization problem. We define UAV trajectory planning and power allocation as the low-level action space, and content placement and content fetching as the high-level option space. This strategy can handle the stochastic optimization: the agent selects an option at the high level, and actions are then executed at the low level according to the chosen option's policy. Numerical results show that, compared with the existing technique, the proposed approach achieves more stable learning performance and lower energy consumption.
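The two-level decision structure described above can be illustrated with a minimal tabular sketch, assuming a toy environment: a high-level policy picks an option (standing in for a content placement/fetching decision), and a low-level policy then executes primitive actions (standing in for trajectory/power choices) until the option terminates, with an SMDP-style update for the high level. All state, option, and action dimensions, the environment dynamics, and the "energy cost" reward here are illustrative assumptions, not the paper's actual system model or learning architecture (which uses deep networks rather than tables).

```python
import random

N_STATES, N_OPTIONS, N_ACTIONS = 4, 2, 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# High-level Q-values over options, and one low-level Q-table per option.
q_high = [[0.0] * N_OPTIONS for _ in range(N_STATES)]
q_low = [[[0.0] * N_ACTIONS for _ in range(N_STATES)]
         for _ in range(N_OPTIONS)]

def env_step(state, action):
    # Toy stand-in for the UAV environment: a random next state and a
    # reward equal to the negative of a fabricated "energy cost".
    next_state = random.randrange(N_STATES)
    energy_cost = 1.0 + 0.5 * action
    return next_state, -energy_cost

def eps_greedy(q_row):
    # Epsilon-greedy selection over one row of Q-values.
    if random.random() < EPS:
        return random.randrange(len(q_row))
    return max(range(len(q_row)), key=lambda a: q_row[a])

def run_episode(horizon=20, option_len=4):
    state = random.randrange(N_STATES)
    t = 0
    while t < horizon:
        option = eps_greedy(q_high[state])   # high level: choose an option
        s0, ret, disc = state, 0.0, 1.0
        for _ in range(option_len):          # low level: primitive actions
            action = eps_greedy(q_low[option][state])
            next_state, reward = env_step(state, action)
            best_next = max(q_low[option][next_state])
            q_low[option][state][action] += ALPHA * (
                reward + GAMMA * best_next - q_low[option][state][action])
            ret += disc * reward
            disc *= GAMMA
            state, t = next_state, t + 1
            if t >= horizon:
                break
        # SMDP-style high-level update over the whole option duration:
        # the accumulated discounted return plus the discounted value of
        # the best option in the state where the option ended.
        best_opt = max(q_high[state])
        q_high[s0][option] += ALPHA * (ret + disc * best_opt - q_high[s0][option])

random.seed(0)
for _ in range(200):
    run_episode()
```

The key structural point is that the high-level update spans the entire (variable-length) option execution, which is what makes the formulation semi-Markov rather than a flat MDP.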