Reconstructing deformable soft tissues from endoscopic videos is a critical yet challenging task. By leveraging depth priors, deformable implicit neural representations have advanced significantly in this field. However, depth priors from pre-trained depth estimation models are often coarse, and inaccurate depth supervision can severely impair the performance of these neural networks. Moreover, existing methods overlook local similarities in input sequences, which limits their ability to capture local details and tissue deformations. In this paper, we introduce UW-DNeRF, a novel approach that uses neural radiance fields for high-quality reconstruction of deformable tissues. We propose an uncertainty-guided depth supervision strategy to mitigate the impact of inaccurate depth information: it relaxes hard depth constraints and unlocks the potential of implicit neural representations. In addition, we design a local window-based information sharing scheme that employs local window and keyframe deformation networks to construct deformations with local awareness, enhancing the model's ability to capture fine details. We demonstrate the superiority of our method over state-of-the-art approaches on synthetic and in vivo endoscopic datasets. Code is available at: https://github.com/IRMVLab/UW-DNeRF.