ICD (International Classification of Diseases) coding, the task of assigning appropriate ICD codes to clinical notes, is essential for healthcare processes such as health expense claims, insurance claims, and disease research. Manual ICD coding is time-consuming and error-prone, driving the need for automation. However, clinical notes often contain ungrammatical expressions, abbreviations, professional terminology, and synonyms, making them considerably noisier than general documents. ICD coding also faces a large label space and a long-tailed code distribution, which together make automatic ICD coding highly challenging. Large Language Models (LLMs) have shown great potential for code extraction tasks owing to their strong natural language understanding and information extraction capabilities. However, the unique characteristics of clinical records and ICD codes necessitate fine-tuning LLMs for optimal performance in ICD coding. In this study, we propose a novel fine-tuning framework for LLMs aimed at automatic ICD coding. Our framework introduces additional elements, including a label attention mechanism, note-relevant knowledge injection based on medical expressions, and knowledge-driven sampling to address the input token limitations of LLMs. Experiments on the MIMIC-III-50 dataset show that our framework outperforms vanilla fine-tuning in both micro and macro accuracy and F1 scores, with particularly significant improvements observed in encoder-decoder models.
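To make the label attention component concrete, the following is a minimal sketch of how such a layer is commonly realized for multi-label ICD coding: each label owns a learned query that attends over the token representations produced by the LLM, yielding one label-specific document representation and one logit per code. The class name, argument names, and the shared single-output classifier are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LabelAttention(nn.Module):
    """Per-label attention over token representations (illustrative sketch)."""

    def __init__(self, d_model: int, num_labels: int):
        super().__init__()
        # One learnable query vector per ICD label.
        self.label_queries = nn.Parameter(torch.randn(num_labels, d_model))
        # Shared projection mapping each label representation to a logit.
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, d_model) from the fine-tuned LLM.
        # Score every label query against every token: (batch, labels, seq_len).
        scores = torch.einsum("ld,bsd->bls", self.label_queries, hidden_states)
        weights = torch.softmax(scores, dim=-1)
        # Label-specific document representations: (batch, labels, d_model).
        label_repr = torch.einsum("bls,bsd->bld", weights, hidden_states)
        # One logit per label for multi-label classification: (batch, labels).
        return self.classifier(label_repr).squeeze(-1)

# Usage sketch: 50 labels as in MIMIC-III-50, hidden size 768 assumed.
attn = LabelAttention(d_model=768, num_labels=50)
hidden = torch.randn(2, 128, 768)          # dummy encoder outputs
logits = attn(hidden)                       # (2, 50), fed to a sigmoid loss
```

A per-label query lets the model ground each code's decision in different spans of a long, noisy note, which is why such layers are a common fit for the broad label space described above.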