In this proof-of-concept study, we demonstrated how Large Language Models (LLMs) can automate the conversion of unstructured case reports into clinical ratings. By embedding instructions from a standardized clinical rating scale in the prompt and evaluating the LLM's confidence in its outputs, we aimed to refine prompting strategies and enhance reproducibility. Applying this strategy to case reports of drug-induced Parkinsonism, we showed that LLM-extracted data closely align with manual extraction by clinical raters, achieving an accuracy of 90%.
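To make the extraction loop concrete, the sketch below shows one way such a pipeline might be structured. It is a minimal illustration under stated assumptions, not the study's actual implementation: the `call_llm` helper, the condensed `SCALE_INSTRUCTIONS`, the 0-4 severity scale, and the JSON response schema are all hypothetical, and the confidence threshold simply routes uncertain outputs to manual review.

```python
import json

# Illustrative sketch only. SCALE_INSTRUCTIONS, call_llm, and the JSON
# schema are assumptions for demonstration, not the study's actual prompt
# or rating scale.

SCALE_INSTRUCTIONS = (
    "You are applying a standardized clinical rating scale. Rate the "
    "severity of drug-induced Parkinsonism described in the case report "
    "on a 0-4 scale (0 = absent, 4 = severe)."
)


def build_prompt(case_report: str) -> str:
    """Embed the rating-scale instructions and request structured output."""
    return (
        f"{SCALE_INSTRUCTIONS}\n\n"
        f"Case report:\n{case_report}\n\n"
        'Respond with JSON only: {"rating": <int 0-4>, '
        '"confidence": <float 0-1>, "evidence": "<quoted text>"}'
    )


def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call (hypothetical).

    In practice this would invoke an LLM API; here it returns a canned
    response so the sketch runs end to end.
    """
    return '{"rating": 2, "confidence": 0.85, "evidence": "bilateral rigidity"}'


def extract_rating(case_report: str, min_confidence: float = 0.7):
    """Parse the model's JSON and keep only sufficiently confident outputs.

    Malformed or low-confidence answers return None, flagging the case
    for manual review by a clinical rater.
    """
    raw = call_llm(build_prompt(case_report))
    try:
        result = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed output -> route to manual review
    if result.get("confidence", 0.0) < min_confidence:
        return None  # low self-reported confidence -> route to manual review
    return result


if __name__ == "__main__":
    report = "A 70-year-old patient on haloperidol developed bilateral rigidity."
    print(extract_rating(report))
```

A design note on this sketch: asking the model for a confidence score alongside the rating gives a simple lever for refining prompts, since cases that repeatedly fall below the threshold point to instructions that need clarification.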