Palmyra-20b and Palmyra-40b are two cutting-edge large language models (LLMs) that were fine-tuned and evaluated for medical language understanding tasks. By applying instruction-based fine-tuning on a custom-curated medical dataset of 200,000 examples, we create two novel fine-tuned models, Palmyra-Med-20b and Palmyra-Med-40b. Performance is then measured across multiple medical knowledge datasets, including PubMedQA and MedQA. Our fine-tuned models outperform both their base counterparts and other LLMs pretrained on domain-specific knowledge. This research demonstrates the effectiveness of instruction-based fine-tuning in enhancing LLM performance in the medical domain.
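To make the method concrete, the sketch below shows how a single medical QA record might be rendered into an instruction-tuning training string. This is a minimal illustration of the general instruction-tuning recipe, not the paper's actual template: the field names, section headers, and example record are all assumptions.

```python
# Minimal sketch of instruction-based fine-tuning data preparation.
# The prompt template and field names below are illustrative assumptions,
# not the exact format used for Palmyra-Med.

def format_example(instruction: str, context: str, answer: str) -> str:
    """Render one (instruction, context, answer) record as a training string."""
    return (
        "### Instruction:\n" + instruction + "\n\n"
        "### Context:\n" + context + "\n\n"
        "### Response:\n" + answer
    )

# Hypothetical PubMedQA-style record for illustration.
record = {
    "instruction": "Answer the medical question using the provided context.",
    "context": "Abstract: ... (PubMed abstract text) ...",
    "answer": "yes",
}

text = format_example(**record)
print(text)
```

During fine-tuning, each of the 200,000 curated examples would be serialized this way and the model trained on the resulting strings, typically with the loss computed on (at least) the response tokens.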