Fine-tuning for GPT-3.5 Turbo is now available, giving developers the ability to customize the model for their use cases. Early tests show that a fine-tuned version of GPT-3.5 Turbo can match or outperform base GPT-4 capabilities on certain narrow tasks. This update also enables businesses to improve steerability, format responses consistently, customize model tone, and shorten prompts: fine-tuning with GPT-3.5 Turbo supports training examples of up to 4k tokens, and early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself. Support for fine-tuning with function calling and gpt-3.5-turbo-16k is coming soon. Check out the fine-tuning guide for more information.
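As a minimal sketch of the workflow, the snippet below builds a small training file in the chat-format JSONL that fine-tuning expects (one JSON object per line, each a list of `system`/`user`/`assistant` messages). The upload and job-creation calls are shown as comments only, since they require an API key; the exact call names follow the fine-tuning guide at the time of this announcement and may differ in newer client versions.

```python
import json

# Each training example is one JSON line in the chat format:
# a "messages" list with system, user, and assistant turns.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a terse assistant."},
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "Paris."},
        ]
    },
]

# Write the dataset as JSONL: one serialized example per line.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Uploading the file and starting a fine-tuning job would look roughly
# like this (requires OPENAI_API_KEY; not executed in this sketch):
#
#   import openai
#   uploaded = openai.File.create(file=open("train.jsonl", "rb"),
#                                 purpose="fine-tune")
#   openai.FineTuningJob.create(training_file=uploaded.id,
#                               model="gpt-3.5-turbo")
#
# The completed job returns a model ID (e.g. "ft:gpt-3.5-turbo:...")
# that can be passed to the chat completions endpoint like any model.
```

Keeping prompts short works because the fine-tuned weights absorb the standing instructions that would otherwise be repeated in every request's system message.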