
ChatGPT-maker OpenAI opens GPT-3.5 Turbo for fine-tuning

Businesses can now customize OpenAI's GPT-3.5 Turbo model for their specific needs, and GPT-4 will become available for fine-tuning later this fall.

OpenAI has opened up its powerful new GPT-3.5 Turbo language model for custom fine-tuning, allowing businesses to tailor the bot’s responses for their unique use cases.

Fine-tuning trains an existing artificial intelligence (AI) model on additional data provided by a customer, creating a specialized version optimized for certain tasks. Companies can now fine-tune GPT-3.5 Turbo with their own data to produce customized chatbots that sound more natural, respond reliably in the desired format, and mimic the brand’s tone of voice.
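
As a rough sketch of what "their own data" looks like, OpenAI's chat fine-tuning format is JSON Lines, with each line holding one example conversation; the bank-support examples and file name below are hypothetical:

```python
# Hypothetical sketch: building a tiny JSONL training file in the chat format
# OpenAI uses for GPT-3.5 Turbo fine-tuning (one example conversation per line).
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are Acme Bank's friendly support assistant."},
        {"role": "user", "content": "What's my daily transfer limit?"},
        {"role": "assistant", "content": "Your standard daily limit is $5,000. Want a temporary increase?"},
    ]},
    {"messages": [
        {"role": "system", "content": "You are Acme Bank's friendly support assistant."},
        {"role": "user", "content": "How do I freeze my card?"},
        {"role": "assistant", "content": "Open the app, tap Cards, then Freeze. It takes effect instantly."},
    ]},
]

# A real training set would contain many more examples than this.
with open("support_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```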

Early tests by OpenAI suggest fine-tuned versions of GPT-3.5 Turbo can match or even exceed the capabilities of GPT-4 for some focused applications. GPT-4 itself will also be opened up for fine-tuning later this fall.

How Does Fine-Tuning Help?

Fine-tuning teaches the model the ideal way to interpret and respond to queries in a specific domain by exposing it to relevant examples. This allows customers to cut down on the length of prompts needed while still getting high-quality outputs.

For instance, a software firm can fine-tune the model to reliably format and complete code snippets on request. A bank, meanwhile, can tailor the bot to answer customer service queries concisely while maintaining its brand voice.
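
In practice, launching a fine-tuning job takes two API calls. Here is a minimal sketch using OpenAI's Python SDK (v1.x), assuming an OPENAI_API_KEY environment variable and the hypothetical training file built above:

```python
# Minimal sketch of starting a GPT-3.5 Turbo fine-tuning job with the
# OpenAI Python SDK (v1.x). The training file name is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the JSONL training data.
training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Create the fine-tuning job against the base model.
job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file=training_file.id,
)
print(job.id, job.status)  # poll until the status becomes "succeeded"
```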

Data Privacy and Pricing at OpenAI

Importantly, data used to fine-tune models remains entirely owned by the customer, and OpenAI confirms it is never used to train any other AI systems.

Pricing is $0.008 per 1,000 training tokens, $0.012 per 1,000 tokens sent as input to the customized model, and $0.016 per 1,000 tokens it generates as output.
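
A quick back-of-the-envelope calculation shows how those rates combine; all token volumes below are hypothetical:

```python
# Cost sketch at the quoted rates (USD per 1,000 tokens); volumes are made up.
TRAIN_RATE, INPUT_RATE, OUTPUT_RATE = 0.008, 0.012, 0.016

train_tokens = 500_000      # tokens processed during fine-tuning
monthly_input = 2_000_000   # tokens sent to the fine-tuned model
monthly_output = 1_000_000  # tokens generated by it

cost = (train_tokens * TRAIN_RATE
        + monthly_input * INPUT_RATE
        + monthly_output * OUTPUT_RATE) / 1_000
print(f"${cost:,.2f}")  # $44.00: $4 training + $24 input + $16 output
```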

Up to 4,000 Tokens

GPT-3.5 Turbo can handle up to 4,000 tokens for fine-tuning, double the limit of previous fine-tunable OpenAI models like davinci. This expands the potential complexity of the data it can learn from.

A token represents a single unit of text in language models like GPT-3.5. It can be a word, subword, punctuation mark, or other discrete textual element.
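
To get a feel for what that limit means, OpenAI's open-source tiktoken library counts tokens the same way the model does; the sample sentence here is arbitrary:

```python
# Counting tokens with tiktoken to see how text maps onto the 4,000-token window.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
text = "Fine-tuning tailors GPT-3.5 Turbo to your use case."
tokens = enc.encode(text)
print(len(tokens))         # number of tokens the model would see
print(enc.decode(tokens))  # round-trips back to the original string
```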

Some early testers were able to cut prompt sizes by up to 90% after fine-tuning the bot, significantly reducing compute costs. Fine-tuning is most powerful when combined with techniques like prompt engineering, retrieval methods, and function calling.
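
The prompt savings come from moving tone and formatting instructions out of every request and into the model itself. A hypothetical after-fine-tuning call might shrink to just the user's question; the "ft:" model ID below is a made-up placeholder:

```python
# Calling a fine-tuned model: the style instructions live in the weights,
# so the prompt is only the question itself. Model ID is hypothetical.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-0613:acme::abc123",  # placeholder fine-tuned model ID
    messages=[{"role": "user", "content": "How do I freeze my card?"}],
)
print(response.choices[0].message.content)
```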

As AI becomes increasingly capable of human-like reasoning, fine-tuning provides the level of specialization needed for commercial viability across diverse real-world applications. Rather than relying on broad pre-trained models, businesses can now customize bots to efficiently solve niche problems just like an expert human assistant would.

Also read: Elon Musk launches AI firm ‘xAI’ to create ChatGPT rival

OpenAI shared the news on X (formerly known as Twitter).
