Short Answer: Perhaps not in every case.
On the Coda forum, I was recently asked about ways to make GPT models less costly. The inquirer was concerned about developing solutions with OpenAI dependencies and the costs they might incur.
"This method wouldn't be scalable for millions of rows, right? Unless OpenAI lets you store and train a database on their servers so that you don't need to give the same text input every time."
Scalable is a deeply contextual term. Is OpenAI itself scalable? That depends on how many GPUs and millions of dollars you have stacked up, right?
To answer the question, we need to put a finer point on the definition of scale and on the business value of using AI for, or in, a specific solution. But it's clear that OpenAI does allow you to upload your data as the basis for a new derivative model, one that is likely to provide lower-cost AI solutions.
In a broad sense, imagine you have 1,000 "objects" that each describe some facet of knowledge in the context of a specific product. Let's use CyberLandr as an example. There are about 1,000 [known] use cases for this product, but we can distill the example by focusing on just the urban use cases, which number roughly 350.
If we have a list of urban use cases, we can create a fine-tuned model that covers all known urban use cases, and we can use GPT-3 itself to generate questions about each one. Given a use case, we can ask GPT-3 to generate five questions about it. Armed with these question variants, we can build a training data set from three of the questions, holding back the other two as our test data set. The purpose of this work is to create a chat tool that can carry on a conversation with prospective CyberLandr buyers. We have another project that helps CyberLandr owners locate unique places to use Cybertruck and CyberLandr in new ways - i.e., wine country, farm tourism, deep overlanding where electricity may be scarce.
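The 3/2 train/test split described above can be sketched in a few lines. This is a minimal illustration; the use-case names and question strings are placeholders, not real CyberLandr data:

```python
import random

def split_questions(use_case_questions, train_n=3, seed=42):
    """Split each use case's generated questions into train/test sets.

    use_case_questions: dict mapping a use case name to the five
    question variants generated for it.
    Returns (train, test) lists of (use_case, question) pairs.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    train, test = [], []
    for use_case, questions in use_case_questions.items():
        shuffled = questions[:]
        rng.shuffle(shuffled)
        for q in shuffled[:train_n]:      # three questions for training
            train.append((use_case, q))
        for q in shuffled[train_n:]:      # two held back for testing
            test.append((use_case, q))
    return train, test

# Example with two of the ~350 urban use cases (placeholder text):
sample = {
    "mobile office": [f"office q{i}" for i in range(1, 6)],
    "urban camping": [f"camping q{i}" for i in range(1, 6)],
}
train, test = split_questions(sample)
print(len(train), len(test))  # prints: 6 4
```

Shuffling before the split avoids always holding back the same two question variants for every use case.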
We submit the questions and answers to GPT-3 as the basis for our new model, and then we test that model with the test data set we withheld from training. We can gauge performance with confidence scores and gather the data we need to determine whether more training is required.
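Submitting questions and answers for fine-tuning means formatting them as JSONL records of prompt/completion pairs. A minimal sketch, following the conventions of OpenAI's legacy fine-tuning data-preparation guidance (the separator and stop strings below come from that guidance; the sample Q&A is invented for illustration):

```python
import json

# Separator marking the end of each prompt, and a stop string marking
# the end of each completion, per the legacy fine-tuning guidance.
PROMPT_SEP = "\n\n###\n\n"
STOP = "\nEND"

def to_jsonl(pairs):
    """pairs: iterable of (question, answer) tuples -> JSONL string."""
    lines = []
    for question, answer in pairs:
        record = {
            "prompt": question + PROMPT_SEP,
            # completions start with a leading space in that guidance
            "completion": " " + answer + STOP,
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

print(to_jsonl([
    ("Can CyberLandr power appliances off-grid?",
     "Yes, it draws power from Cybertruck's onboard battery."),
]))
```

The resulting file is what gets uploaded to create the derivative model; the held-back test questions are kept out of it entirely.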
Everything in GPT has a cost, but a fine-tuning approach is almost universally less costly than prompt-engineering your solution with a fixed preamble of examples. Bear in mind: if you must constantly feed GPT-3 those discrete prompts, the costs mount. This is why I believe fine-tuning is one of the best ways to scale GPT projects to a level of financial practicality.
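A back-of-the-envelope sketch shows why re-sending a long preamble with every request adds up. The per-token prices and token counts below are illustrative assumptions, not quoted OpenAI rates:

```python
# Assumed prices in dollars per token (illustrative only).
PROMPT_PRICE = 0.02 / 1000      # base model with in-prompt context
FINE_TUNED_PRICE = 0.03 / 1000  # fine-tuned model usage

context_tokens = 1500   # fixed preamble re-sent with every request
question_tokens = 50    # the user's actual question
requests = 100_000      # assumed volume

# Prompt-engineered: context + question paid for on every request.
prompt_cost = (context_tokens + question_tokens) * requests * PROMPT_PRICE

# Fine-tuned: the knowledge lives in the model, so only the question is sent.
fine_tuned_cost = question_tokens * requests * FINE_TUNED_PRICE

print(f"prompt-engineered: ${prompt_cost:,.0f}")   # prints: $3,100
print(f"fine-tuned:        ${fine_tuned_cost:,.0f}")  # prints: $150
```

Even at a higher per-token rate, the fine-tuned model wins once the fixed preamble is large, because that preamble is paid for on every single request.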
IMPORTANT: AI models need data, and lots of it. The more data, the more valuable your fine-tuned model becomes. A fine-tuned model is the ideal way to build ever-increasing value in AI, and it serves us well as a framework for improving the model.
Almost every integration with OpenAI [thus far] has been tactical; everyone wants an AI checkmark on their product. If your AI project is strategic, you will create workflows that harvest user experience in ways that improve the model - ergo, it needs an element of ML as much as it begins with AI.
© 2023 Global Technologies Corporation. All rights reserved.