Working Notes


We need a tool to determine the number of tokens.
The full 14-location file for Burger and Lobster was 74K tokens, basically the limit for OpenAI. The context window needs to be much larger AND the API calls need to be much tighter for the effort we have in mind.
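
A minimal sketch of such a token-counting tool, assuming OpenAI's tiktoken library and its cl100k_base encoding (Gemini tokenizes differently, so its counts would only be approximate here); the file name is illustrative.

```python
# Token-counting sketch using OpenAI's tiktoken library.
# The file path and encoding choice (cl100k_base, used by GPT-4-class
# models) are illustrative assumptions.
import tiktoken

def count_tokens(path: str, encoding_name: str = "cl100k_base") -> int:
    with open(path, "r", encoding="utf-8") as f:
        text = f.read()
    enc = tiktoken.get_encoding(encoding_name)
    return len(enc.encode(text))

if __name__ == "__main__":
    # e.g. the 14-location Burger and Lobster export noted above
    print(count_tokens("burger_and_lobster_locations.json"))
```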

The standard-cost models with a 128K cap will only work with very streamlined API data calls to build the context windows. People are seeing drop-offs in performance when the data exceeds 50% of the maximum context window, so we should be targeting a cap of 64K.
The 1-million-token window from Gemini would get us much further, but the costs need to be cut by at least 5X to make it economical.
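
A rough sketch of that 50%-of-window budgeting rule, using the window sizes mentioned above as illustrative constants rather than confirmed model limits:

```python
# Keep any payload under 50% of the model's context window.
# Window sizes are the figures discussed above, used here for illustration.
CONTEXT_WINDOWS = {
    "openai-128k": 128_000,   # the standard 128K cap
    "gemini-1m": 1_000_000,   # the 1M window from Gemini
}

def fits_budget(token_count: int, model: str, utilization: float = 0.5) -> bool:
    """Return True if the payload stays within the target fraction of the window."""
    return token_count <= CONTEXT_WINDOWS[model] * utilization

# The 74K-token Burger and Lobster file blows past a 64K target on a 128K model.
print(fits_budget(74_000, "openai-128k"))  # False
```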

Each API call to search the data set will need to be trimmed to the most basic data set for each evaluation to keep the token count to a minimum. It looks like the trimmed .json payloads will be relatively small in tokens; updating the hours for 100 locations might be only ~60K tokens.
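
A hypothetical sketch of that trimming step, reducing a full location record to an hours-only payload; the field names ("id", "name", "hours") are assumptions, not the platform's actual schema.

```python
# Trim full location records down to the minimal fields needed for one
# evaluation (here, an hours update). Field names are assumed for illustration.
from typing import Any

def trim_for_hours_update(location: dict[str, Any]) -> dict[str, Any]:
    keep = ("id", "name", "hours")
    return {k: location[k] for k in keep if k in location}

def build_hours_payload(locations: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Strip every location to the hours-related fields before sending."""
    return [trim_for_hours_update(loc) for loc in locations]
```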

A full location .json payload is about 5,300 tokens (24,000 characters), which costs roughly 1 cent with Gemini 1.0 Pro (as of 2/27/24). The hours update for 100 locations above might cost around 12 cents to process. This can get expensive at current rates, but even a 50% cost reduction by mid-2024 makes this work nicely.
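
A back-of-the-envelope check of those numbers, using the rates quoted above (the 2/27/24 estimates) rather than current pricing:

```python
# Cost check based on the figures above: ~5,300 tokens per full location
# at ~1 cent each, and ~60K tokens for an hours-only update across 100 locations.
COST_PER_TOKEN = 0.01 / 5_300          # ~1 cent per 5,300-token payload

full_payload_cost = 5_300 * COST_PER_TOKEN     # ~ $0.01
hours_update_cost = 60_000 * COST_PER_TOKEN    # ~ $0.11, roughly the 12 cents noted
after_price_cut = hours_update_cost * 0.5      # ~ 6 cents with a 50% reduction

print(f"{full_payload_cost:.3f} {hours_update_cost:.3f} {after_price_cut:.3f}")
```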



Next Steps to Move Forward with the DEV Team
Build API tools to call the most specific elements of the location data set.
Gather all of the .json packages for typical files to send to Google.
At least one sample React code structure for the platform, to determine what is available.
Table name structures, to determine if anything can be aligned more closely.
A sandbox in the platform for superadmins to upload .json files for processing.
A process to email the .json packets to the team to process in the sandbox.
Build out a pitch deck explaining the capabilities.
