Hey there! In this video, I'll give you a quick walkthrough of Promptology, a tool I built in Coda to help create and manage AI prompts efficiently. I'll cover productivity challenges in the age of artificial general intelligence, the AI at Work Challenge, and the paradox we face with AI. I'll also dive into prompt construction and repeatability, sharing examples and tips for streamlining the workflow.
The video demonstrates the Prompt Workbench and how to create, test, and save prompts. I'll also explain how Promptology can improve productivity, its potential for cross-team collaboration, and how it aligns with the judging criteria for the AI at Work Challenge. So, grab a cup of coffee, and let's dive into the world of Promptology!
Productivity in the Age of Artificial General Intelligence
At no time in modern history has a single skill gated our future productivity so profoundly: the ability to create, test, and reuse prompts.
Promptology is about one thing: facing the many challenges of AI prompt development and getting through this gate as productively as possible. Remember how unproductive you were when you first tried to use Google Search? Prompt development is somewhat like that, but harder to master. You may have already experienced many misfires and frustrations with AI. Over time, your search queries slowly improved and became second nature. Promptology is designed to avoid some of the misfires and, hopefully, some of the frustration.
It is a tool that existed long before the challenge was announced and has been delivering AI productivity for many months. It existed before Coda AI was in alpha testing and originally utilized OpenAI APIs through a custom Pack. This version has been streamlined for open submission to the challenge, but it is no less powerful than the one used by me and my team.
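For readers curious what that earlier integration might have looked like, here is a minimal sketch of a Pack formula calling OpenAI's chat completions endpoint with the Coda Packs SDK. The formula name, parameter, and model are illustrative assumptions, not the original Pack's code.

```typescript
// Minimal sketch of a custom Pack formula that sends a prompt to OpenAI.
// Illustrative only: the formula name, parameter, and model are assumptions.
import * as coda from "@codahq/packs-sdk";

export const pack = coda.newPack();

pack.addNetworkDomain("api.openai.com");
pack.setUserAuthentication({
  type: coda.AuthenticationType.HeaderBearerToken, // the maker's OpenAI API key
});

pack.addFormula({
  name: "AskOpenAI", // hypothetical formula name
  description: "Send a prompt to OpenAI and return the completion text.",
  parameters: [
    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "prompt",
      description: "The fully assembled prompt text.",
    }),
  ],
  resultType: coda.ValueType.String,
  execute: async function ([prompt], context) {
    const response = await context.fetcher.fetch({
      method: "POST",
      url: "https://api.openai.com/v1/chat/completions",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "gpt-4o-mini", // assumed model for the sketch
        messages: [{ role: "user", content: prompt }],
      }),
    });
    return response.body.choices[0].message.content;
  },
});
```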
As Coda users, we are quick to focus on building “the thing”. Promptology's relevance to the challenge lies in scaling AI productivity to build many things, not just solving one specific problem with AI. The Promptology Workbench is simply the “thing” that helps you build many other “things”. For this reason, it's essential to think about the extended productivity this tool can produce. Promptology is to AI productivity what compound interest is to Bank of America.
I certainly want to win the challenge, but I’ve already won in a big way. The Codans have ensured that we are all winners.
Coda
In my view, no product is more perfectly suited for AI prompt development and testing than Coda. To that end, Promptology uses almost every essential feature of Coda. From automation actions to JSON parsing, this tool leans on many of Coda's advanced capabilities, but there are key dependencies on simple ones too. Buttons, for example, are pervasively employed in AI prompt management processes.
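On the JSON parsing point: when an LLM is asked to return structured output, the response still has to be parsed into usable fields. Here is a minimal TypeScript sketch of that general pattern (not Promptology's actual implementation); the field names are assumptions that mirror the prompt components described later in this doc.

```typescript
// General pattern for handling structured LLM output: ask for JSON, then parse.
// The shape below is an assumption for illustration, not Promptology's schema.
interface ParsedPrompt {
  role: string;
  task: string;
  goal: string;
  steps: string[];
  rules: string[];
}

function parseModelJson(raw: string): ParsedPrompt | null {
  // Models often wrap JSON in extra prose; isolate the outermost braces first.
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end === -1) return null;
  try {
    return JSON.parse(raw.slice(start, end + 1)) as ParsedPrompt;
  } catch {
    return null; // invalid JSON: fall back to manual review
  }
}
```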
Coda AI
As you become more familiar with Promptology, you may begin to understand how central Coda AI is to this tool. At first glance, you might assume that Coda AI simply takes the prompt answers and hands them off to the AI Block for inferencing. One and done. In reality, this tool uses AI to support every aspect of the workflow to reduce your effort.
AI’s Productivity Promise
Coda AI brings with it the prospect of effortless content generation. LLMs (large language models) deliver on this promise when we ask them to create words. But they also have a well-deserved reputation for generating words that aren’t what you might expect from an intelligent system, however artificial it may be. When the AI decides to make stuff up, we think it's hallucinating. It’s not; it’s behaving exactly as LLMs are designed to do — expound on a topic and embellish as needed.
At the heart of an AI solution is the prompt, which attempts to guide the LLM to a satisfactory output.
Ironically, we benefit greatly when LLMs exercise a degree of verbosity. But this also comes with the possibility that the AI may be too exuberant, resulting in long-windedness or the prospect of it abandoning reason altogether. This is the dark side of artificial general intelligence (AGI). Lacking specific guidance in carefully constructed prompts, LLMs are left to generalize on their own - it's what they do well.
The hallucination problem (also called confabulation, which is my preferred term, fabrication, or simply making stuff up) refers to the tendency of language models (LMs) to generate text that deviates from what's objectively true (e.g., ChatGPT confidently inventing facts or citations).
Although confabulation is pervasive, and a no-go when factuality is required, it doesn't matter in some cases. ChatGPT is great for tasks where truthfulness isn't relevant (e.g., idea generation) or where mistakes can be assessed and corrected (e.g., reformatting). When boundless creativity is central (e.g., world-building), confabulation is even welcome.
AI’s Productivity Paradox
As early adopters of AI, we have grand visions of escalating our work output. There's no shortage of media outlets and Twitter threads that have convinced the masses that AI makes digital work a breeze. Reality check: it doesn't.
ChatGPT and Coda AI users typically experience poor results because successful prompts are not as easy to create as you might first imagine. How hard can it be? It's just words. The reality is that prompt writing is both hard and complex, depending on the AI objective.
Two aspects of prompt development are working against us.
Prompt Construction - most of us “wing” it when building prompts.
Prompt Repeatability - most of us are inclined to build AI prompts from scratch every time.
Getting these two dimensions right for any Coda solution takes patience, new knowledge, and a little luck. I assert that ...
The vast productivity benefits of AI are initially offset and possibly entirely overshadowed by the corrosive effects of learning how to construct prompts that work to your benefit.
The very nature of prompt development may have you running in circles in the early days of your AI adventure. You've probably experienced this frustration with ChatGPT or Bard. It's debilitating and often frustrating, like playing a game that never ends.
This is what you can expect to experience as you wade into AI. I created this visual based on tracking metrics I'm gathering in another Coda AI solution built to manage my own content production workflows.
Prompt Development Frustration
You make a prompt; it kinda works.
You modify it; it works a little better.
Rinse and repeat many times. The output sometimes gets better, but often gets worse.
You’ve forgotten what worked, and this process continues as you probe for better results.
Eventually, you adopt whichever prompt happened to be working when you reached the point of intolerance for further development.
You have no record of the attempts or a methodology for testing and improving your prompt text.
Prompt Construction
As mentioned earlier, prompt engineering is not unlike software development. And while Coda itself possesses the underlying infrastructure needed to turn prompt construction into a science, this template does not begin to explore all of the possible remedies that may produce AI advantages and higher productivity. But there's one prompt lesson that we should all learn right away.
LLMs speak before they think. The challenge is to get them to think before they speak.
It is well established that prompt-building is fraught with counterproductive issues. In Promptology, I provide a glimpse of how you might nudge the AI productivity lifecycle in your favor. Many examples demonstrate how to get LLMs to think before acting.
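A hedged example of what such a think-first prompt can look like (illustrative wording, not Promptology's exact text): the steps explicitly force the model to plan and review before it commits to an answer.

```typescript
// Sketch of a "think before you speak" prompt. The steps make the model plan
// and check its work before producing the final answer.
// Wording is illustrative, not Promptology's exact prompt text.
const thinkFirstPrompt = [
  "Role: You are a careful research assistant.",
  "Task: Answer the question below.",
  "Goal: A short, accurate answer the reader can trust.",
  "Steps:",
  "1. Restate the question in your own words.",
  "2. List the facts you are confident about and the ones you are unsure of.",
  "3. Draft an answer using only the confident facts.",
  "4. Review the draft for claims you cannot support, then revise it.",
  "Rules: If you are unsure, say so explicitly rather than guessing.",
  "",
  "Question: <your question here>",
].join("\n");
```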
Developing good prompts depends on your AI objectives. However, one aspect of productive prompt construction, and indeed any AGI activity, is gated by a testing protocol. You need to frame your prompts so they can be tested quickly and measured, however subjective your tests may be.
Prompt Structure
Reliable prompts that produce relatively consistent outputs generally follow a pattern that includes these components.
Role - the persona of this AI.
Task - stated clearly and definitively, explaining what you want the AI to do.
Goal - a concise statement about the final output of this prompt.
Steps - the precise steps you want the AI to follow to achieve the goal.
Rules - any additional guidelines that you want the AI to consider.
Promptology adheres to this prompt structure by guiding you to answer questions about each component. Coda Makers are, of course, free to re-engineer this pattern. However, the pattern is used by expert prompt engineers and has proven successful.
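To make the pattern concrete, here is a sketch of how the five components might be assembled into a single prompt string. The labels match the list above, though the exact formatting Promptology uses may differ.

```typescript
// Assemble a prompt from the five components described above.
// The section labels and ordering are assumptions about Promptology's format.
interface PromptComponents {
  role: string;
  task: string;
  goal: string;
  steps: string[];
  rules: string[];
}

function assemblePrompt(c: PromptComponents): string {
  return [
    `Role: ${c.role}`,
    `Task: ${c.task}`,
    `Goal: ${c.goal}`,
    "Steps:",
    ...c.steps.map((s, i) => `${i + 1}. ${s}`),
    "Rules:",
    ...c.rules.map((r) => `- ${r}`),
  ].join("\n");
}
```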
Prompt Repeatability
Building prompts for your personal and business activities is a big challenge; capturing them in a way that encourages reuse is just as important. Prompt repeatability is, at its core, a function of basic database design, and Coda rises to this challenge. Saved prompts include the full prompt constructed from the Role, Task, Goal, Steps, and Rules components, and they can be copied with a click of the Copy Prompt button.
Saved prompts can also be pulled back into the Prompt Template with the Restore Prompt button in the Saved Prompts table. In the near future, this process will be increasingly valuable as live data becomes common in LLMs.
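As a rough mental model only (the real Saved Prompts table has its own columns and buttons), saving, copying, and restoring amount to keeping each prompt's components alongside its assembled text so either can be pulled back out later.

```typescript
// Rough model of prompt repeatability; field and function names are assumptions.
// A saved prompt keeps both its components and the assembled text, so it can be
// copied out with one click or restored into the Prompt Template for rework.
interface SavedPrompt {
  name: string;
  components: Record<"role" | "task" | "goal" | "steps" | "rules", string>;
  assembled: string; // the full prompt text, ready to copy
}

const savedPrompts = new Map<string, SavedPrompt>();

// "Save": add or overwrite a record in the saved-prompts table.
function savePrompt(p: SavedPrompt): void {
  savedPrompts.set(p.name, p);
}

// "Restore Prompt": pull a saved record back into the working template.
function restorePrompt(name: string): SavedPrompt | undefined {
  return savedPrompts.get(name);
}
```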
Prompt Workbench
Overview
The Prompt Workbench is not particularly magical. If you know how to use Coda, you will be delighted to know that it is a standard table with fields that provide the essential elements for constructing viable prompts, testing them, and measuring their performance. It is ideally suited to those moments when you have an idea for a prompt and need to frame some quick tests while making subtle changes.
There are three basic parts to the workbench:
Prompt Template
Prompt Outcome
Saved Prompts
Workbench Examples
The workbench comes with about a dozen example prompts in the Saved Prompts table. These examples demonstrate basic prompt designs and let you start using the streamlined methodology for testing them on the workbench. Use the restore feature and test immediately to get a feel for the workflow.
These examples are not perfect and may be utterly irrelevant to your work. However, this is not about specific prompts; it is about creating good prompts that work well for you. They are a good starting selection for experimenting with changes, and you can even rename them and save them as new prompts.
Create a Prompt
You can create a prompt from scratch with the New Prompt button, or you can import one from a collection of 120+ examples culled from various sources. Imported prompts are parsed into Promptology's prompt components, where you can quickly polish and test them.
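The doc doesn't show the parsing step itself, but the general idea is to split the imported text into the labeled components the workbench expects. Below is a minimal sketch that assumes simple section labels; the real import may instead rely on Coda AI to do the restructuring.

```typescript
// Sketch of splitting an imported prompt into Promptology's components.
// Assumes the text carries simple "Role:", "Task:", etc. labels; the actual
// import step may handle messier input or lean on Coda AI to restructure it.
type Component = "role" | "task" | "goal" | "steps" | "rules";

function parseImportedPrompt(text: string): Partial<Record<Component, string>> {
  const result: Partial<Record<Component, string>> = {};
  const label = /^(role|task|goal|steps|rules):\s*/i;
  let current: Component | null = null;
  for (const line of text.split("\n")) {
    const match = line.match(label);
    if (match) {
      current = match[1].toLowerCase() as Component;
      result[current] = line.replace(label, "");
    } else if (current) {
      // Continuation lines belong to the most recent labeled section.
      result[current] = ((result[current] ?? "") + "\n" + line).trim();
    }
  }
  return result;
}
```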
Streamlined Workflow
Many iterations of the workbench were created and rejected over the past three months. This template version is providing me and my team with enhanced AI prompt development productivity. Even so, it is not perfect. You may see many ways to improve what I created, but that’s the promise of Coda itself; everything is extensible.
The workflow is simple:
Start with a Prompt Template → Add Your Insights → Generate an Outcome
You can be ready to test and hone your prompts by answering just seven questions. The questions represent a template success pattern that has worked well for me. If you answer these questions, there’s a good chance your prompt will work well almost immediately.
I’m also happy to consult on more complex use cases for Coda AI Live. Reach out anytime. I believe there is a horizon of opportunities to build more advanced applications based on this concept.