GPT Prompt Engineering Will Haunt You

This is a Tell, not a Show. Prompt engineering, the practice of shaping prompts to get exactly what you want from a large language model, will eventually bite you. This post exists only so that I can say, six months from now, that I tried to warn you. The core model used by GPT-3 is text-davinci-003. GPT-4 is on the near horizon, and it will be so much more advanced that the prompts you build into your applications today will be fundamentally useless with text-davinci-004.
Prompt engineering is also aptly referred to as "spell casting". You can cast spells on LLMs (large language models) quite easily to achieve favorable results. GPT-3, for example, glosses over intermediate steps to reach conclusions, often producing misleading or entirely wrong answers. It is especially poor at math and at computations involving more than three digits. But when you insert a simple phrase like "step by step" into the prompt, it seems to get a lot smarter; in some cases, this one assertion can increase its effective intelligence fivefold. Yet the prompts engineered for GPT-2 by countless beta testers and practitioners, representing hundreds of thousands of hours of effort, fail almost completely when applied to GPT-3. NLP spell casting is a brittle approach.
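To make the "step by step" trick concrete, here is a minimal sketch using the legacy openai Python SDK (pre-1.0 Completion API). The question, parameter values, and the exact incantation wording are illustrative assumptions, not a prescription:

```python
# pip install openai==0.28  (legacy SDK; the pre-1.0 Completion API shown here)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

question = "A farmer has 17 sheep. All but 9 run away. How many are left?"

# Plain prompt: the model tends to jump straight to an answer,
# often skipping the intermediate reasoning.
plain = openai.Completion.create(
    model="text-davinci-003",
    prompt=question,
    max_tokens=64,
    temperature=0,
)

# "Spell cast" prompt: appending "Let's think step by step" nudges the
# model into writing out its intermediate steps before answering.
spell = openai.Completion.create(
    model="text-davinci-003",
    prompt=question + "\nLet's think step by step.",
    max_tokens=128,
    temperature=0,
)

print("Plain:", plain.choices[0].text.strip())
print("Step-by-step:", spell.choices[0].text.strip())
```

Note that this sketch illustrates the post's own warning: the incantation and its effect are coupled to text-davinci-003, and there is no guarantee the same phrasing buys anything on a later model.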
