I Wince When Prognosticators Advise Us in Matters of AI
It’s not difficult to find someone who believes they understand GPT, its many model variants, and how AGI (artificial general intelligence) will impact our lives, mostly for the worse. In almost every case, a deeper look into the AI skills of these seers will reveal no hands-on experience applying the latest AI platforms to actual business problems.
The thing about ChatGPT (and its siblings like GPT3) is that it lies, prolifically ... it does say an awful lot of things that are untrue, and mixes them together with things that are true, and out comes an authoritative-sounding mixture of the two that nobody should really trust.
Gary clearly has a feel for AI and some good experience in the trenches. But with LLMs (large language models)? I can’t see any evidence he’s actually built anything with OpenAI or taken the time to look beyond the early customer experience that millions of us are [seemingly] drunk on.
I get it; he’s skeptical. We certainly need a degree of skepticism for all technological movements. It helps us defend against bad ideas or poorly constructed systems that may ultimately do more harm than good.
I’m also skeptical, but for very different reasons. I tend to focus on specific ideas. When I picked up Annie Duke’s book (Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts), I was excited to read it because I was 99% certain it must have drawn the world Annie is deeply knowledgeable about into the way AI actually works. It makes bets. My onboard AI was in error. 😉
Ironically, the overlap between Annie’s work and AI runs so deep that it is worthy of an entire book of its own.
Three Years In
I tend to work in software development circles, and AI has been part of my day, every day. It exists as production-ready solutions in my code development, my email compositions, and even in the half-billion highway metrics captured by one of my companies in a single city last year.
I’m three years in, having spent a good deal of 2020 beta testing with OpenAI and, more recently, working with other systems built on GPT products. Nothing bad or ominous is happening. The sky hasn’t fallen. Customers are receiving real-time data calling out HOV lane violations, and my personal productivity in deep work is soaring compared to the previous decade. All because of AI.
BTW, because of AI, rockets have become reusable and even return to the factory automatically, touching down tail-first and landing safely to be prepped for another flight within days. If AI is to be feared, sign me up. I have hope that one of those cost-effective space missions will result in some discoveries that will change my life, perhaps adding decades of health.
Tools like GitHub Copilot (the AI pair programmer that helps me all day long) are far from perfect. I can use it to create a coding disaster if I want to. GPT is like a child; you can easily convince it the sky is actually made of cotton candy. I find no solace in misleading children.
There are also plenty of naysayers who can effortlessly construct prompts that fail. Examples of AGI failure get a lot of air time because it’s good for clicks and attention. The users who thrive on developing faster and writing more and better code are not seeking attention. Historically, success in new and disruptive tech is a silent movement.
If you dig a little, you will learn that enterprises are …
Programmers can indeed look at code and figure out where it fails or produces improper outcomes. It’s time-consuming and tedious work. But this argument assumes AI in coding is one-dimensional, and that is the flaw many critics fall into. They see a prompt and a result, and that’s it. What they miss is how we use AI to understand code, repair code, and debug code.
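As an illustration of that multi-dimensional use, a repair workflow can be sketched in a few lines. Everything here is hypothetical scaffolding: `ask_model` is a stand-in for whatever LLM API you actually call, and the broken snippet is invented. The point is that the prompt combines the code *and* its failure, so the model is debugging, not just generating.

```python
# Sketch of using an LLM as one step in a code-repair loop.
# ask_model is a hypothetical placeholder for a real LLM API call;
# the workflow around it (prompt construction, then review) is the point.

def build_repair_prompt(source: str, error: str) -> str:
    """Combine failing code and its error message into a repair request."""
    return (
        "The following code raises an error.\n"
        f"Code:\n{source}\n"
        f"Error:\n{error}\n"
        "Explain the bug and return a corrected version."
    )

def ask_model(prompt: str) -> str:
    # Placeholder: in practice this would call an LLM API of your choice.
    return "(model response would appear here)"

broken = "def area(r):\n    return 3.14159 * r ** 2\n\nprint(area('2'))"
error = "TypeError: unsupported operand type(s) for ** or pow(): 'str' and 'int'"

prompt = build_repair_prompt(broken, error)
suggestion = ask_model(prompt)
```

The suggestion still gets read by a human before it lands anywhere; the LLM is one tool in the loop, not the loop itself.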
Programmers also have other tools - compilers are like AI; they give us immediate feedback and prevent us from making bad mistakes, even those that may be caused by asking an LLM for guidance without careful thought or review.
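That compiler-style safety net can be automated. Here is a minimal sketch, assuming a Python workflow, that uses the standard-library `ast` module to reject syntactically invalid model output before it ever runs; the two candidate strings are invented for illustration.

```python
import ast

def passes_syntax_check(candidate: str) -> bool:
    """Return True if the candidate code at least parses cleanly.

    A parse check is the cheapest compiler-like gate to put between
    an LLM suggestion and your codebase; it catches garbled output
    before any human review or test run.
    """
    try:
        ast.parse(candidate)
        return True
    except SyntaxError:
        return False

good = "total = sum(range(10))"
bad = "total = sum(range(10)"   # unbalanced parenthesis

print(passes_syntax_check(good), passes_syntax_check(bad))  # prints: True False
```

A parse check proves nothing about correctness, of course; it is just the first and cheapest of several gates (tests, review) the suggestion must pass.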
There’s no shortage of ideas that have completely disrupted a segment with less-than-perfect performance. But that’s the definition of market disruption - it doesn’t have to do the entire job better than the human (in this case). It only needs to do parts of the job better for the disruption to occur.