Keeping up with the breakneck rate of Artificial Intelligence (AI) innovation since the first public release of ChatGPT is practically impossible while holding a steady job. Everyone is struggling to develop a reasonably accurate and stable mental picture of AI, let alone of emerging applications, risks, and opportunities.
Highlights #6: The Inevitability of Multi-modal Battlefield AI
The focus of this knowledge discovery log remains AI applications in intelligence analysis and decision support, all from the perspective of a risk judgment, decision-making, and mitigation professional. (I need to remind myself of this here and there, as fascinating AI-related conversations and applications emerge daily, quickly pulling us in unrelated directions while we experiment with methods and tools that offer measurable value-add.)
The past couple of weeks have convinced me that an LLM-driven battlespace decision-support system is increasingly inevitable if we consider results like
that can move and act freely and purposefully in a simulated (Minecraft) or real environment. It’s an exceptional result that combines distinct tasks and tactical goals within a larger strategic objective. On the heels of
, it seems inevitable that one such system will soon substitute for, or at minimum complement, current military technology.
The most interesting articles and papers this week:
Highlights #5: Transformers, Agents, and Toxicity.
The avalanche of AI-related news and innovations didn't slow down much these past two weeks. From a technical standpoint, the most consequential announcements we saw involve the increasing number and power of ChatGPT/LLMs. If you, like us, are approaching this subject from a risk intelligence perspective, you'll find a certain naiveté about the entire venture (especially around minute 17), which is both disarming and terrifying.
Highlights #3: Prompts, Search vs Chat, 3rd Party Apps,...
"It's the Prompt, Stupid!" And indeed it seems AI-Whisperers and
The two biggest commercial races (at least for the next few months) are search integration and proprietary content 'ingestion'. On the former (B2C-first; how to transition from search behavior to a conversation), Bing has taken a bit of a lead on Google (despite what is probably the least imaginative UX solution of the decade), but others like
Among the more frequent AI risk debates is that of control. On one extreme is the monopoly and dominance argument - one in which the number of state and commercial entities with the know-how and resources to develop and own such technology is increasingly small, and the race winner may achieve unprecedented hegemony, be it economic or geopolitical. On the other end is the concern with unrestricted access to code, algorithms, and virtual machines capable of unimaginable harm. How the pendulum swings seems to change by the week, but if there is anyone with an opinion worth listening to, it is Matei Zaharia, CTO of
seems to suggest small models may be capable of achieving greater accuracy with smaller training sets than anticipated (hello VCs, are you listening?)
Directly related to the above is the enduring debate about the strengths and weaknesses of Large Language Models (LLMs) vs. Symbol Manipulation approaches, and I can't think of a more useful read than Gary Marcus' "Deep Learning Is Hitting a Wall" essay:
In this terrific Making Sense episode, Sam Harris speaks with Stuart Russell and Gary Marcus about recent developments in artificial intelligence, the limitations of Deep Learning, the long-term risks of producing Artificial General Intelligence (AGI), and, of course, the control problem.
Can ChatGPT facilitate structured (risk) intelligence analytical efforts? By its own response, it can.
Yet, by now, we know that might be just an articulate but probabilistic answer. So we're putting some of these methods to the test, and getting very interesting results.
Before stepping into specific methods, what is most impressive is OpenAI's capacity to perform content analysis on the fly:
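As a rough illustration of what framing a structured analytic technique as an LLM task might look like, here is a minimal sketch that composes an Analysis of Competing Hypotheses (ACH) prompt. ACH is a standard structured analytic method in intelligence work; the function name, prompt wording, and example inputs below are illustrative assumptions, not the actual prompts used in these tests.

```python
# Hypothetical sketch: composing a structured-analysis (ACH) prompt for an LLM.
# The helper name and prompt wording are assumptions for illustration only.

def build_ach_prompt(question: str, hypotheses: list[str], evidence: list[str]) -> str:
    """Frame an Analysis of Competing Hypotheses task as a single LLM prompt."""
    lines = [
        "You are assisting with a structured intelligence analysis.",
        f"Question: {question}",
        "Hypotheses:",
    ]
    lines += [f"  H{i}: {h}" for i, h in enumerate(hypotheses, start=1)]
    lines.append("Evidence:")
    lines += [f"  E{i}: {e}" for i, e in enumerate(evidence, start=1)]
    lines.append(
        "For each piece of evidence, rate its consistency with each hypothesis "
        "(consistent / inconsistent / neutral), then identify the hypothesis "
        "with the least inconsistent evidence."
    )
    return "\n".join(lines)

prompt = build_ach_prompt(
    question="Is actor X preparing a supply-chain disruption?",
    hypotheses=["X is preparing a disruption", "X is conducting routine activity"],
    evidence=["Unusual procurement of logistics services", "No change in staffing"],
)
# The resulting string could then be sent to any chat-completion API.
print(prompt)
```

The point of the sketch is that structured techniques like ACH translate naturally into explicit, tabular prompts, which is exactly what makes on-the-fly content analysis by a chat model plausible to evaluate.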