The AI Risk Klog

A risk-centric Knowledge discovery Log of articles, blog posts, podcasts, and observations in the journey to understand and adopt AI-related technology.
Keeping up with the breakneck rate of Artificial Intelligence (AI) innovation since the first public release of ChatGPT is practically impossible while holding a steady job. Everyone is struggling to develop a reasonably accurate and stable mental picture of AI, let alone of emerging applications, risks, and opportunities.
The bias of anything included herein reflects my interest and expertise in the domain of risk intelligence.
Highlights #6: The Inevitability of Multi-modal Battlefield AI
6/14/2023
The focus of this knowledge discovery log remains AI applications in the intelligence analysis and decision-support functions, all from the perspective of a risk judgment and decision-making (JDM) and mitigation professional. (I need to remind myself of this every so often, as fascinating AI-related conversations and applications emerge by the day - quickly pulling us in unrelated directions as we experiment with methods and tools with measurable value-add.)
The past couple of weeks have convinced me that an LLM-driven battlespace decision-support system appears increasingly inevitable if we consider results like agents that can move and act freely and purposefully in a simulated (Minecraft) or real environment. It’s an exceptional result that combines distinct tasks and tactical goals within a larger strategic objective. On the heels of these results and the growing data-ingestion capabilities of current models, it seems inevitable that one such system will soon substitute or, at minimum, complement current military technology.
The most interesting articles and papers this week:
Highlights #5: Transformers, Agents, and Toxicity.
5/14/2023
The avalanche of AI-related news and innovations didn't slow down much these past two weeks. From a technical standpoint, the most consequential announcements involve the increasing number and power of ChatGPT-class LLMs and - along with them - the plugins and agents extending (rather, exploding) these models’ power and potential. Early discussions about agents, toxicity, and multimodal AI (with the obvious AGI implications) are becoming increasingly real, fast.
Our risk outlook holds, as these developments remain within the classes and scenarios that have been articulated for some time. That being said, all our thinking on this subject was (astonishingly) accurately and elegantly summed up in a recent article in The Atlantic.
Here’s how ChatGPT would summarize the article:
Highlights #4: AutoGPT, Jailbreaks, ...
4/18/2023
This past week’s release of AutoGPT was followed - as expected and without delay - by the first (and profoundly deflating) malicious deployment, and accompanied by various jailbreaks. None of this is surprising or beyond the scope of known risk scenarios and, in fact, it confirms the critical issues of innovation speed and the anticipated lack of self-restraint associated with this technology.
Many of us started experimenting either with localized LLM deployment, through the growing crop of open-source projects, or by bridging ChatGPT and proprietary content with LangChain. But if all that is too much for you right now, you can still get a sense of what a ‘chat’ with a specific/recent public or private document feels like through one of the document-chat services, which do a terrific job.
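To make the ‘bridging’ pattern concrete, here is a minimal sketch as it looked in the early-2023 LangChain Python API (module paths and class names have since moved around; the file name and question are illustrative assumptions): load a document, chunk and embed it into a local vector index, and let an LLM answer questions from the retrieved chunks.

```python
# Minimal sketch: 'chat' with a private document via LangChain + OpenAI.
# Assumes the early-2023 LangChain API (langchain, openai, faiss-cpu installed)
# and OPENAI_API_KEY in the environment; file name and query are illustrative.
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Load the proprietary document and split it into overlapping chunks.
docs = TextLoader("my_private_report.txt").load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks and index them in a local vector store.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Wire the index to an LLM so questions are answered from retrieved chunks.
qa = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), retriever=index.as_retriever())
print(qa.run("What are the top three risks identified in this report?"))
```

The design point worth noticing is that the proprietary content never fine-tunes the model; it is retrieved and stuffed into the prompt at question time.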
If the concept of synthetic relationships fascinates you as well, you may want to hear this conversation with Noam Shazeer of Character.AI. If you, like us, are approaching this subject from a risk intelligence perspective, you’ll find a certain naiveté about the entire venture (especially around minute 17), which is both disarming and terrifying at the same time.
Highlights #3: Prompts, Search vs Chat, 3rd Party Apps,...
4/10/2023
"It's the Prompt, Stupid!" And indeed it seems AI-Whisperers and . From overnight marketplaces (, , , and so on) to Youtube classes, private training programs, and free, customizable, no-code apps like (which I found to be a great way to get some practice with structured prompting.)
One more Washington Post story about this:
The two biggest commercial races (at least for the next few months) are search integration and proprietary content 'ingestion'. On the former (B2C-first; how to transition from search behavior to a conversation), Bing has taken a bit of a lead on Google (despite what is probably the least imaginative UX solution of the decade), but others are trying to reinvent the search box, and they are worth checking out. On the latter front, the ChatGPT API and plugins, along with the surrounding tooling, are likely to carry the widest and most immediate business benefits.
Highlights #2: Control, LLMs Limits, New/Small Models, ...
4/3/2023
Among the more frequent AI risk debates is that of control. On one extreme is the monopoly and dominance argument - one in which the number of state and commercial entities with the know-how and resources to develop and own such technology is increasingly small, and the race winner may achieve unprecedented hegemony, be it economic or geopolitical. On the other end is the concern with unrestricted access to code, algorithms, and virtual machines capable of unimaginable harm. How the pendulum swings seems to change by the week, but if there is anyone with an opinion worth listening to, it is Matei Zaharia, CTO of Databricks, who in a recent conversation seems to suggest small models may be capable of achieving greater accuracy with smaller training sets than anticipated (hello VCs, are you listening?).
Directly related to the above is the enduring debate about the strengths and weaknesses of Large Language Models (LLMs) vs symbol-manipulation approaches, and I can't think of a more useful read than Gary Marcus' "Deep Learning Is Hitting a Wall" essay:
In this terrific Making Sense episode, Sam Harris speaks with Stuart Russell and Gary Marcus about recent developments in artificial intelligence, the limitations of deep learning, the long-term risks of producing Artificial General Intelligence (AGI), and, of course, the control problem.
Early Analytical Experiments
4/1/2023
Can ChatGPT facilitate structured (risk) intelligence analytical efforts? By its own response, it can. Yet, by now, we know that might be just an articulate and probabilistic answer. So we're putting some of these claims to the test, and getting very interesting results. Before stepping into specific methods, what is most impressive is the model's capacity to perform content analysis on the fly (see the sketch after this list):
Entity extraction (people, organizations, etc.)
Relationship mapping
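As an illustration of the first two tasks above, here is a minimal sketch using the 2023-era openai Python package. The model name, sample text, and the requested JSON shape are our own illustrative assumptions, not an OpenAI feature:

```python
# Sketch: on-the-fly entity extraction and relationship mapping with a
# 2023-era OpenAI chat model (openai < 1.0, OPENAI_API_KEY set). The JSON
# shape we ask for is an illustrative convention, not enforced by the API.
import openai

TEXT = "Acme Corp hired Jane Doe, formerly of Globex, to lead its AI risk team."

prompt = (
    "From the text below, extract all people and organizations, then map the "
    "relationships between them. Reply with JSON only, using the keys "
    "'entities' and 'relationships'.\n\nText: " + TEXT
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

# Print the raw reply; production use would validate and parse the JSON.
print(response["choices"][0]["message"]["content"])
```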
Yellow highlight = added/updated this week.
Selected News et al
Gizmodo · 5/25/2023
arXiv · 5/25/2023
Washington Post · 5/22/2023
Washington Post · 5/22/2023
5/12/2023
arXiv · 5/11/2023
AI Pathways · 5/11/2023
Humane AI · 5/11/2023
Wired · 5/10/2023
BMJ Global Health · 5/9/2023
beyond2060 · 5/9/2023
Politico · 5/9/2023
The Verge · 4/29/2023
The Atlantic · 4/28/2023
The Lawfare Podcast · 4/20/2023
Simon Willison’s Blog · 4/14/2023
The Ezra Klein Show · 4/11/2023
NY Times · 4/8/2023
MIT Technology Review · 4/3/2023
Pinecone · 4/1/2023
Center for Humane Technology · 3/24/2023
Future of Life Institute · 3/22/2023
Making Sense Podcast · 3/7/2023
NIST Artificial Intelligence Risk Management Framework · 1/15/2023
IARPA · 1/1/2023
UN Habitat · 9/1/2022
Brave New Planet Podcast · 9/12/2020