The thing about ChatGPT (and its siblings like GPT-3) is that it lies, prolifically ... it says an awful lot of things that are untrue, mixes them together with things that are true, and out comes an authoritative-sounding blend of the two that nobody should really trust.
If I can get the gist of a C++ class written by an engineer who left the firm a year ago, and understand in 30 seconds what would otherwise take me 30 minutes, I'd call that a win.
Integrating that basic understanding into the code base in one second is another big win.
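To make that concrete, here is a minimal sketch of one way that workflow might look, using the OpenAI Python client. The file name, model name, and prompt wording are all my own illustrative assumptions, not anything from this post, and, given the warning above, the summary deserves a skeptical human read before it goes anywhere near the repository.

```python
"""Sketch: get the gist of a legacy C++ class with an LLM, then fold the
summary back into the source as a comment. Assumes the OpenAI Python client
(pip install openai) and OPENAI_API_KEY in the environment."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_class(path: str) -> str:
    """Ask the model for a two-sentence summary of the class in `path`."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable chat model works
        messages=[{
            "role": "user",
            "content": "Summarize this C++ class in two sentences:\n\n" + source,
        }],
    )
    return resp.choices[0].message.content


def prepend_summary_comment(path: str, summary: str) -> None:
    """Write the (human-reviewed!) summary back into the file as a // banner."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    banner = "\n".join("// " + line for line in summary.splitlines())
    with open(path, "w", encoding="utf-8") as f:
        f.write(banner + "\n" + source)


summary = summarize_class("LegacyOrderBook.cpp")  # hypothetical file name
print(summary)  # read it skeptically first ...
# prepend_summary_comment("LegacyOrderBook.cpp", summary)  # ... then commit it
```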
If I can get two quick hypotheses about how that same class might fail in 10 more seconds, that's a giant leap, because programmers are terrible at hypothesizing all the ways code could fail.
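The failure-mode ask is the same call with a different prompt. Again just a sketch under the same assumptions (OpenAI Python client, illustrative model and file names); the output is a list of hypotheses, worth turning into tests and review questions rather than trusting outright.

```python
"""Sketch: ask the model for hypothetical failure modes of the same class.
Same assumptions as above: OpenAI client installed, OPENAI_API_KEY set."""
from openai import OpenAI

client = OpenAI()


def hypothesize_failures(path: str, n: int = 2) -> str:
    """Ask for n hypothetical ways the class in `path` could fail."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{
            "role": "user",
            "content": (
                f"List {n} plausible, specific ways this C++ class could "
                "fail at runtime (races, overflows, bad inputs):\n\n" + source
            ),
        }],
    )
    return resp.choices[0].message.content


print(hypothesize_failures("LegacyOrderBook.cpp"))  # hypothetical file
# Treat these as leads for tests and code review, not as verified bugs.
```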
Large Language Models (the core tech in ChatGPT) are already being used to help computer programmers (who know how to debug errors when they see them) ...