A Judgement Amplifier
At a recent Austin CTO Club meeting, someone remarked that AI tool use amplifies the quality of technology development team members: good developers become better, and bad developers become worse. How can this be so?
AI Tool Effects
If an AI toolkit is worse at programming than its user, one might expect it to lower that developer's output quality, degrading their performance. If the tool is better, one might similarly expect it to raise quality. It also seems natural that training on a wide variety of coding inputs would lead AI to produce average-quality code, neither bottom- nor top-notch. In light of this, should we expect regression toward the mean in team member performance?
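To make that worry concrete, here is a toy sketch. It assumes, purely for illustration, that a developer's output quality is a simple weighted blend of their own skill and the AI's roughly average-quality output; the skill scores, the AI quality, and the blend weight are invented numbers, not measurements.

```python
# Toy model of regression toward the mean (illustrative numbers only).
# Assumption: output quality is a weighted blend of the developer's own
# skill and the AI's roughly average-quality output.

def blended_quality(developer_skill: float, ai_quality: float = 0.5,
                    ai_share: float = 0.4) -> float:
    """Blend the developer's own quality with the AI's output quality."""
    return (1 - ai_share) * developer_skill + ai_share * ai_quality

developers = {"strong": 0.9, "average": 0.5, "weak": 0.2}

for name, skill in developers.items():
    print(f"{name}: {skill:.2f} -> {blended_quality(skill):.2f}")

# strong:  0.90 -> 0.74  (pulled down toward the mean)
# average: 0.50 -> 0.50  (unchanged)
# weak:    0.20 -> 0.32  (pulled up toward the mean)
```

Under that assumption everyone converges toward the middle, which is the opposite of the amplification reported at the meeting.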
It is possible that the AI is better at programming than all of the developers, but that seems extremely unlikely. In my experience the tools can readily create a large volume of code, but the quality is mediocre. Even supposing the tools did improve code quality for all developers, how would that explain bad developers becoming worse?
The quality of prompts has a strong effect on the quality of the outputs from the current generation of AI tools, and a natural ability to prompt well may correlate with developer quality. Regardless, I would expect training in prompt engineering, plus the regression toward the mean discussed above, to decrease the differences in developer performance. I suspect there was some such training in the case at hand; testing whether it narrows the performance gap would be an obvious experiment.
Judgement
My guess is that the quality of judgement in evaluating and applying AI outputs explains the divergence in outcome quality. One factor separating good and bad developers is their relative ability to make sound decisions about architecture, code factoring, choice of libraries, testing rigor, code review, and so on. If the AI tools take care of the basic blocking and tackling of the development process, then a larger fraction of the remaining human role is applying judgement to these decisions. The quality of those judgements therefore has a greater effect.
Can the AI tools decrease the need for such judgement rather than merely displace it? That would require running out of higher-level decisions to pick up after delegating lower-level decisions to the AI tools. I believe we are far from a world in which that condition obtains.
Decision Speed and Quality
If AI tools do amplify judgement, then decision speed and quality increase in value. Good deciders become more effective in their roles, and bad deciders less so. Decisions will tend to have broader scope and effect. Faster and better decisions keep the AI working and pointed in the right direction.
All of this suggests two simple things to try:
- Test for role-related decision speed and quality in your hiring process.
- Work to improve decision speed and quality in your employees.
If these efforts succeed, they will put people on the plus side of the ledger as AI tool users.