How to Help Your Engineering Team Adopt AI Without Losing the Plot
We’ve built teams that move fast, write solid code, and take ownership. Now LLMs have joined the team, and like any tool, they can boost productivity or create chaos, depending on how they’re used.
So how do you help your team adopt AI tools like Cursor, GitHub Copilot, or Claude Code without becoming overly reliant, or worse, blind to their limits?
Let’s borrow a mental model from how senior engineers actually use LLMs day-to-day. Then we’ll talk about how you can coach your team through each stage.
The Three Modes of LLM Use
One of the best frameworks I’ve seen comes from a senior engineer’s reflection on how they personally use LLMs in development. It boils down to three core scenarios:
1. “I know what I know”: The Speed Boost
This is the sweet spot. The developer understands the domain and wants to move faster. They might use an LLM to:
Generate a TypeScript REST endpoint with input validation
Scaffold unit tests for a known module
Refactor existing code with better patterns
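To make the first bullet concrete, here's a minimal sketch of the kind of input-validated handler logic an LLM might scaffold. All names (`CreateUserBody`, `validateCreateUser`) are hypothetical, and the point is what the engineer brings to it: they already know what "good" validation looks like, so they can review this in seconds.

```typescript
// Hypothetical request body for a "create user" endpoint.
interface CreateUserBody {
  email: string;
  age: number;
}

// Validate an untyped payload the way LLM-scaffolded code often does:
// explicit checks with clear error messages, no hidden framework magic.
function validateCreateUser(body: unknown): CreateUserBody {
  if (typeof body !== "object" || body === null) {
    throw new Error("body must be an object");
  }
  const { email, age } = body as Record<string, unknown>;
  if (typeof email !== "string" || !email.includes("@")) {
    throw new Error("email must be a valid address");
  }
  if (typeof age !== "number" || !Number.isInteger(age) || age < 0) {
    throw new Error("age must be a non-negative integer");
  }
  return { email, age };
}
```

An engineer in "I know what I know" mode can immediately spot whether a generated validator like this misses a case (say, rejecting empty strings), because the shape of correct code is already in their head.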
What’s happening here? The engineer is using AI as a productivity multiplier. They’re still in charge, and they know what “good” looks like.
As a manager, encourage this by:
Asking team leads to document “safe” use cases where LLMs save time
Encouraging pair programming sessions with the AI as a junior partner
Promoting lightweight reviews of AI-generated code to catch overconfidence
2. “I know what I don’t know”: The Learning Assistant
In this case, your developer is venturing into new territory: Rust, Terraform, GraphQL subscriptions, etc. They ask the LLM to write a simple working example, then pick it apart to understand how it works.
They’re learning by doing with the AI as a coach.
What’s happening here? Curiosity meets context. The developer asks good questions, then validates the AI's response through testing, docs, and discussion.
As a manager, support this by:
Encouraging use of LLMs as part of onboarding into new stacks
Creating a culture of “question everything”, including AI output
Pairing junior engineers with seniors to review AI-generated work together
3. “I don’t know what I don’t know”: The Danger Zone
This is where things can go sideways.
A developer is assigned something far outside their depth, such as writing a Solidity smart contract. They prompt the LLM, copy-paste what looks plausible, and move on.
The problem? They don’t have the intuition to spot hallucinations, omissions, or security gaps. And neither does the LLM.
What’s happening here? The engineer is over-trusting a tool they don’t fully understand, in a domain they haven’t mastered. It's a recipe for silent failure.
As a manager, prevent this by:
Setting clear policies on when AI-generated code must be reviewed by domain experts
Using code review tools or CI policies to gate changes from high-risk domains
Running postmortems on AI-driven bugs to understand where oversight broke down
Turning This Into Team Practice
Here’s how to use this mental model with your team in practical ways:
1. Codify the Three Modes
Print it. Post it in Slack. Review it in standups. Help your team ask:
“Am I in the ‘know-know’, ‘know-don’t-know’, or ‘don’t-know-I-don’t-know’ zone?”
This metacognitive step builds judgment, which matters more than raw output.
2. Redefine “Code Review”
AI changes the game. Instead of just looking at diffs, reviewers now need to ask:
“Was this code written by a human or an LLM?”
“Does the author understand what this code does?”
“Did we just encode a subtle hallucination?”
Encourage your team to comment not just on the “what” but the “why.”
3. Track LLM Usage Like Any Other Tool
Just as you track adoption of testing frameworks or code coverage, begin tracking:
What AI tools are being used?
For what types of tasks?
Where do they help and where do they hurt?
Treat LLMs as a new engineering competency. Reward developers who build internal guides, highlight failure modes, or share tips with the team.
4. Offer Guardrails, Not Just Green Lights
Create internal best practices such as:
“Don’t ship AI-generated code in X domain without review”
“Always add a test when using LLMs for new logic”
“Paste the prompt in the PR if code came from an LLM”
These micro-policies create clarity while preserving speed.
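Micro-policies like these can even be enforced mechanically. Below is a hedged sketch of a CI-style check, assuming a hypothetical `PullRequest` shape, made-up labels (`llm-assisted`, `domain-expert-reviewed`), and an illustrative list of high-risk paths; it is a template for the idea, not a drop-in tool.

```typescript
// Hypothetical PR metadata, as a CI step might receive it from the host's API.
interface PullRequest {
  description: string;
  labels: string[];
  touchedPaths: string[];
}

// Illustrative high-risk prefixes; each team would tune its own list.
const HIGH_RISK_PREFIXES = ["contracts/", "auth/", "billing/"];

// Enforce two micro-policies: LLM-assisted changes must disclose the prompt,
// and changes to high-risk paths must carry a domain-expert review label.
function checkMicroPolicies(pr: PullRequest): string[] {
  const violations: string[] = [];

  const usedLLM = pr.labels.includes("llm-assisted");
  if (usedLLM && !pr.description.includes("Prompt:")) {
    violations.push("LLM-assisted PR is missing a 'Prompt:' section");
  }

  const touchesHighRisk = pr.touchedPaths.some((path) =>
    HIGH_RISK_PREFIXES.some((prefix) => path.startsWith(prefix))
  );
  if (touchesHighRisk && !pr.labels.includes("domain-expert-reviewed")) {
    violations.push("High-risk paths changed without domain-expert review");
  }

  return violations;
}
```

A check like this keeps the guardrail cheap: most PRs pass untouched, and the ones that trip it get exactly the human attention the policy was written to guarantee.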
Your Role as the Manager
You’re not here to write the prompts. You’re here to coach your team to be:
Skeptical but curious
Fast but thoughtful
Experimental but responsible
And you need to recognize that using LLMs well is a senior skill, not a shortcut. When done right, it compounds expertise; it doesn’t replace it.


