I told my engineers to "just use AI" once. Maybe more than once. It didn't work.
I announced it in meetings, I posted links in Slack, I put it in the team wiki. What happened? Some people nodded. Some people were quiet. And one person started using ChatGPT for everything - like, everything - and broke production twice in one month.
I've been through this with three teams now. Different sizes, different tech stacks. And I learned that getting a team to actually use AI is more of a management problem than a technology problem. The tools are fine. The tools are actually really good. That's not where the problem is.
People resist for very different reasons
Some engineers are scared. They won't say it like that. They say "I don't trust the output" or "it writes bad code." Sometimes that's true. But sometimes it's because they've spent years building skills and now something feels like it might make those skills less important. That's a real feeling. You have to take it seriously.
The senior engineers are usually skeptical in a different way. They've seen microservices, they've seen blockchain, they've seen a lot of things that were supposed to change everything. So they want proof. They want to see it actually work before they believe it. Honestly, that's a good attitude. You can't just tell these people AI is amazing. You have to show them.
And then there's the third type - the people who adopt immediately and trust everything the AI gives them. They ship a lot of code fast. But they don't review it carefully. The bugs aren't obvious; they hide in output that looks good on the surface. These people worry me more than the skeptics.
You can't treat all three the same way.
Mandating tools makes things worse
I've seen companies say "everyone will use Copilot starting next quarter." What really happens is that people install it, maybe use it once or twice, and then quietly ignore it. Or they use it only to say they used it. Or they start to resent it because it feels like the company is watching how they work.
You can mandate a security policy. You can't mandate how people think.
AI tools only help if you actually engage with them - you have to learn when to use them, how to write prompts, when to trust the output, and when to double-check. That kind of judgment takes time, and it takes a safe environment. It doesn't develop when people feel pressured.
And if your team is already a little worried about job security, telling them to use AI faster is the worst possible message. It doesn't calm them down. It confirms what they were already afraid of.
How I start the conversation now
I usually say something like: "I want us to try these tools because I want to protect the team. Teams that use AI well move faster. I want us to be that team, not the team falling behind."
That framing helps. It makes AI feel like something that protects people's careers, not threatens them. And I mean it - I really do believe that engineers who learn to use these tools well are more valuable, not less.
I also make sure this conversation is completely separate from any performance talk. No connection to velocity, no connection to promotions. Not yet. If you tie AI adoption to metrics too early, people stop being honest about how it's going. They just game the metric.
Start with the boring tasks
My rule is: low risk, real time savings. Find the tasks that are tedious and where a bad AI output is easy to catch.
Writing tests is the best place I've found to start. Especially edge cases. It's the kind of work that takes time and that most engineers don't love doing. AI is really good at it. And if the test is wrong, you see it right away. So people start to see actual value without much risk.
Documentation is another easy one. PR descriptions, commit messages, adding comments to a complicated function - none of this is exciting work, and AI does it well enough that the time savings are obvious. Engineers notice.
After a few weeks of that, you can talk about using AI for code review, for debugging, for thinking through architecture decisions. But don't start there. Start boring.
Use the tools yourself, visibly
This is the most important thing I've learned.
If I'm asking my team to try AI tools, I have to actually use them. Not just say I use them. I share my Cursor sessions in demos. I say in meetings "I asked Claude about this tradeoff and here's what it said." I paste an AI-generated summary into a doc and I say where it came from.
When engineers see their manager doing this - really doing it, not just talking about it - something changes. It makes it normal. It also gives them permission to use AI without feeling like they're cutting corners or cheating somehow. That feeling is more common than you'd think.
Setting norms as a team
One thing that helped a lot was spending one team meeting just talking about how we want to handle AI in code review. Not rules, just norms. What do we expect from each other?
We ended up with something pretty simple: AI-generated code gets reviewed the same as any other code. If a big part of a PR came from AI, mention it - not because anyone will judge you for it, but because it helps the reviewer understand the context.
That conversation also forced us to talk about what "review" actually means now. You can't just skim the code and see if it looks right. You have to actually think about whether the logic is correct. We started being more explicit about this, which is probably a good thing anyway.
Let the fast people run, and don't push the slow ones
In every team there are one or two people who jump in early. Give them space. Let them experiment. Ask them to share what they find. They become better advocates for AI than you are, because their teammates trust them more than they trust the manager.
For the people who are slower to adopt - especially the skeptical senior engineers - just leave them alone. Give them time. One of the best engineers I've worked with took four months before he started using AI tools. When he finally did, he was much more careful and precise about it than the early adopters. He knew exactly when it helped and when it didn't. Late adopters often end up being the best practitioners.
What happened on my team
We started about a year ago. Test generation first. Then documentation. After six weeks maybe half the team was using Cursor or Copilot regularly. The people who hadn't adopted yet were mostly the senior engineers, and I didn't push them.
Around month three, one of those senior engineers came to me and said he'd been experimenting quietly for a few weeks and had some thoughts. He had very specific ideas about where AI was actually useful and where it wasn't. That conversation became a team discussion, and it shaped how we think about this stuff now.
We never mandated anything. We never set targets around it. We just made it easy to try, talked openly about what worked and what didn't, and let people figure out their own way.
It's slower than forcing it. But it's the only way I've seen that actually works.