
AI Didn’t Make Me Code Faster. It Made It Easier to Help.

Screenshot of Cursor Visual Editor

For a long time, I assumed my days of contributing code were behind me. Not because I lost interest, but because of how modern software teams work. Over the last few weeks, after I adopted AI-assisted development toolchains, that assumption changed.

As a software director, my time is fragmented, and committing cleanly to sprint timelines is difficult. Jumping in halfway comes with risk. If I can’t follow work through end to end, I risk becoming a blocker, which is the last thing a team needs.

I shipped 20+ pull requests across multiple repositories, not because I suddenly had more time, but because the shape of contribution changed.

The impact of AI-assisted toolchains

AI assistance didn’t make me code faster; it reduced coordination and context costs so I could contribute without becoming a blocker. Tools like Cursor and convention-driven stacks enable quick, localized UI/UX polish and small fixes, while agents handle mechanical refactors that I guide and verify. This shrinks disruption windows and makes refactoring more feasible. The result is more helpful PRs that improve quality without derailing sprints or adding mental load.

Contributing without becoming a blocker

The challenge was never writing code. It was the coordination cost that came with it.

Meaningful contribution usually requires building a mental model of a large codebase, understanding local conventions, predicting where changes might break, and staying involved long enough to see work through. That level of immersion doesn’t pair well with a leadership role.

With current AI-assisted tools, I can take on work that is small, contained, and not tied to sprint commitments. Work that can be paused and resumed without disruption.

This includes minor UI and UX improvements, small fixes that unblock polish, and refactoring work with no hard deadline. I can jump in when I have time, make real progress, and step back out without slowing the team down.

UI and UX polish as a high-leverage entry point

The area where this has been most effective isn’t core business logic. It’s UI and UX polish.

Button spacing, alignment issues, font size and weight, and responsive layout glitches are not hard problems, but they add friction. They also tend to sit at the bottom of the backlog while teams focus on shipping critical features.

With tools like Cursor, I can inspect the UI directly, adjust styles visually, or prompt simple changes such as “vertically center align this,” “make the font size smaller on this viewport,” or “change this to the primary color.”

I don’t need to reconstruct the entire codebase in my head. I can focus on making the change and moving things forward.

This works best when the stack is easy to reason about. Convention-driven systems like Tailwind and Shadcn lower the barrier further because patterns are clear and well documented. The result is that a lot of small UX improvements actually get done, not because they suddenly became more important, but because the cost of doing them dropped enough that they no longer disrupt the team.
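This kind of change is easier to see than to describe. The sketch below is illustrative only: the component name and the exact utility classes are hypothetical, but it shows the shape of a prompt like "vertically center align this" landing as a one-class Tailwind change that is easy to review and hard to break anything with.

```tsx
// Hypothetical sketch of a localized UI-polish change, not code from the article.
// Before: icon and label drift out of vertical alignment:
//   <button className="flex gap-2 px-4 py-2">...</button>
// After a prompt like "vertically center align this", the diff is a single
// utility class (items-center) plus convention-driven styling:
export function SaveButton() {
  return (
    <button className="flex items-center gap-2 rounded-md bg-primary px-4 py-2 text-sm text-primary-foreground">
      Save changes
    </button>
  );
}
```

Because Tailwind utilities map one-to-one onto visual intent, the reviewer can verify the change from the diff alone, without rebuilding a mental model of the surrounding code.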

Refactoring without derailing the team

Refactoring is work most teams agree is important, but rarely feel good about starting.

It’s difficult to estimate, it tends to surface unexpected issues, and from the outside it can look like nothing changed. Upgrading frameworks, swapping icon sets, or cleaning up patterns that quietly became anti-patterns all fall into this category.

Historically, this kind of work carried real risk. Touch too many files and you trigger cascading build errors. Start mid-stream and you step on active work. Pause too long and the codebase keeps drifting.

With AI-assisted coding, the nature of refactoring doesn’t change, but the execution does.

A lot of the mechanical work can now be handled by agents. The human role shifts to guiding the process, fixing what breaks, and verifying behavior. Because the window of disruption is shorter, the trade-offs change. Refactoring no longer has to be a multi-week event that stalls momentum.
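To make "mechanical work handled by agents" concrete, here is a minimal sketch of the icon-set swap mentioned above, written as a small codemod. The package names (`old-icon-lib`, `new-icon-lib`) and the icon renames are assumptions for illustration; the point is that the rewrite is deterministic and repetitive, so an agent can apply it across many files while a human reviews the diffs and verifies behavior.

```typescript
// Hypothetical codemod: rewrite icon imports from an old package to a new
// one, renaming icons along the way. Package and icon names are illustrative.

// Map old icon names to their replacements in the new set.
const ICON_RENAMES: Record<string, string> = {
  TrashIcon: "Trash2",
  GearIcon: "Settings",
};

// Rewrite one source file's import lines from the old icon package
// to the new one, applying the rename table to each imported name.
function migrateIconImports(source: string): string {
  return source.replace(
    /import\s*\{([^}]+)\}\s*from\s*["']old-icon-lib["'];?/g,
    (_match, names: string) => {
      const renamed = names
        .split(",")
        .map((n) => n.trim())
        .filter((n) => n.length > 0)
        .map((n) => ICON_RENAMES[n] ?? n);
      return `import { ${renamed.join(", ")} } from "new-icon-lib";`;
    }
  );
}

// Example: one file before and after the mechanical pass.
const before = `import { TrashIcon, GearIcon } from "old-icon-lib";`;
console.log(migrateIconImports(before));
// → import { Trash2, Settings } from "new-icon-lib";
```

The human judgment lives outside this function: deciding the rename table, spot-checking files where the regex does not match, and running the test suite afterward.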

It still requires judgment and care. It’s just more manageable now, which changes how often it actually happens.

What actually changed

This isn’t about speed. It’s about reducing context switching and mental load.

AI didn’t make me code more. It made it easier to contribute in ways that help the team without getting in the way. That’s a meaningful shift, even if it’s a quiet one.

Q&A

If AI didn’t make you code faster, what exactly changed?

The coordination and context costs dropped. Instead of building a full mental model of large codebases and staying embedded through a sprint, AI-assisted tools let me make small, localized changes without becoming a blocker. That reduced context switching and mental load, so I could contribute helpfully in short windows of time.

What kinds of work are best suited to this AI-assisted approach?

Small, contained tasks that aren’t tied to sprint commitments—especially UI/UX polish (spacing, alignment, font sizing, responsive glitches), minor fixes that unblock polish, and refactoring with no hard deadline. These can be paused and resumed without disrupting the team or derailing active feature work.

How do tools like Cursor and convention-driven stacks (Tailwind, Shadcn) help?

Cursor lets me inspect the UI directly and apply targeted changes with simple prompts like “vertically center align this,” “make the font size smaller on this viewport,” or “switch this to the primary color.” Convention-driven systems such as Tailwind and Shadcn make patterns clear and predictable, so I don’t need to reconstruct the entire codebase to make safe, local improvements. The lowered barrier means many small UX fixes actually get done.

What changed about refactoring with AI in the loop?

The nature of refactoring stays the same, but execution improves. Agents handle mechanical, repetitive changes while I guide the process, fix what breaks, and verify behavior. Because the disruption window is shorter, refactors don’t have to become multi-week efforts that stall momentum. It still requires judgment and care—just with a more manageable risk profile.

How do you avoid becoming a blocker when contributing as a leader with fragmented time?

Pick work that’s small, localized, and decoupled from sprint timelines; favor tasks that can be paused and resumed; and keep changes focused so they don’t collide with active streams. AI assistance helps contain scope, and I stay in the loop just long enough to guide and verify, then step out—resulting in helpful PRs that improve quality without slowing the team.

Elisha Terada

Technical Innovation Director

As Technical Innovation Director at Fresh Consulting and co-founder of Brancher.ai (150k+ users), Elisha combines over 14 years of experience in software product development with a passion for emerging technologies. He has helped businesses create impactful digital products and guided them through the strategic adoption of tech innovations like generative AI, no-code solutions, and rapid prototyping.

Elisha’s expertise extends to working with startups, entrepreneurs, corporate teams, and independent creators. Known for his hands-on approach, he has participated in and won hackathons, including the Ben’s Bites AI Hackathon, with the goal of democratizing access to AI through no-code solutions. As an experienced solution architect and innovation director, he offers clients straightforward, actionable insights that drive growth and competitive advantage.