Context Switching in the Age of AI
I have been messing around with agentic coding helpers for a while now, and I have found a way of working that suits me (and yes, it is lazy). Along the way I have also found ways that do not work for me, ways that a lot of people are promising are the future. Fire off a job to Claude Code, pop over to Slack to answer a message, maybe skim a newsletter, reply to an email, and by then the agent says "here's your refactor". All of that while another agent combs through your Jira tickets, summarising them to tell you the most important thing to do next.
The problem I have is that by the time the response arrives, I have totally forgotten what problem I asked my agent to solve. There is a term for this mental whiplash: context switching. Researchers reckon it takes about 23 minutes to regain focus after even a brief interruption, and interrupted tasks end up taking twice as long with twice as many mistakes. That is a big price to pay for letting an AI finish your tests while you check LinkedIn. Yet in this new AI world, the juggling act is considered a virtue. So is all this context switching getting worse? Or can AI actually help us keep our heads together?
For decades, success was tied to concentration. Mihaly Csikszentmihalyi's concept of the flow state emphasises deep, uninterrupted focus. AI challenges this model. AI agents work in parallel, spawning other agents, iterating and checking results while the human oversees progress. A Forbes analysis argues that the new bottleneck is not attention but the ability to oversee many AI agents; the skill is rapid context switching. Personally, I found that using multiple AI agents alongside normal tools, bouncing between tasks like reading email, was exhausting.
There is no doubt that AI helpers can take the grunt work away, but they also tempt you to do more. While one agent writes code or runs tests, it is very easy to flip over to another ticket or open a terminal just to "keep busy". That "just one more tab" instinct leads to "K + 1" thinking: always one more thing, because the bot is busy. Each extra task means your brain has to reload another mental model when you come back.
Waiting for slower models makes this worse. There is a mental overhead in remembering what you were doing when the AI finally responds. Even brief switches break attention and increase errors. The faster the model, the less context switching, and the easier it is to stay focused on the problem.
Multi‑agent workflows compound the problem. A LeadDev report notes that agentic coding can feel like a "slot machine": rapid feedback loops and multiple agents encourage developers to work through nights and weekends. The piece quotes a study of over 500 developers that found a 19.6% rise in "out‑of‑hours" commits among those using AI tools. Developers report difficulty maintaining a mental model of their projects because agents produce code so quickly; they end up context‑switching between multiple agent‑driven tasks and risk burnout.
If you are not careful, your desktop becomes a jumble of tabs, chats and terminals. Every time you hop from your IDE to a command‑line agent to a browser, you dump one mental model and load up another. Slow command‑line agents make it worse: you ask them to refactor a module, your eye wanders to email or Slack, and by the time they are done you have forgotten where you were. Each of those little detours chips away at your focus.
But my current model works for me, even if it does not save me lots of time. I feel better just capturing ideas as issues as they come to me. I refine and add depth to those issues (grooming, perhaps) with some AI assistance, or just some code scanning. Then I assign an agent to the tasks one at a time, or if I feel like it I do the work myself. And when it is ready for a PR, I get AI to review my code while I review the code of the AI.
