How a medical device company with a hundred projects, a tangled permissions system, and a CTO who fought every production fire asked "are we using our tools correctly?" — when the real answer was that their tools weren't the problem.
What They Were Experiencing
A medical device company asked us to review their use of Azure DevOps. They built software for cardiac monitoring — the kind of systems where doctors rely on real-time data to make diagnoses. The company had grown significantly over the previous five years through a merger of two smaller companies, and the combined organization now had a substantial IT headcount spread across multiple product lines and locations.
They were profitable. They had paying customers. Their competitors weren't eating their lunch. By any external measure, things were working.
But the CTO had a nagging feeling. He told us: "For the size of our organization, our capacity should be better than it is." They had more people than ever, but it didn't feel like they were getting proportionally more done. Everything seemed to take longer than it should. Tracking what everyone was working on consumed enormous effort. Dependencies between teams created constant bottlenecks. Production outages came in clusters and pulled senior people — including the CTO himself — into firefighting mode.
They thought the problem might be their tooling. The question they kept asking, in one form or another, was: are we using Azure DevOps correctly?
They were asking the wrong question.
What They Thought Was Wrong
Leadership believed the core problems were:
- They weren't getting full value from Azure DevOps
- Their work item tracking and reporting needed improvement
- Their teams needed better ways to manage permissions and dashboards
- Regulatory compliance requirements were adding friction to their process
The development and program management teams believed:
- There was too much work in progress and too many shifting priorities
- Cross-team dependencies constantly delayed projects
- The right people were never available when needed
- Tracking work took almost as much effort as doing the work
Everyone was right about their piece of the puzzle. But nobody could see how all the pieces connected — or that the tooling question was a distraction from a much deeper organizational problem.
What I Actually Found
We spent the first half of November conducting interviews via video call with program managers, engineering managers, the quality and regulatory lead, the test team manager, and the CTO. We also explored the company's Azure DevOps environment in depth.
There was one significant constraint on the assessment: we asked to interview software developers and testers, but those requests were not granted. We spoke only with management. As a result, our findings focused on the issues raised by leadership — which, as I'll explain, was itself a finding.
Finding #1: "Are We Using the Tools Right?" Is the Wrong Question
Every person we interviewed circled back to the same concern: "Are we using Azure DevOps correctly?" They wanted to know if their work item configurations were right. If their process template customizations were optimal. If there was a better way to organize their queries and dashboards. The question was almost an obsession.
Here's the thing: Azure DevOps is flexible enough to work however you choose to work. If you want plan-driven project management, it can do that. If you want Scrum or Kanban, it can do that too. Whether you're using it "correctly" depends entirely on what you're trying to accomplish — and that's the question nobody was asking.
The real question wasn't "are we using our tools correctly?" It was "are we delivering business value as efficiently as we could be?" And the honest answer to that question was no — not because the tools were misconfigured, but because the human processes underneath the tools were working against them.
The fixation on tools was a form of displacement. It's easier to ask "is our project management software set up right?" than to ask "is our organization structured in a way that allows work to flow?" The first question has a technical answer. The second question requires confronting how people work, how teams are organized, and how decisions get made. That's harder.
Finding #2: A Hundred Projects and Everybody Running at 99.9%
During interviews, there was a consistent sense that everyone was already at maximum capacity. Not "busy." Not "working hard." At capacity — redlined, with zero slack for anything unexpected.
One engineering manager mentioned he had approximately twenty small projects to worry about on top of his larger initiatives. The program managers were spending nearly all their time tracking work rather than thinking strategically about products. The test team manager described projects as "very fluid" — some lasting a year, some lasting weeks — with constantly shifting team compositions.
The sheer volume of simultaneous work was staggering. And because there was so much in progress, the effort required just to track what was happening had become a major job in itself. People weren't managing work so much as chasing it — trying to keep a mental model of dozens of moving pieces, racing to identify dependencies before they became surprises, frantically adjusting schedules when the inevitable collisions happened.
The CTO's instinct was right: the capacity should have been better for the headcount. But the problem wasn't that people were underperforming. The problem was that the organization was trying to do too many things at once, and the overhead of managing all that work in progress was consuming the capacity that should have been going to actual delivery.
Finding #3: The CTO on the Front Lines
One of our interviewees described the CTO in a way that immediately resonated: "He's a general on the front lines instead of a general in the back."
The CTO was a technical founder type — deeply capable, deeply involved, and deeply hands-on. He woke up hours before everyone else to get work done. When production outages hit — and they hit in clusters — he was personally involved in resolving them. One interviewee went further: "I think of him as a developer."
This pattern should sound familiar to anyone who's worked with growing technical organizations. The skills that make someone an exceptional technical contributor — the ability to diagnose problems fast, the willingness to jump in and fix things, the satisfaction of solving hard problems personally — become liabilities at scale. When the CTO is fighting fires in production, nobody is doing the CTO's actual job: setting technical direction, building organizational capability, creating the conditions for teams to succeed without heroics.
Every hour the CTO spent personally debugging a production outage was an hour not spent asking why production outages kept happening in clusters, or why the organization felt so slow despite its headcount, or whether the team structure and process were actually serving the business.
The company had outgrown the model where senior leadership could compensate for organizational problems through individual effort. The CTO wasn't failing — he was succeeding at the wrong job.
Finding #4: Testers and Developers in Separate Worlds
The company managed its developers and testers as entirely separate organizations. When a feature was developed, it was handed off to a separate test team. The test team manager told us plainly: the developers probably didn't have enough knowledge to test the features they were building. And the testers, managed independently with their own priorities and schedules, were constantly negotiating with development teams rather than working alongside them.
The result was predictable. Bugs were abundant. Cycle times were inflated by the handoff. Quality problems weren't caught until late in the process. And the test team had become the repository of institutional knowledge about the product — which meant they were simultaneously a bottleneck and the only people who truly understood how everything worked.
This separation also meant that nobody was thinking about the full arc of feature delivery as a single flow. Development had their timeline. Testing had theirs. Regulatory had theirs. Each silo optimized locally, and the overall delivery time ballooned as work queued between groups.
Finding #5: "Scrummerfall" — Waterfall Wearing an Agile Costume
Multiple interviewees used variations of the same word: "scrummerfall," "agilefall." They were under no illusions about what they were doing. The project management style was plan-driven waterfall — detailed upfront planning, phase-gated handoffs, schedule-driven delivery — wearing just enough agile vocabulary to sound modern.
One person stated it directly: "We decide in a total waterfall format."
And yet, when we'd been brought in, the explicit instruction was: "Not looking for advice on agile and scrum." They wanted help with their tools, not their process. The irony was that the tooling problems they were experiencing — the inability to use backlogs, sprint planning, Kanban boards, and most of Azure DevOps's built-in features — were direct consequences of the process. The tools were designed for a way of working that the company had specifically declined to adopt.
The Azure DevOps environment reflected this tension. They'd built heavily customized process templates on top of the CMMI model — the most heavyweight template available. They had over a hundred team projects. Permissions were, in one manager's words, "a tangle of spaghetti." Nearly 100% of their work management happened through custom work item queries rather than the standard boards and backlogs, because the standard features simply didn't match how they operated.
The tools weren't misconfigured. They were faithfully reflecting an organizational model that was working against the flow of value.
Finding #6: We Weren't Allowed to Talk to the People Doing the Work
We requested interviews with software developers and testers. Those requests were not granted. Every conversation we had was with management — program managers, engineering managers, the test team lead, the quality and regulatory lead, the CTO.
I want to be careful here, because there could be legitimate reasons for this — schedule constraints, regulatory concerns, organizational politics. I don't know why the request was denied, and I don't want to speculate about motives.
But I do want to name what it meant for the assessment: everything we learned was filtered through management's perspective. The people actually writing the code, actually running the tests, actually experiencing the day-to-day friction of the process — we never heard from them. The problems that developers and testers would have raised might have been entirely different from the problems that management raised.
In my experience, the gap between what management thinks is wrong and what the teams think is wrong is often where the most important insights live. When you can only hear one side of that conversation, you're working with an incomplete picture. And the fact that we couldn't access the other side — regardless of the reason — was itself a data point about the organization.
Finding #7: Regulatory Complexity Was Real...But It Wasn't the Bottleneck
The company operated in a heavily regulated environment. Their products required FDA clearance. They had a dedicated quality and regulatory lead whose job was ensuring everything aligned with their quality management system and was audit-proof for both the FDA and the EU.
This regulatory burden was real, and I don't want to minimize it. Medical device software has legitimate constraints that ordinary consumer software doesn't face. Traceability matrices, design history files, formal change control — these aren't bureaucratic theater. They're requirements backed by the force of law.
But during interviews, regulatory complexity was frequently cited as the reason things were slow or couldn't change. It had become a conversation-stopper — a reason why agile approaches "wouldn't work here," why process improvement was constrained, why things had to be done the way they'd always been done.
In my experience, regulatory requirements constrain what you document and how you prove compliance. They don't constrain how you organize teams, how you prioritize work, how you manage WIP, or how you think about delivering value in small increments. Some of the most disciplined Scrum implementations I've seen are in regulated industries — precisely because Scrum's emphasis on transparency, defined processes, and demonstrable completeness aligns naturally with regulatory requirements.
The regulatory environment was a real factor. It wasn't the reason delivery felt so hard. The organizational complexity — too many projects, too much WIP, separated teams, management overhead consuming delivery capacity — would have produced the same symptoms in an unregulated industry.
The Diagnosis
I gave them two honest options.
Option one: if they were satisfied working the way they were working, then their Azure DevOps configuration was fine. They were using it about as well as they could given their process. Some custom tooling could fill the reporting gaps. They could invest in better CI/CD pipelines. They could simplify their process templates. These were incremental improvements that would make the status quo somewhat smoother.
Option two: if they wanted to address the CTO's instinct that their capacity should be better for their headcount — if they wanted to actually change the trajectory — they needed to change how work was organized, not how tools were configured.
The real problems were structural. Too much work in progress, driven by an inability to say "not yet" to anything. Teams that were too small, too fluid, and organized around projects rather than products. A separation between development and testing that injected handoffs, delays, and quality problems into every feature. A management layer that spent more time tracking work than enabling delivery. A CTO whose exceptional technical skills kept pulling him into the engine room when the bridge needed a captain. And a conviction that regulatory constraints prevented process improvement, when in reality the constraints were organizational.
The Azure DevOps question was a symptom, not a cause. The tools were reflecting the organizational model faithfully. Change the model and the tools would suddenly have a lot more to offer.
What I Recommended
Stop asking whether you're using Azure DevOps correctly. Start asking whether the organization is structured to deliver value efficiently. The tools will follow the real-life, human process. Fix that process first.
Minimize work in progress — aggressively. Not everything can be a number-one priority. Within all the work that's happening, there is a priority order. Figure it out, focus on the highest-priority items, and finish them before starting more work. Less WIP means fewer dependencies, less tracking overhead, and more actual delivery.
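The connection between WIP and delivery speed isn't just intuition — it's arithmetic. Little's Law says average cycle time equals average WIP divided by average throughput, so at a fixed finishing rate, every item you add in flight makes every item slower. A minimal sketch, with purely illustrative numbers that are not from this engagement:

```python
# Little's Law: average cycle time = average WIP / average throughput.
# At a fixed throughput, cutting WIP cuts cycle time proportionally.
# All numbers below are hypothetical, for illustration only.

def avg_cycle_time_weeks(wip_items: int, throughput_per_week: float) -> float:
    """Average time an item spends in progress, per Little's Law."""
    return wip_items / throughput_per_week

# An organization finishing 5 items per week, at two different WIP levels:
before = avg_cycle_time_weeks(wip_items=100, throughput_per_week=5)  # 20.0 weeks
after = avg_cycle_time_weeks(wip_items=20, throughput_per_week=5)    # 4.0 weeks

print(f"Cycle time with 100 items in flight: {before:.0f} weeks")
print(f"Cycle time with 20 items in flight:  {after:.0f} weeks")
```

Nothing about the team got faster in the second scenario — they simply stopped starting work before finishing it, and every individual item began arriving five times sooner.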
Adopt Scrum — starting with training and a pilot. Train 30 to 100 people on the framework, then run a pilot project of six months or less. Use what you learn to expand. Scrum's emphasis on delivering done, working software in short increments would directly address the WIP problem, the cycle time problem, and the "definition of done" ambiguity that was causing so much friction.
Put testers and developers on the same teams. Stop managing them as separate organizations. When development and testing plan together and work together, quality problems get caught earlier, cycle times shrink, and the handoff that was inflating every delivery timeline disappears.
Redesign teams around products, not projects. Aim for stable teams of about seven people, each capable of delivering their work without external dependencies. Bring work to teams rather than assembling teams around work. Unless there's an urgent reason, nobody should be on more than one team.
Create a written Definition of Done. Have each team describe everything it takes to call a feature truly done — not partially done, not "done with coding," but done as in ready to release with full confidence. This exercise is often terrifying because it makes transparent just how much work goes into a finished feature. That transparency is the point.
Track production outages and trace root causes. The clusters of outages that were pulling the CTO into firefighting mode weren't random. There were patterns. Find them. Ask whether anything in the Definition of Done would have prevented each incident. Ask whether automation — tests, deployment, rollback — could have caught the problem earlier.
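Finding those patterns doesn't require sophisticated tooling to start — a simple tally of incidents by root-cause category will surface the clusters. A minimal sketch; the incident records and category names here are hypothetical, not from the client's environment:

```python
# A minimal sketch of outage pattern-finding: group incident records by
# root-cause category and flag the ones that recur. The data below is
# entirely hypothetical, for illustration only.
from collections import Counter

incidents = [
    {"id": 101, "root_cause": "config drift"},
    {"id": 102, "root_cause": "config drift"},
    {"id": 103, "root_cause": "untested hotfix"},
    {"id": 104, "root_cause": "config drift"},
    {"id": 105, "root_cause": "capacity"},
]

# Count incidents per root cause.
by_cause = Counter(i["root_cause"] for i in incidents)

# Recurring causes are candidates for a Definition of Done check or an
# automated guard (test, deployment gate, rollback).
recurring = {cause: n for cause, n in by_cause.items() if n > 1}
print(recurring)  # {'config drift': 3}
```

The point isn't the script — it's the discipline of recording a root cause for every incident and reviewing the tally, so recurring causes become explicit targets for the Definition of Done or for automation rather than recurring surprises.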
Separate the regulatory compliance workflow from the development workflow. The two streams were tangled in ways that made both harder to manage. Development work and compliance documentation could run in parallel with cleaner interfaces between them, reducing the perception that regulation was the bottleneck for everything.
Recreate the Azure DevOps team projects. If the organization moves toward Scrum, the current structure — a hundred projects, heavily customized CMMI templates, spaghetti permissions — would need to be rebuilt from scratch with simpler templates and a structure that matches the new way of working.
The Lesson
This company was asking a question about their tools because a question about their tools felt answerable. "Are we using Azure DevOps correctly?" has a technical answer. You can look at the configuration, compare it to best practices, and produce a checklist of improvements. It's concrete and actionable and doesn't require anyone to change how they work.
The real question — "why does everything feel so hard for the number of people we have?" — doesn't have a technical answer. It requires looking at how teams are structured, how work is prioritized (or not), how many things are in progress simultaneously, and whether the organization's management model is helping or hindering the flow of value to customers. Those are uncomfortable questions because they implicate decisions that real people made, organizational structures that real people depend on, and habits that the entire company has internalized.
This is the pattern I see in growing companies again and again. The informal processes that worked when the company was small — the founder who could fix anything, the fluid team assignments, the "everyone pitches in" mentality — stop working as the headcount grows. More people should mean more capacity, but instead it means more coordination overhead, more dependencies, more work in progress, and more time spent tracking work rather than doing it. The CTO's instinct was exactly right: the capacity should have been better. It wasn't a people problem. It was a structural problem.
The most telling moment of the engagement was the instruction we received at the kickoff: "Not looking for advice on agile and scrum." They wanted tool optimization within their existing process. What they needed was the process change that would make tool optimization unnecessary. The features of Azure DevOps they couldn't use — backlogs, sprint planning, Kanban boards, flow analytics — were all designed for a way of working they had explicitly declined to consider.
Sometimes the most important finding isn't something you discover. It's the question the client didn't want you to ask.
If your teams are growing but your delivery speed isn't — if tracking work has become a full-time job — if your best technical leaders are fighting fires instead of setting direction — if you're asking whether your tools are configured correctly when the real question is whether your organization is structured to deliver — you might not have a tooling problem. You might have an organizational problem wearing a tooling costume.
That's the kind of problem I help companies see clearly.