March 1, 2026
5 OKR Best Practices for High-Performing Teams
It’s the start of Q2, and your leadership team is gathered in a conference room. Last quarter’s OKR review is on the screen, and the numbers tell a familiar story: 14 objectives across the organization, most scored between 0.3 and 0.5, and nobody in the room can explain exactly why. The head of engineering mutters something about "moving the goalposts." The VP of Sales points out that her team hit their revenue number anyway, so what was the point? The CEO, who championed the OKR rollout six months ago after reading Measure What Matters, stares at the spreadsheet and wonders where it all went wrong.
If this scene feels uncomfortably familiar, you’re not alone. Research from the OKR coaching firm Quantive (formerly Gtmhub) suggests that roughly 70% of organizations fail to see meaningful improvement from OKRs in their first year. Not because the framework is flawed — Google, Intel, LinkedIn, and Spotify have proven otherwise — but because implementation matters far more than the framework itself. OKRs are deceptively simple to understand and genuinely difficult to practice well.
I’ve spent the last eight years helping product and engineering teams adopt goal-setting frameworks, and I’ve watched the same five mistakes repeat across startups, scale-ups, and enterprise organizations alike. The good news? Each mistake has a corresponding practice that, once internalized, transforms OKRs from a bureaucratic checkbox into the strategic operating system they were designed to be.
Practice 1: Write Fewer OKRs Than You Think You Need
The single most common OKR failure mode is overload. A team of eight engineers sits down for their quarterly planning session, and by the end of the day, they’ve produced seven objectives with four key results each. Twenty-eight measurable commitments for twelve weeks of work. The math alone should give you pause — that’s more than two key results to move per week, on top of the daily operational work that doesn’t stop just because you adopted a new framework.
The root cause is psychological. Writing an OKR feels like committing to importance. Not writing one feels like admitting something doesn’t matter. And in most organizations, nobody wants to be the person who says, "Actually, that initiative the CEO mentioned in the all-hands? It’s not in our OKRs this quarter." So teams hedge. They write OKRs for everything, which is functionally identical to writing OKRs for nothing.
The Discipline of Three
Andy Grove, who pioneered OKRs at Intel in the 1970s, was famous for forcing his direct reports to choose. Not five priorities. Not four. Three. His logic was ruthlessly simple: if a leader cannot articulate the three things that matter most this quarter, they don’t understand their business well enough.
Modern practitioners have softened this slightly — most OKR coaches recommend 3 to 5 objectives per team, with 2 to 4 key results per objective. But the principle remains. Fewer OKRs create clarity. They force the difficult conversations about what you’re not going to do, which are far more valuable than the easy conversations about what you’d like to do.
Here’s a practical test: after your team drafts their OKRs, ask each team member to recite them from memory. If they can’t, you have too many. OKRs that can’t be remembered can’t guide daily decisions, and OKRs that don’t guide daily decisions are just administrative overhead.
What About All the Other Work?
A common objection: "But we have more than three important things happening this quarter." Of course you do. OKRs aren’t meant to capture all the work — they capture the work that requires focused change. Business-as-usual operations, maintenance work, and ongoing commitments don’t need OKRs. They need good project management. Conflating the two is how teams end up with OKRs like "Maintain 99.9% uptime" — which is an SLA, not an objective — sitting alongside "Launch the new onboarding flow," which is an actual strategic bet. The former belongs in your operational dashboard. The latter belongs in your OKRs.
Practice 2: Make Key Results Pass the Stranger Test
If objectives are the "what," key results are the "how we’ll know." And this is where most teams quietly sabotage themselves, often without realizing it. The problem isn’t that teams write unmeasurable key results — most people have internalized that lesson by now. The problem is that they write measurable but meaningless ones.
Consider this key result: "Increase user engagement by 20%." It’s measurable. It’s time-bound (implicitly, by the quarter). And it’s almost useless. What does "engagement" mean? Page views? Session duration? Feature adoption? The ambiguity isn’t just an academic concern — it creates real organizational dysfunction. The product team optimizes for daily active users. The design team optimizes for session duration. The data team reports both numbers, and leadership sees improvement in one metric and decline in another, and nobody can agree on whether the key result was achieved.
The Stranger Test
I use a simple heuristic I call the Stranger Test: could a reasonably intelligent stranger, with no context about your business, look at your key result at the end of the quarter and definitively say whether it was achieved? No interpretation needed. No "well, it depends on how you measure it." Just a clear yes or no, or a clear number on a clear scale.
Here’s what that looks like in practice:
Weak key result: "Improve customer onboarding experience."
Better: "Reduce median time-to-first-value from 4.2 days to 2.0 days, as measured by the interval between account creation and first completed workflow."
Weak key result: "Launch new reporting feature."
Better: "Ship custom report builder to 100% of Enterprise tier accounts, achieving 30% adoption within 6 weeks of launch."
Notice the difference. The stronger versions specify the metric, the current baseline, the target, and the measurement method. They leave no room for debate at the end of the quarter. They also reveal something important: the real work of writing good key results is the work of understanding your current state. You can’t write "reduce churn from X to Y" if you don’t know what X is. That discovery process — digging into the data, agreeing on definitions, establishing baselines — is often more valuable than the OKR itself.
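The anatomy of a strong key result — named metric, baseline, target, and measurement method — can be captured in a small data structure. The sketch below is purely illustrative (the class and field names are my own invention, not part of any OKR tool): a key result "passes the Stranger Test" only when every field a stranger would need is filled in.

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    """A key result with every field a stranger needs to judge the outcome."""
    metric: str       # what is measured, e.g. "median time-to-first-value (days)"
    baseline: float   # where the metric stands today
    target: float     # where it should stand at quarter's end
    method: str       # how it is measured, so there is no end-of-quarter debate

    def passes_stranger_test(self) -> bool:
        # A stranger needs a named metric, a stated measurement method,
        # and a real baseline-to-target delta to verify.
        return bool(self.metric) and bool(self.method) and self.baseline != self.target

# "Improve customer onboarding experience" fails: no metric, baseline, or method.
vague = KeyResult(metric="", baseline=0.0, target=0.0, method="")

# The sharpened version from above passes.
sharp = KeyResult(
    metric="median time-to-first-value (days)",
    baseline=4.2,
    target=2.0,
    method="interval between account creation and first completed workflow",
)
```

The point of the exercise isn't the code — it's that filling in the baseline field forces the discovery work described above.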
Leading vs. Lagging Indicators
One more subtlety worth noting: the best key results blend leading and lagging indicators. A lagging indicator like "achieve $2M in Q3 revenue" tells you whether you won, but by the time you’re measuring it, it’s too late to change course. A leading indicator like "generate 500 qualified pipeline opportunities by mid-quarter" gives you a signal early enough to adjust. Mature OKR practitioners pair both, using leading indicators for weekly check-ins and lagging indicators for quarterly scoring.
Practice 3: Check In Weekly, Not Quarterly
Here’s a thought experiment. Imagine you’re training for a marathon, and your coach says: "Great, I’ll see you on race day. Let me know how it goes." No mid-training check-ins. No pace adjustments. No feedback on your form or nutrition. You’d fire that coach immediately. And yet this is exactly how most organizations run their OKR programs — goals are set in January and reviewed in March, with twelve weeks of radio silence in between.
The quarterly review is necessary, but it is not sufficient. By the time you discover in week 12 that a key result is off track, you’ve lost the ability to do anything about it. The insight arrives too late to be actionable, which means the review becomes a post-mortem rather than a course correction. And post-mortems, however well-intentioned, tend to produce blame rather than learning.
The 15-Minute Weekly Check-In
The fix is almost embarrassingly simple: a 15-minute weekly check-in per team, focused exclusively on OKR progress. Not a status meeting. Not a project update. A focused conversation structured around three questions:
1. What’s the current confidence level for each key result? Use a simple traffic light — green (on track), yellow (at risk), red (off track). Force the team to commit to a color. "It’s complicated" is not a color.
2. What changed since last week? Not what was done — what changed. Did a metric move? Did a risk materialize? Did an assumption prove wrong? This question surfaces information that traditional status updates miss.
3. What’s the one thing that would most accelerate progress this week? This focuses energy. It prevents the meeting from becoming a laundry list of tasks and redirects attention to the highest-leverage action.
Fifteen minutes. Three questions. Every week. Teams that adopt this cadence consistently report that quarter-end reviews become confirmations of what everyone already knows, rather than uncomfortable surprises. The drama disappears because the information flows continuously.
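One way to make the cadence concrete is to log each check-in as a record of the three answers. This is a minimal sketch with invented names — most teams do this in a shared doc or an OKR tool rather than code — but it shows why forcing a color matters: an at-risk signal in week 3 is actionable, while the same signal in week 12 is a post-mortem.

```python
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    GREEN = "on track"
    YELLOW = "at risk"
    RED = "off track"
    # Deliberately no member for "it's complicated".

@dataclass
class WeeklyCheckIn:
    week: int
    confidence: dict       # key result name -> Confidence
    what_changed: str      # what changed, not what was done
    one_accelerator: str   # the single highest-leverage action this week

# A hypothetical week-3 check-in for the time-to-first-value key result.
checkins = [
    WeeklyCheckIn(
        week=3,
        confidence={"time-to-first-value": Confidence.YELLOW},
        what_changed="Activation dipped after the pricing page test went live",
        one_accelerator="Pair with the data team to confirm the dip is real",
    ),
]

# Surface anything at risk while there is still time to course-correct.
at_risk = [c for c in checkins
           if any(v is not Confidence.GREEN for v in c.confidence.values())]
```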
CFR: The Operating System for Check-Ins
John Doerr, who brought OKRs from Intel to Google, introduced a companion framework called CFR — Conversations, Feedback, Recognition. The premise is that OKRs tell you what to focus on, but CFR tells you how to talk about it. Conversations are the weekly check-ins. Feedback is the bidirectional input that helps people improve (not annual reviews, but real-time, specific, actionable input). Recognition is the acknowledgment of contributions, ideally peer-to-peer rather than top-down.
CFR solves a problem that many OKR implementations create inadvertently: the scoreboard without a coach problem. Teams track their numbers diligently but never have the conversations that turn data into insight and insight into action. If your OKR program has dashboards but not regular human conversations about what those dashboards mean, you’re measuring without managing. Tools like ILPapps integrate CFR directly alongside OKR tracking for exactly this reason — the goal and the conversation about the goal belong together, not in separate systems.
Practice 4: Cascade Alignment, Not Control
Of all the OKR practices that organizations get wrong, cascading might be the most consequential. The traditional approach looks like this: the CEO sets company-level OKRs. Each VP takes one of those objectives and creates sub-objectives for their department. Each director does the same. Each team lead does the same. By the time the cascade reaches the individual contributor, they have an OKR that is four levels removed from the original intent, shaped entirely by the interpretation of each manager in the chain.
This is not alignment. This is a game of telephone with strategic intent.
The Problem with Pure Top-Down
Pure top-down cascading creates three predictable failures. First, it strips autonomy from the teams closest to the work. The engineering team that talks to customers every day and knows exactly which technical debt is slowing them down is told to work on something else because it "cascades" from a company objective they had no voice in shaping. Second, it creates artificial alignment — every team’s OKRs technically connect to the company strategy, but the connections are often superficial and the actual work doesn’t move the needle on the higher-level goal. Third, it’s slow. When departments must wait for the CEO to finalize company OKRs, teams must wait for departments, and individuals must wait for teams, the first three weeks of every quarter are consumed by planning rather than execution.
The 60/40 Rule
High-performing OKR organizations use a different model. Roughly 60% of OKRs originate bottom-up, from the teams that understand the day-to-day reality of the work. The remaining 40% are top-down, reflecting strategic priorities that require cross-functional coordination or represent bets that leadership is making based on market conditions the teams may not see.
This isn’t anarchy — it’s structured autonomy. The company sets 2-3 high-level objectives that provide strategic context. Teams then draft their own OKRs, informed by that context but not dictated by it. The alignment conversation happens when teams present their OKRs to each other and to leadership, identifying dependencies, gaps, and conflicts. The goal is not that every team’s OKR traces to a company OKR in a neat hierarchy. The goal is that every team can articulate how their work contributes to the company’s strategic direction, even if the connection is indirect.
This is where visual strategy tools earn their keep. When a product team can see how their Q3 objectives connect to the company’s annual priorities — and where they diverge — the alignment conversation becomes concrete rather than abstract. ILPapps’ Strategy Board, for example, was designed specifically for this kind of cross-team visibility, letting teams map their OKRs against company-level strategy without forcing a rigid parent-child hierarchy.
Cross-Functional OKRs
One underused technique for alignment without control: shared OKRs across teams. Instead of the product team having their onboarding OKR and the engineering team having their infrastructure OKR, both teams co-own a single objective: "Deliver a world-class first-week experience for new users." Each team contributes different key results, but the shared objective forces collaboration and prevents the silo optimization that plagues most organizations. If two teams share an objective, they’ll find a way to work together. If they have separate objectives that happen to be related, they’ll optimize independently and hope for the best.
Practice 5: Separate OKRs from Compensation
This is the practice that generates the most resistance from HR leaders and executives, and it’s also the one with the most evidence behind it. Tying OKR scores directly to bonuses, promotions, or performance ratings fundamentally undermines the purpose of the framework.
The logic seems intuitive at first: if you want people to take OKRs seriously, attach financial consequences. But the second-order effects are devastating. When my bonus depends on my OKR score, I am economically incentivized to set conservative targets that I’m confident I can hit. This is called sandbagging, and it is the silent killer of ambitious goal-setting programs.
The Sandbagging Death Spiral
Here’s how it typically plays out. In Q1, a team sets an aggressive target: grow monthly active users from 10,000 to 25,000. They reach 18,000 — a genuinely impressive 80% increase — but score only 0.72 on their key result. Bonuses are reduced. The team feels punished for being ambitious. In Q2, they set a "safer" target: grow from 18,000 to 22,000. They hit 23,500 and score 1.0. Bonuses are paid in full. Everyone is happy — except the company just left significant growth on the table because the team learned that ambition is punished and conservatism is rewarded.
Google understood this from the beginning. Their OKR philosophy explicitly states that a score of 0.6 to 0.7 is the ideal outcome — it means the target was ambitious enough to be a genuine stretch. Scoring 1.0 consistently means you’re sandbagging. But this philosophy is only possible when scores are decoupled from compensation. You can’t ask people to aim for 70% achievement and then pay them based on their score. The incentives are contradictory.
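The arithmetic behind the death spiral is easy to reproduce. The sketch below uses the common attainment-over-target convention (achieved divided by target, capped at 1.0), which matches the numbers in the story; actual scoring conventions vary by organization, and some score progress from baseline instead.

```python
def score_key_result(achieved: float, target: float) -> float:
    """Score as the fraction of target achieved, capped at 1.0."""
    return min(achieved / target, 1.0)

# Q1: ambitious target. An 80% real-world increase (10,000 -> 18,000)
# still scores only 0.72 against a 25,000 target.
q1 = score_key_result(achieved=18_000, target=25_000)

# Q2: sandbagged target. Modest growth scores a "perfect" 1.0.
q2 = score_key_result(achieved=23_500, target=22_000)

# Google's landing zone of 0.6-0.7 means the target was a genuine
# stretch; consistent 1.0s mean the targets are too safe.
```

Run both quarters through the same function and the incentive problem is obvious: the team that grew faster got the lower score.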
What to Use Instead
If not OKR scores, then what should inform compensation and promotion decisions? The answer is a more holistic evaluation that considers multiple inputs:
Contribution quality: Not what score did you achieve, but what impact did your work have? A team that scores 0.6 on an ambitious objective that moves the company forward is contributing more than a team that scores 1.0 on a trivial one.
Peer recognition: Who do colleagues point to as having made a difference? Peer feedback, when collected systematically, surfaces contributions that manager-only reviews miss. The engineer who unblocked three other teams by fixing a shared dependency might not have an OKR for it, but her peers know what she did.
Growth and learning: Did the person develop new capabilities? Did they take on challenges outside their comfort zone? This is especially important for retaining high performers, who are often the most ambitious OKR setters and therefore the most penalized under a score-based compensation model.
CFR data over time: The conversations, feedback, and recognition accumulated throughout the quarter provide a rich, nuanced picture of someone’s performance that a single OKR score cannot capture. This is one reason progressive organizations are investing in continuous performance management — the data generated by regular check-ins and peer recognition is simply better than an annual review for making fair compensation decisions.
The Compound Effect: Culture Over Compliance
Each of these five practices — fewer OKRs, measurable key results, weekly check-ins, bidirectional alignment, decoupled compensation — is valuable on its own. But their real power is multiplicative. Teams that write fewer OKRs have time for meaningful weekly check-ins. Teams that check in weekly catch misalignment early. Teams that aren’t punished for missing ambitious targets actually set ambitious targets. Teams that contribute to their own OKRs feel ownership over them. Each practice reinforces the others, creating a flywheel that, over two or three quarters, transforms OKRs from a management exercise into a genuine operating rhythm.
I’ve watched this transformation happen dozens of times, and it always follows the same arc. The first quarter is messy. Teams resist the weekly cadence. Executives are uncomfortable with bottom-up goal-setting. HR pushes back on decoupling compensation. The second quarter is better — the cadence becomes habit, the alignment conversations get more productive, and one or two teams have a genuine breakthrough enabled by focus and ambition. By the third quarter, something shifts. Teams start asking for their OKR planning sessions. Managers reference OKRs in daily decisions. Cross-functional conflicts get resolved by pointing at shared objectives instead of escalating to leadership.
That’s the compound effect. It’s not about hitting a number on a scorecard. It’s about building a culture where every person on every team can answer two questions at any moment: "What matters most right now?" and "How will we know if we’re making progress?" When those answers are clear, shared, and regularly revisited, performance follows. Not because people are being watched or measured, but because they understand how their work connects to something bigger than their task list.
The framework is simple. The practices are specific. The hard part, as always, is the discipline to do them consistently, quarter after quarter, until they stop being practices and start being the way your team works. That’s when OKRs stop being a framework you adopted and become a competitive advantage you built.