Consulting
PeopleAnalyst is a consulting practice founded in 2012 around a specific bet: that most organizations cannot do people analytics — not for lack of data, but for lack of methodology, decision-discipline, and a function set up to deliver judgment instead of dashboards. The work below is what bridging that gap looks like in practice.
The shape of an engagement varies. Sometimes it is a discovery sprint that produces a roadmap. Sometimes it is a multi-month build that stands up a function from zero. Sometimes it is a quiet quarterly partnership that lets an in-house team punch above its weight on the decisions that actually matter. The menu below is organized by what the work does, not which HR domain it sits in — because the methodology underneath transfers across domains, and most live engagements span several.
If something here looks like the shape of a problem you have, send a note.
The headings give the menu; expand any section for methodology and case-grounding.
What the work does
1. Decision support under uncertainty
The headline question executives bring to a planning cycle is rarely what is the answer — it is what is the range of plausible answers, and how confident can I be in committing to a parameter before more is known. This is the consulting work for compensation programs, workforce-planning cycles, capacity decisions, and any other forced-choice point where the population, the inputs, or the future are themselves uncertain.
Methodology · case
The methodology spans Monte Carlo simulation, formal Value-of-Information analysis (EVPI / EVSI), Kepner-Tregoe structured decision analysis, and the Rapid Collaborative Insight (RCI) framework — collaboration combined with disciplined analysis to converge faster than either alone. The deliverables are usually models executives can interrogate live (not slide decks), regression-based surrogate calculators that distill the simulation into instant what-ifs, and a written record of the decisions made, the alternatives considered, and the assumptions that would invalidate the recommendation.
Recent example: 2025 Annual Incentive Plan and RSU compensation cycle for The New York Times — Monte Carlo modeling at population scale, regression surrogate calculators, and a cross-program reallocation recommendation grounded in distribution-of-outcome analysis rather than a point estimate.
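As a hedged illustration of the modeling pattern described above — population-scale Monte Carlo plus a regression surrogate for instant what-ifs — here is a minimal sketch. Every parameter (headcount, salary distribution, target bonus percentage, performance variance) is invented for illustration; none are client figures.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical population -- illustrative only, not client data.
n_employees = 5_000
salaries = rng.lognormal(mean=11.3, sigma=0.35, size=n_employees)
target_pct = 0.10  # target bonus as a fraction of salary

def simulate_total_payout(company_multiplier, n_trials=2_000):
    """Monte Carlo: individual performance multipliers vary per trial;
    returns the distribution of total program cost."""
    totals = np.empty(n_trials)
    for t in range(n_trials):
        individual = rng.normal(1.0, 0.15, size=n_employees).clip(0.0, 1.5)
        totals[t] = np.sum(salaries * target_pct * company_multiplier * individual)
    return totals

# Distribution-of-outcome view for one plan parameter setting.
payouts = simulate_total_payout(company_multiplier=1.0)
p5, p50, p95 = np.percentile(payouts, [5, 50, 95])

# Regression surrogate: run the simulation at a few multiplier settings,
# then fit a simple model so what-if questions get instant answers
# without re-running the full simulation.
grid = np.linspace(0.8, 1.2, 5)
means = [simulate_total_payout(m, n_trials=500).mean() for m in grid]
slope, intercept = np.polyfit(grid, means, 1)

def surrogate(multiplier):
    """Instant estimate of expected total payout at a given multiplier."""
    return slope * multiplier + intercept
```

The design point is the pairing: the full simulation gives the distribution of outcomes executives can interrogate, and the surrogate distills it into a calculator cheap enough to run live in the room.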
2. People analytics function build
Standing up a people analytics function from zero — or rebuilding one that has stalled — means compressing a five-year arc into the first eighteen months by getting the foundations right: charter, governance, vendor selection, headcount profile, the load-bearing measurement set, and the working relationship with HRBPs and finance that makes everything downstream possible. Most functions fail not because the analyst is wrong but because the function was set up to deliver reports rather than judgment, to a stakeholder set that was never aligned on what the function was for.
Methodology · provenance
This is the work that originally built the People Analytics functions at Merck (65,000 employees, $4B HR investment), PetSmart (20,000, $40M+ investment), and Google (through 7K → 21K growth). It draws on the Three A's framework (Attraction, Activation, Attrition as the lifecycle measurement spine), the Lean People Analytics methodology for resource-constrained environments, and the Full-Stack People Analytics Systems view that treats the function as a software stack rather than a reporting team.
Engagement shapes range from a six-week function-design sprint to a twelve-month embedded build.
3. Workforce platform selection and implementation
Most enterprise people-analytics platforms — Visier, One Model, ZeroedIn, Workday Adaptive, Tableau-native builds, Power BI custom dashboards — solve some problems and create others. The selection decision is usually framed wrong: as which vendor rather than which architecture, against which decision-support requirements, with which integration-debt cost. The implementation is usually scoped wrong: as configure the tool rather than build the canonical metric definitions, the data contracts, and the governance the tool will run on top of.
Methodology · case
This is the consulting work for vendor evaluation, RFP design, technical due diligence on platform claims, and the architectural and data-contract layer that determines whether the implementation produces a reporting layer that compounds value or one that becomes the next thing to rip out. Mike is the first person to have worked on three leading workforce-analytics data platforms — Visier, One Model, and ZeroedIn — across multiple stints; the comparative perspective is the asset.
Example: Workday SaaS HRIS implementation at Otsuka Pharmaceutical — three legacy applications consolidated to one in under ten months, $6M monthly payroll live with zero errors, geographic footprint scaled 10× with constant support headcount.
4. Custom analytics architecture
When the off-the-shelf vendors do not fit the question, the work is bespoke. This is the consulting for organizations that have already accepted that their analytics needs cannot be served by a configured product — usually because the underlying decision is too specific (e.g., research-grade adaptive measurement; a custom compensation-decision OS; a competency-based talent system; an organizational forecasting layer the existing HRIS does not surface).
Technical foundation
The technical foundation is the AI-native People Analytics Platform — twenty production applications shipped solo since 2022 on a hub-and-spoke architecture with shared schemas and APIs. Component capabilities ready to be brought into a custom build include adaptive item selection (the Reincarnation engine), persistent batch metric calculation at 200+ definitions (Calculus), cross-domain corpus ingestion to typed records (Meta-Factory), formal VOI calculation, deterministic PII anonymization, and metadata-grounded SQL/Python generation (Conductor). Bespoke applications can be assembled from these in weeks rather than years.
5. Compensation strategy and modeling
Compensation work that goes beyond the market-pricing-and-benchmarks layer most consulting houses stop at — into the structural questions that determine whether the program actually does what the company wants it to do. AIP design, RSU allocation strategy, equity-band rationalization, M&A pay reconciliation across geographies, executive compensation modeling, sales-incentive design with population-scale simulation.
Cases
Recent examples: NYT 2025 AIP/RSU cycle (above); Juniper Networks M&A pay structure reconciliation across 45+ countries.
6. Workforce planning and forecasting
Predictive modeling for attrition, retention, hiring funnel throughput, capability supply, and headcount cost — at the level of fidelity that lets executives make actual capacity and investment decisions rather than budget-setting rituals. This is the consulting for organizations whose headcount or skill mix is changing fast enough that the trailing-data report is useless and the forward-looking model is load-bearing.
Methodology · provenance
Mike designed Google's first attrition prediction model during the 7K → 21K growth era; the methodology has continued to deepen since. Forecasting, scenario planning, and survival analysis run on the spine of the Net Activated Value (NAV) framework — the unifying leadership metric that ties quarterly human-capital measurement to dollar outcomes.
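A minimal sketch of the survival-analysis machinery behind attrition forecasting: a hand-rolled Kaplan-Meier estimator over synthetic tenure data, with censoring for employees still active at observation time. All data and parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic tenure records (months). Illustrative only.
n = 2_000
true_duration = rng.exponential(scale=36, size=n)   # hypothetical mean tenure
observed_at = rng.uniform(6, 60, size=n)            # observation window per person
duration = np.minimum(true_duration, observed_at)
event = (true_duration <= observed_at).astype(int)  # 1 = left, 0 = still employed (censored)

def kaplan_meier(duration, event, horizon=48):
    """Kaplan-Meier survival curve: P(still employed at month t),
    handling right-censored records correctly."""
    surv = np.ones(horizon + 1)
    for t in range(1, horizon + 1):
        at_risk = np.sum(duration >= t)
        exits = np.sum((duration >= t) & (duration < t + 1) & (event == 1))
        hazard = exits / at_risk if at_risk > 0 else 0.0
        surv[t] = surv[t - 1] * (1.0 - hazard)
    return surv

surv = kaplan_meier(duration, event)

# Forward-looking capacity question: of a 500-person new-hire cohort,
# how many are expected to remain at month 12?
expected_headcount_12m = 500 * surv[12]
```

The point of the forward-looking model over the trailing-data report is exactly this shape: the curve answers "how many will still be here," not "how many left last quarter."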
7. Survey strategy and listening architecture
Engagement surveys, pulse programs, lifecycle measurement, exit and onboarding instruments, ad-hoc diagnostic surveys, item-bank governance for organizations running multiple surveys against overlapping populations. The consulting includes survey design, item validation, sampling strategy, the analysis pipeline, and the reporting cadence that makes survey output actually inform decisions instead of producing a yearly slide deck nobody reads.
Methodological frontier · case
The methodological frontier is adaptive item selection with cross-study item-response accumulation — the RID/SID architecture that lets a single question participate in many surveys without confounding, so the evidence pool compounds across studies rather than siloing per program. This is research-grade machinery rarely seen outside academic measurement; the consulting offering brings it to organizations that need it.
Example: designed Google's first professional global employee survey — the precursor to what later became Googlegeist.
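To make the adaptive-selection idea concrete, here is a minimal sketch under a standard two-parameter-logistic (2PL) item-response model: at each step, the unseen item with maximal Fisher information at the current trait estimate is administered. Item identifiers and parameters are invented, and this is a textbook illustration of the technique, not the RID/SID implementation itself.

```python
import numpy as np

# Hypothetical item bank under a 2PL IRT model:
# each item has a discrimination (a) and difficulty (b).
items = {
    "RID-101": (1.2, -0.5),
    "RID-102": (0.8,  0.0),
    "RID-103": (1.5,  0.7),
    "RID-104": (1.0,  1.2),
}

def prob_endorse(theta, a, b):
    """2PL: probability of endorsement at latent trait level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta -- how much the
    response reduces uncertainty about the respondent's trait."""
    p = prob_endorse(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def next_item(theta, administered):
    """Adaptive selection: pick the unseen item that is most
    informative at the current trait estimate."""
    candidates = {rid: item_information(theta, a, b)
                  for rid, (a, b) in items.items() if rid not in administered}
    return max(candidates, key=candidates.get)

# A respondent currently estimated near theta = 0.6 gets the item
# whose parameters yield maximal information at that level.
chosen = next_item(theta=0.6, administered={"RID-101"})
```

Because information is tracked per item rather than per survey, responses to the same item accumulate across studies — which is what lets the evidence pool compound instead of siloing per program.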
8. Diagnostic research
The high-stakes question that does not fit any standing program: why is attrition concentrated where it is, and what would change it? Which jobs in our store footprint disproportionately predict revenue, and what would happen if we redirected investment toward them? Did this training program work, and what would the counterfactual have shown? This is the consulting for organizations that need a specific answer to a specific question, treated as a research problem rather than a reporting request.
Engagement shape · cases
Diagnostic engagements typically run six to twelve weeks. Deliverables are written: the question framed, the analysis run, the alternatives considered, the answer with confidence intervals, the recommended actions, and the second-order questions the answer surfaces.
Example: at PetSmart, identified the store-associate jobs disproportionately critical to revenue; redirected performance management, compensation, and training investments accordingly. At AstraZeneca, proved statistically that pulling salespeople out of the field for sales-data training produced better subsequent performance — leading to program expansion.
9. AI-native augmentation
For organizations whose people-analytics function exists, runs, and is reasonably mature — but is now confronting the question of how AI changes the work. This is the consulting for layering retrieval-augmented synthesis over an existing case archive, building decision-support agents on top of existing dashboards, ingesting unstructured corpora (interview transcripts, exit notes, performance reviews, engagement comments) into typed records the rest of the function can use, and the architectural decisions that determine whether AI augmentation compounds value or produces a new layer of unverified content.
Technical foundation
The technical foundation is the same as the custom-build practice (above): production-grade RAG pipelines, multi-model orchestration with budget and quality trade-offs, structured-output prompt engineering, and the discipline of treating LLM outputs as evidence to be verified, not answers to be trusted. The consulting includes the architectural design, the build, the verification protocols, and the governance for an AI-augmented function that can defend its outputs to a skeptical executive.
10. Methodology coaching and advisory
Periodic engagement for in-house analytics leaders who want a sounding board, methodology coaching, or a quarterly outside perspective on the function's trajectory. This is the consulting for organizations that do not need a build or a discovery sprint — they need a partner who has seen the failure modes the function is about to walk into.
What gets coached · engagement shape
The methodologies most often coached: Rapid Collaborative Insight (RCI), Net Activated Value, the Three A's framework, Lean People Analytics, the Quantitative Model of Human Resources, and the Principal-Issues Thesis — the methodological spine that names load-bearing measurement sets across domains and explains why most people-analytics functions are stuck. Engagements usually run as a recurring quarterly cadence or as on-call advisory across a defined period.
For the long version of the argument underneath this work, see Why People Analytics Is Stuck — and How to Unstick It — the RCI essay that names the framework, the four-S synthesis, and why most organizations cannot do people analytics even though it has been demonstrably valuable for two decades.
Engagement shapes
| Shape | Duration | Fit |
|---|---|---|
| Discovery sprint | 2–4 weeks | Frame an opportunity, scope the work, deliver a written roadmap. Often the first phase of a longer engagement. |
| Build engagement | 3–12 months | Stand up a function, build a custom application, deliver a planning-cycle outcome. Embedded or remote. |
| Embedded analyst | 3–9 months, fractional | A defined number of hours per week against a named program — often a planning cycle or a function in transition. |
| Decision-support partnership | Recurring quarterly | Anchored to the planning cadence — comp cycles, workforce planning, executive committee. Repeat work compounds context. |
| Advisory | Periodic | On-call methodology coaching for in-house analytics leaders. Quarterly check-ins or as-needed consultations. |
| Custom platform build | 6–18 months | Bespoke application work on the People Analytics Platform substrate. Spec, build, hand off — or build, run, and iterate. |
Pricing varies by engagement shape and scope; discovery sprints are fixed-fee, longer engagements are usually a blend of fixed-fee phases and hourly. Pro-bono and reduced-fee engagements are occasionally available for early-stage organizations or research-grade work — ask.
Selected past clients
A partial list, drawn from twelve years of practice: The New York Times · Mars · Pfizer · Juniper Networks · Zoom · Reddit · Instabase · Articulate · Nike · Pure Storage · Cityblock Health · 10X Genomics · Atlassian · Udemy.
Earlier in-house: Merck · PetSmart · Google · Otsuka Pharmaceutical · AstraZeneca.
References available on request.
What's distinctive
A few things to know about how this practice operates, beyond what the menu above captures:
- Solo-founder cadence with platform leverage. PeopleAnalyst is a one-person practice operating on a twenty-application proprietary platform. The implication: small engagements get the principal, not a junior; large engagements get tooling that would normally take a vendor team six months to assemble.
- Methodology-first, not deliverable-first. Most consulting engagements begin with a deliverable spec; this one begins with the methodology that will produce the deliverable. The methodology is what the client ends up keeping after the engagement closes.
- Written, not slide-shaped. Most outputs are written documents — protocols, decision records, technical specifications, hand-off documentation — because slides decay and writing accumulates.
- Open about what the work is not. Some categories of HR consulting are not on this menu. There is no executive-search practice, no DEI training delivery, no organizational-change-management facilitation, no comp-survey resale, no tool-reseller relationship. The practice is analytics and decision-support work, and it stays that way.
How to engage
The fastest path is a short note: a paragraph on the situation, a sentence on the timeline, and what success looks like if it works. mike@peopleanalyst.com.
A first conversation is usually thirty to forty-five minutes, unbilled, to determine whether the shape of the engagement makes sense before committing time on either side.