peopleanalyst

Research · all properties

The research underneath the products.

Each property has its own research program. They share a baseline: an overview, a methodology, the actual reports, four audience-tier framings of the headline work, a bibliography, preregistrations, and a pipeline. Where a slot is empty, it is shown openly as forthcoming — visible gaps, not papered-over ones.

The six programs share a posture. Each treats a domain that looks like a product (figurative art, baby naming, fantasy decisions, HR analytics, multi-agent coordination, organizational measurement) and turns it into an instrument the discipline did not have. Vela separates desire from preference. Namesake studies cultural diffusion through one of its denser corpora. Fourth & Two formalizes decisions under uncertainty. The People Analytics Platform names the load-bearing measurements most organizations cannot do. DevPlane catches the coordination cost that agent-side productivity metrics miss. Principia consolidates the measurement canon into a graded, queryable registry. The methods travel because the questions underneath are general: measurement under conditions where the cleanest evidence is unevenly distributed.

Vela

41 of 46 slots populated

A contemplative platform for fine-art figurative work. Research probes desire dimensions, compositional features, temporal dynamics, and individual differences — with a deliberately rigorous bibliography and preregistered protocols.

Why this matters

On its surface, Vela's research is figurative-art response. Underneath, it is an instrument: how does desire — move-toward — separate from preference (like)? How do compositional features mediate response? How stable are individual differences? The methods generalize. They speak to consumer-behavior research, aesthetic measurement methodology, taste calibration in any high-volume domain, and the design of adaptive measurement instruments well outside HR. The corpus is fine art; the questions are general.
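As a minimal sketch of what "separating desire from preference" looks like as a measurement check: collect paired per-item ratings on both dimensions, then test whether the two track each other. Everything below — the ratings, the scale, the divergence threshold — is hypothetical illustration, not Vela's actual protocol.

```python
from statistics import correlation, mean

# Hypothetical paired ratings per artwork: "desire" = move-toward intent,
# "preference" = liking. Scale 1-7. All values invented for illustration.
ratings = {
    "work_a": {"desire": [6, 5, 6], "preference": [6, 6, 5]},
    "work_b": {"desire": [2, 3, 2], "preference": [6, 5, 6]},  # liked, not wanted
    "work_c": {"desire": [5, 6, 5], "preference": [2, 3, 2]},  # wanted, not liked
}

desire_means = [mean(r["desire"]) for r in ratings.values()]
pref_means = [mean(r["preference"]) for r in ratings.values()]

# If desire and preference were one construct, item-level means would
# correlate near 1.0; divergent items are evidence the dimensions separate.
r = correlation(desire_means, pref_means)
print(f"item-level desire/preference correlation: {r:.2f}")

for name, resp in ratings.items():
    gap = mean(resp["desire"]) - mean(resp["preference"])
    if abs(gap) >= 2:  # arbitrary illustrative threshold
        print(f"{name}: desire-preference gap {gap:+.1f}")
```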

Namesake

21 of 21 slots populated

Intentional baby naming. Research is an empirical investigation of cultural diffusion: how names break out, what predicts spread, and where the predictability ceiling lives.

Why this matters

Naming is the obvious-seeming domain. The research underneath it is cultural diffusion — how cultural objects spread, what predicts breakouts, and where the predictability ceiling lives. Findings extrapolate beyond names: how marketing campaigns succeed or fail, how misinformation propagates, how innovations diffuse through organizations, why fashion cycles look the way they do, what separates lasting public discourse from brief virality. Names are the testbed because the corpus is dense and the temporal signal is clean. The implications travel.

Fourth & Two

0 of 11 slots populated

Fantasy football intelligence. Research arc forthcoming — likely decisions-under-uncertainty in fantasy and off-season game design as revenue innovation.

Read first

General-audience explainer forthcoming.

→ full research surface

Why this matters

Forthcoming. Anticipated frame: decisions under uncertainty in fantasy extrapolate to executive compensation modeling, medical-decision support, capital allocation, and public-policy tradeoffs — any domain where Monte Carlo plus structured information design beats single-point estimates. The off-season game design thread is itself a study in how to extend a niche industry's revenue cycle.
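A toy illustration of the Monte Carlo claim, entirely invented rather than drawn from the forthcoming research: a start/sit decision where the single-point projection and the simulated distribution disagree, because the decision depends on clearing a target, not on the mean.

```python
import random

random.seed(7)

# Hypothetical start/sit decision: the lineup needs 20+ points from this
# slot to win the week. Both score distributions are invented.
def steady_player():     # higher projection, low variance
    return random.gauss(19, 3)

def boom_bust_player():  # lower projection, bimodal outcomes
    return random.gauss(28, 4) if random.random() < 0.5 else random.gauss(8, 3)

NEEDED, TRIALS = 20, 100_000

p_steady = sum(steady_player() >= NEEDED for _ in range(TRIALS)) / TRIALS
p_boom = sum(boom_bust_player() >= NEEDED for _ in range(TRIALS)) / TRIALS

# Single-point projections (means: 19 vs 18) say start the steady player;
# simulating each distribution against the actual target says otherwise.
print(f"P(steady    >= {NEEDED}) = {p_steady:.2f}")  # ~0.37
print(f"P(boom/bust >= {NEEDED}) = {p_boom:.2f}")    # ~0.49
```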

People Analytics Platform

0 of 12 slots populated

Hub-and-spoke ecosystem for AI-native HR analytics. Research arc forthcoming — likely the principal-issues thesis, hub-and-spoke as moat, and RID/SID adaptive measurement.

Read first

General-audience explainer forthcoming.

→ full research surface

Why this matters

The principal-issues thesis is the spine. It says every domain has a load-bearing measurement set, and most domains are stuck because they have not named it. People analytics is the demonstration; the same logic applies to any field where rigorous measurement is unevenly distributed across organizations. The platform is built to make load-bearing-set delivery executable at solo cadence — which is the operating-system claim underneath every other portfolio item.

DevPlane

6 of 10 slots populated

A cockpit for multi-tool software development. Research is an empirical program on coordination cost in heterogeneous AI tool ecosystems, using DevPlane's continuous production telemetry as the apparatus, not the subject. Lead study: a preregistered field test of risk compensation in human-AI coordination.

Why this matters

The productivity claims being made for AI coding tools are largely grounded in agent-side measurements — lines produced, tasks completed, time-to-PR. If the Ironies of Automation (Bainbridge 1983) are operative — operator vigilance falling as agent reliability rises — those measurements systematically overstate net effect. The DevPlane research program tests that prediction with continuous production telemetry on a real operator running real agents on a real, multi-month codebase. The methodology generalizes: any team running heterogeneous tools through a coordination layer (multi-tool ops dashboards, hospital handoff systems, distributed scientific instruments) shares the same shape of problem.
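A toy model of that prediction, entirely my construction and not the preregistered design: couple operator vigilance to agent reliability and watch net quality move against the agent-side metric.

```python
# Toy model of the Bainbridge prediction: if operator vigilance falls as
# agent reliability rises, defects shipped can increase even while the
# agent-side metric (error rate) improves. Functional forms are invented.

def shipped_defects(tasks: int, agent_error: float, vigilance: float) -> float:
    # Defects surviving review = defects produced * share the operator misses.
    return tasks * agent_error * (1 - vigilance)

def vigilance(agent_error: float) -> float:
    # Invented coupling: the operator reviews harder when the agent errs often.
    return min(0.95, 9 * agent_error)

TASKS = 100

for err in (0.20, 0.10, 0.05):  # agent-side metric improving left to right
    v = vigilance(err)
    print(f"agent error {err:.0%} | vigilance {v:.0%} | "
          f"defects shipped {shipped_defects(TASKS, err, v):.2f}")
# Under these invented parameters: error 20% -> 1.00 shipped,
# error 10% -> 1.00, error 5% -> 2.75. The agent got 4x more reliable;
# net quality got worse. Whether reality behaves this way is exactly
# what the telemetry is for.
```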

Principia

6 of 14 slots populated

A source-graded survey of organizational measurement — predominantly the people side. Codifies constructs, instruments, items, measures, and meta-analytic effect-size tables into a queryable registry shared across the People Analytics Platform.

Why this matters

Load-bearing organizational measurement is unevenly distributed across organizations and disciplines. The same construct gets measured five different ways across five different studies; effect-size tables live scattered through chapters of textbooks; high-quality instruments get reinvented in low-quality form because the original is paywalled or buried. Principia exists to give builders, researchers, and operators a single graded, sourced, queryable place to look — and to give the People Analytics Platform a canonical measurement vocabulary it can subscribe to. The methodology generalizes: source grading, statistical-metadata extraction into a shared schema, novelty verification before publication, queryable indexing — the same shape works for clinical psychology, educational measurement, marketing research, or any field where rigorous measurement is unevenly distributed.
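A minimal sketch of what such a shared schema might look like. The field names, the grade scale, and every value below are assumptions for illustration, not Principia's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class EffectSize:
    outcome: str    # construct the effect relates to
    metric: str     # e.g. "r", "d", "rho"
    value: float
    k_studies: int  # studies pooled in the meta-analysis
    n_total: int    # total sample size

@dataclass
class RegistryEntry:
    construct: str     # what is being measured
    instrument: str    # named scale or measure
    items: list[str]   # item text, where licensing allows
    source: str        # citation for the original
    source_grade: str  # hypothetical scale: "A" = meta-analytic ... "C" = single study
    effects: list[EffectSize] = field(default_factory=list)

# All values below are placeholders, not real findings.
entry = RegistryEntry(
    construct="job satisfaction",
    instrument="(hypothetical) 3-item global satisfaction scale",
    items=["All in all, I am satisfied with my job."],
    source="(hypothetical) Author (Year)",
    source_grade="A",
    effects=[EffectSize("turnover intention", "rho", -0.50,
                        k_studies=40, n_total=18_000)],
)

# A queryable registry then answers questions like: which A-graded
# instruments measure this construct, and with what pooled effects?
print(entry.construct, entry.source_grade, entry.effects[0].metric)
```

The design point is that statistical metadata (metric, k, n) lives beside the instrument and its source grade, so a subscriber like the People Analytics Platform can filter by evidence quality rather than by citation alone.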