Overview
What this research program is and why it exists. The frame the rest of the work hangs on.
A source-graded survey of organizational measurement — predominantly the people side. Codifies constructs, instruments, items, measures, and meta-analytic effect-size tables into a queryable registry shared across the People Analytics Platform.
Why this matters
The portable claim — what this research lets you understand outside the surface domain.
Load-bearing organizational measurement is unevenly distributed across organizations and disciplines. The same construct gets measured five different ways across five different studies; effect-size tables live scattered through chapters of textbooks; high-quality instruments get reinvented in low-quality form because the original is paywalled or buried. Principia exists to give builders, researchers, and operators a single graded, sourced, queryable place to look — and to give the People Analytics Platform a canonical measurement vocabulary it can subscribe to. The methodology generalizes: source grading, statistical-metadata extraction into a shared schema, novelty verification before publication, queryable indexing — the same shape works for clinical psychology, educational measurement, marketing research, or any field where rigorous measurement is unevenly distributed.
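The registry shape described above can be sketched in miniature. Everything here is illustrative: the field names, grade scale, and example rows are assumptions for the sake of the sketch, not the actual Principia schema or data.

```python
from dataclasses import dataclass

# Hypothetical registry row: construct, instrument, source grade, and the
# statistical metadata extracted into the shared schema. All field names
# and values are invented for illustration.
@dataclass(frozen=True)
class Measure:
    construct: str        # canonical construct name, e.g. "work engagement"
    instrument: str       # instrument operationalizing it, e.g. "UWES-9"
    source_grade: str     # graded source quality ("A" strongest)
    effect_size: float    # meta-analytic effect size tied to the measure
    n_studies: int        # studies behind that effect size

registry = [
    Measure("work engagement", "UWES-9", "A", 0.43, 62),
    Measure("work engagement", "Gallup Q12", "B", 0.38, 112),
    Measure("job satisfaction", "JDI", "A", 0.30, 79),
]

def query(construct, min_grade="B"):
    """Return rows for a construct at or above a minimum source grade."""
    order = {"A": 0, "B": 1, "C": 2}
    return [m for m in registry
            if m.construct == construct
            and order[m.source_grade] <= order[min_grade]]
```

A consumer such as the People Analytics Platform would then ask questions like `query("work engagement", min_grade="A")` and get back only the A-graded rows, which is the practical difference between a graded registry and a flat bibliography.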
Read first
The general-audience explainer is the entry point. Everything below is the drill-down.
A source-graded survey of organizational measurement, output as a book + queryable database, rendered from one underlying registry. Outside-reader brief on what it codifies, why now, and how it relates to the rest of the portfolio via @measurement/core.
Read →
Drill-down — full research surface
Seven-slot baseline. Forthcoming slots shown openly.
Overview
What this research program is and why it exists. The frame the rest of the work hangs on.
Methodology
How the work is done — instruments, protocols, the standards each report inherits.
Reports
The actual research findings — phased results, research-question briefs, applied analyses.
17 construct families across 4 tiers (foundational · derivative · composite · outcomes), 8 queued for sequencing, 3 parallel threads (surveys · infrastructure · book draft).
First tier-1 family. Densest accumulated literature; serves as the proof-of-method for the survey approach. UWES, Gallup Q12, MEI, JES, plus the Kahn-tradition qualitative work. Blocked on @measurement/core extraction (vela ASN-1013).
Tier-1, second in the queue. Long history; classic measurement-model debates (global vs facet; affective vs cognitive). JDI, MSQ, JSS, Brief Index of Affective Job Satisfaction.
Tier-1, third. Tripartite measurement model (Allen & Meyer affective / continuance / normative) is canonical and well-tested.
Audience tiers
The same headline research surfaced four ways: peer review, engineering, general audience, product.
Public framing of the survey-as-instrument argument — what builders, practitioners, and researchers can do with a queryable, source-graded measurement registry that they could not do before.
Positioning against the existing measurement-handbook tradition (Schmitt & Highhouse 2013; Borman et al. 2003) and the meta-analytic synthesis tradition (Hunter & Schmidt 2004; Cooper 2017). What Principia adds, what it does not claim to add.
Engineering reviewer's lens — schema discipline, the ETL pipeline, the verification-log infrastructure, the hub-and-spoke @measurement/core story. How the registry is built and where it can fail.
What the registry tells us to build next — on the People Analytics Platform, on Vela, in the toolbox/hub. Which constructs unlock which platform features once their rows exist.
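The "which constructs unlock which features" idea can be made concrete with a toy dependency map. The feature names and dependencies below are invented for illustration, not the platform's actual backlog; this is a sketch of the mechanism, assuming a feature becomes buildable once every construct it needs has rows in the registry.

```python
# Hypothetical dependency map: each platform feature lists the construct
# families whose registry rows it requires. All names are illustrative.
feature_deps = {
    "engagement dashboard": {"work engagement"},
    "attrition model": {"job satisfaction", "organizational commitment"},
}

# Construct families whose registry rows exist so far (invented example).
completed_rows = {"work engagement", "job satisfaction"}

def unlocked(deps, done):
    """Features whose every required construct already has registry rows."""
    return sorted(f for f, needs in deps.items() if needs <= done)

print(unlocked(feature_deps, completed_rows))
```

Here only the engagement dashboard is unlocked: the attrition model still waits on the organizational-commitment family, which is exactly the kind of sequencing signal the product tier reads off the registry.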
Bibliography
Field positioning — formal references and literature maps grounding the research threads.
Three-layer field orientation — measurement theory, construct-specific instrumentation, meta-analytic accumulations. Where Principia sits between layers 2 and 3 as the indexing layer.
28 foundational references — measurement theory, scale construction, meta-analytic methodology, tier-1 construct instrument-development papers, tier-2 derivative-construct anchors. Construct-specific bibliographies live inside each survey document; this file holds the cross-cutting methodology references.
Preregistrations & protocols
Studies and intervention protocols filed before execution.
Synthesis-analytic preregistrations land here as construct-family surveys surface meta-analytic gaps. None filed yet — first will land alongside the engagement survey if a meta-analytic gap appears.
Pipeline
What is running, what is queued, what is forthcoming.