peopleanalyst

Principia research

A source-graded survey of organizational measurement — predominantly the people side. Codifies constructs, instruments, items, measures, and meta-analytic effect-size tables into a queryable registry shared across the People Analytics Platform.

Why this matters

The portable claim — what this research lets you understand outside the surface domain.

Load-bearing organizational measurement is unevenly distributed across organizations and disciplines. The same construct gets measured five different ways across five different studies; effect-size tables live scattered through chapters of textbooks; high-quality instruments get reinvented in low-quality form because the original is paywalled or buried. Principia exists to give builders, researchers, and operators a single graded, sourced, queryable place to look — and to give the People Analytics Platform a canonical measurement vocabulary it can subscribe to. The methodology generalizes: source grading, statistical-metadata extraction into a shared schema, novelty verification before publication, queryable indexing — the same shape works for clinical psychology, educational measurement, marketing research, or any field where rigorous measurement is unevenly distributed.
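The registry's core shape — graded rows keyed by construct, queryable by source quality — can be sketched in a few lines. Everything below (the field names, the A/B/C grade scale, the toy rows and numbers) is a hypothetical illustration of the idea, not Principia's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RegistryRow:
    construct: str                # canonical construct name
    instrument: str               # the instrument that measures it
    source_grade: str             # graded source quality: "A" > "B" > "C"
    effect_size: Optional[float]  # meta-analytic estimate, if one exists
    citation: str                 # where the row was extracted from

# Toy registry: the same construct measured two ways, at mixed quality.
registry = [
    RegistryRow("engagement", "instrument-1", "A", 0.25, "source-1"),
    RegistryRow("engagement", "instrument-2", "C", None, "source-2"),
    RegistryRow("autonomy",   "instrument-3", "B", 0.18, "source-3"),
]

def query(rows, construct, min_grade="B"):
    """Rows for a construct at or above a minimum source grade."""
    rank = {"A": 0, "B": 1, "C": 2}
    return [r for r in rows
            if r.construct == construct
            and rank[r.source_grade] <= rank[min_grade]]
```

The point of the sketch is the filter, not the fields: once rows carry both a construct key and a grade, "show me only well-sourced measures of engagement" becomes a one-line query instead of a literature hunt.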

Drill-down — full research surface

Seven-slot baseline. Forthcoming slots shown openly.

Reports

The actual research findings — phased results, research-question briefs, applied analyses.

Audience tiers

The same headline research surfaced four ways: general audience, peer review, engineering, product.

  • General-audience explainer

    General audience · forthcoming

    Public framing of the survey-as-instrument argument — what builders, practitioners, and researchers can do with a queryable, source-graded measurement registry that they could not do before.

  • Peer-review framing

    Peer-review framing · forthcoming

    Positioning against the existing measurement-handbook tradition (Schmitt & Highhouse 2013; Borman et al. 2003) and the meta-analytic synthesis tradition (Hunter & Schmidt 2004; Cooper 2017). What Principia adds, what it does not claim to add.

  • Engineering critique

    Engineering critique · forthcoming

    Engineering reviewer's lens — schema discipline, the ETL pipeline, the verification-log infrastructure, the hub-and-spoke @measurement/core story. How the registry is built and where it can fail.

  • Product implications

    Product implications · forthcoming

    What the registry tells us to build next — on the People Analytics Platform, on Vela, in the toolbox/hub. Which constructs unlock which platform features once their rows exist.

Preregistrations & protocols

Studies and intervention protocols filed before execution.

  • Preregistration(s)

    forthcoming

    Synthesis-analytic preregistrations land here as construct-family surveys surface meta-analytic gaps. None filed yet — the first will land alongside the engagement survey if such a gap appears.