mfrmr

GitHub R-CMD-check pkgdown test-coverage License: MIT

Native R package for many-facet Rasch model (MFRM) estimation, diagnostics, and reporting workflows.

What this package is for

mfrmr is designed around four package-native routes, each described in its own section below.

If you want the shortest possible recommendation, begin with the "Quick first pass" route under "Start here".

Features

Documentation map

This README is only a brief map; the package also ships guide-style help pages for the main workflows.

Companion vignettes:

Installation

# GitHub (development version)
if (!requireNamespace("remotes", quietly = TRUE)) install.packages("remotes")
remotes::install_github("Ryuya-dot-com/R_package_mfrmr", build_vignettes = TRUE)

# CRAN (after release)
# install.packages("mfrmr")

If you install from GitHub without build_vignettes = TRUE, the guide-style help pages included in the package cover the same material.
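For example, the four guide-style overview topics referenced throughout this README can be opened directly from the console:

```r
# Guide-style overview help pages (topic names as used in this README)
help("mfrmr_reporting_and_apa",   package = "mfrmr")
help("mfrmr_visual_diagnostics",  package = "mfrmr")
help("mfrmr_linking_and_dff",     package = "mfrmr")
help("mfrmr_compatibility_layer", package = "mfrmr")
```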

Installed vignettes:

browseVignettes("mfrmr")

Core workflow

fit_mfrm() --> diagnose_mfrm() --> reporting / advanced analysis
                    |
                    +--> analyze_residual_pca()
                    +--> estimate_bias()
                    +--> analyze_dff()
                    +--> compare_mfrm()
                    +--> run_qc_pipeline()
                    +--> anchor_to_baseline() / detect_anchor_drift()
  1. Fit model: fit_mfrm()
  2. Diagnostics: diagnose_mfrm()
  3. Optional residual PCA: analyze_residual_pca()
  4. Optional interaction bias: estimate_bias()
  5. Differential-functioning analysis: analyze_dff(), dif_report()
  6. Model comparison: compare_mfrm()
  7. Reporting: apa_table(), build_apa_outputs(), build_visual_summaries()
  8. Quality control: run_qc_pipeline()
  9. Anchoring & linking: anchor_to_baseline(), detect_anchor_drift(), build_equating_chain()
  10. Compatibility audit when needed: facets_parity_report()
  11. Reproducible inspection: summary() and plot(..., draw = FALSE)

Choose a route

Use the route that matches the question you are trying to answer.

Each line pairs a question with its recommended route:
Can I fit the model and get a first-pass diagnosis quickly? fit_mfrm() -> diagnose_mfrm() -> plot_qc_dashboard()
Which reporting elements are draft-complete, and with what caveats? diagnose_mfrm() -> precision_audit_report() -> reporting_checklist()
Which tables and prose should I adapt into a manuscript draft? reporting_checklist() -> build_apa_outputs() -> apa_table()
Is the design connected well enough for a common scale? subset_connectivity_report() -> plot(..., type = "design_matrix")
Do I need to place a new administration onto a baseline scale? make_anchor_table() -> anchor_to_baseline()
Are common elements stable across separately fitted forms or waves? fit each wave -> detect_anchor_drift() -> build_equating_chain()
Are some facet levels functioning differently across groups? subset_connectivity_report() -> analyze_dff() -> dif_report()
Do I need old fixed-width or wrapper-style outputs? run_mfrm_facets() or build_fixed_reports() only at the compatibility boundary

Start here

If you are new to the package, these are the three shortest useful routes.

Shared setup used by the snippets below:

library(mfrmr)
toy <- load_mfrmr_data("example_core")

1. Quick first pass

fit <- fit_mfrm(toy, "Person", c("Rater", "Criterion"), "Score",
                method = "MML", model = "RSM", quad_points = 7)
diag <- diagnose_mfrm(fit, residual_pca = "none")
summary(diag)
plot_qc_dashboard(fit, diagnostics = diag, preset = "publication")

2. Design and linking check

diag <- diagnose_mfrm(fit, residual_pca = "none")
sc <- subset_connectivity_report(fit, diagnostics = diag)
summary(sc)
plot(sc, type = "design_matrix", preset = "publication")
plot_wright_unified(fit, preset = "publication", show_thresholds = TRUE)

3. Manuscript and reporting check

# Add `bias_results = ...` if you want the bias/reporting layer included.
chk <- reporting_checklist(fit, diagnostics = diag)
apa <- build_apa_outputs(fit, diag)

chk$checklist[, c("Section", "Item", "DraftReady", "NextAction")]
cat(apa$report_text)

Estimation choices

The package deliberately treats MML and JML as complementary: a capped JML run gives a fast first pass, while MML is the route for final estimates.

Typical pattern:

toy <- load_mfrmr_data("example_core")

fit_fast <- fit_mfrm(toy, "Person", c("Rater", "Criterion"), "Score",
                     method = "JML", model = "RSM", maxit = 50)
fit_final <- fit_mfrm(toy, "Person", c("Rater", "Criterion"), "Score",
                      method = "MML", model = "RSM", quad_points = 15)

diag_final <- diagnose_mfrm(fit_final, residual_pca = "none")
precision_audit_report(fit_final, diagnostics = diag_final)

Help-page navigation

Core analysis help pages include practical sections such as:

Recommended entry points:

Utility pages such as ?export_mfrm, ?as.data.frame.mfrm_fit, and ?plot_bubble also include lightweight export / plotting examples.
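As a minimal sketch of that export path (assuming the as.data.frame() method returns a tidy estimates table; see ?as.data.frame.mfrm_fit for the actual column layout):

```r
# Hypothetical sketch: flatten a fitted model and write it to CSV.
# `fit` is an mfrm_fit object from fit_mfrm(); the exact columns
# depend on the method documented at ?as.data.frame.mfrm_fit.
est <- as.data.frame(fit)
head(est)
write.csv(est, file.path(tempdir(), "mfrm_estimates.csv"), row.names = FALSE)
```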

Performance tips

Documentation datasets

Quick start

library(mfrmr)

data("mfrmr_example_core", package = "mfrmr")
df <- mfrmr_example_core

# Fit
fit <- fit_mfrm(
  data = df,
  person = "Person",
  facets = c("Rater", "Criterion"),
  score = "Score",
  method = "MML",
  model = "RSM",
  quad_points = 7
)
summary(fit)

# Fast diagnostics first
diag <- diagnose_mfrm(fit, residual_pca = "none")
summary(diag)

# APA outputs
apa <- build_apa_outputs(fit, diag)
cat(apa$report_text)

# QC pipeline reuses the same diagnostics object
qc <- run_qc_pipeline(fit, diagnostics = diag)
summary(qc)

Main objects you will reuse

Most package workflows reuse a small set of objects rather than recomputing everything from scratch.

Typical reuse pattern:

toy <- load_mfrmr_data("example_core")

fit <- fit_mfrm(toy, "Person", c("Rater", "Criterion"), "Score",
                method = "MML", model = "RSM", quad_points = 7)
diag <- diagnose_mfrm(fit, residual_pca = "none")
chk <- reporting_checklist(fit, diagnostics = diag)
apa <- build_apa_outputs(fit, diag)
sc <- subset_connectivity_report(fit, diagnostics = diag)

Reporting and APA route

If your endpoint is a manuscript or internal report, use the package-native reporting contract rather than composing text by hand.

diag <- diagnose_mfrm(fit, residual_pca = "none")

# Add `bias_results = ...` to either helper when bias screening should
# appear in the checklist or draft text.
chk <- reporting_checklist(fit, diagnostics = diag)
chk$checklist[, c("Section", "Item", "DraftReady", "Priority", "NextAction")]

apa <- build_apa_outputs(
  fit,
  diag,
  context = list(
    assessment = "Writing assessment",
    setting = "Local scoring study",
    scale_desc = "0-4 rubric scale",
    rater_facet = "Rater"
  )
)

cat(apa$report_text)
apa$section_map[, c("SectionId", "Available", "Heading")]

tbl_fit <- apa_table(fit, which = "summary")
tbl_reliability <- apa_table(fit, which = "reliability", diagnostics = diag)

For a question-based map of the reporting API, see help("mfrmr_reporting_and_apa", package = "mfrmr").

Visualization recipes

If you want a question-based map of the plotting API, see help("mfrmr_visual_diagnostics", package = "mfrmr").

# Wright map with shared targeting view
plot(fit, type = "wright", preset = "publication", show_ci = TRUE)

# Pathway map with dominant-category strips
plot(fit, type = "pathway", preset = "publication")

# Linking design matrix
sc <- subset_connectivity_report(fit, diagnostics = diag)
plot(sc, type = "design_matrix", preset = "publication")

# Unexpected responses
plot_unexpected(fit, diagnostics = diag, preset = "publication")

# Displacement screening
plot_displacement(fit, diagnostics = diag, preset = "publication")

# Facet variability overview
plot_facets_chisq(fit, diagnostics = diag, preset = "publication")

# Residual PCA scree and loadings
pca <- analyze_residual_pca(diag, mode = "both")
plot_residual_pca(pca, mode = "overall", plot_type = "scree", preset = "publication")

# Bias screening profile
bias <- estimate_bias(fit, diag, facet_a = "Rater", facet_b = "Criterion")
plot_bias_interaction(bias, plot = "facet_profile", preset = "publication")

# One-page QC screen
plot_qc_dashboard(fit, diagnostics = diag, preset = "publication")

Linking, anchors, and DFF route

Use this route when your design spans forms, waves, or subgroup comparisons.

data("mfrmr_example_bias", package = "mfrmr")
df_bias <- mfrmr_example_bias
fit_bias <- fit_mfrm(df_bias, "Person", c("Rater", "Criterion"), "Score",
                     method = "MML", model = "RSM", quad_points = 7)
diag_bias <- diagnose_mfrm(fit_bias, residual_pca = "none")

# Connectivity and design coverage
sc <- subset_connectivity_report(fit_bias, diagnostics = diag_bias)
summary(sc)
plot(sc, type = "design_matrix", preset = "publication")

# Anchor export from a baseline fit
anchors <- make_anchor_table(fit_bias, facets = "Criterion")
head(anchors)

# Differential facet functioning
dff <- analyze_dff(
  fit_bias,
  diag_bias,
  facet = "Criterion",
  group = "Group",
  data = df_bias,
  method = "residual"
)
dff$summary
plot_dif_heatmap(dff)

For linking-specific guidance, see help("mfrmr_linking_and_dff", package = "mfrmr").

DFF / DIF analysis

data("mfrmr_example_bias", package = "mfrmr")
df_bias <- mfrmr_example_bias
fit_bias <- fit_mfrm(df_bias, "Person", c("Rater", "Criterion"), "Score",
                     method = "MML", model = "RSM", quad_points = 7)
diag_bias <- diagnose_mfrm(fit_bias, residual_pca = "none")

dff <- analyze_dff(fit_bias, diag_bias, facet = "Criterion",
                   group = "Group", data = df_bias, method = "residual")
dff$dif_table
dff$summary

# Cell-level interaction table
dit <- dif_interaction_table(fit_bias, diag_bias, facet = "Criterion",
                             group = "Group", data = df_bias)

# Visual, narrative, and bias reports
plot_dif_heatmap(dff)
dr <- dif_report(dff)
cat(dr$narrative)

# Refit-based contrasts can support ETS labels only when subgroup linking is adequate
dff_refit <- analyze_dff(fit_bias, diag_bias, facet = "Criterion",
                         group = "Group", data = df_bias, method = "refit")
dff_refit$summary

bias <- estimate_bias(fit_bias, diag_bias, facet_a = "Rater", facet_b = "Criterion")
summary(bias)

# App-style batch bias estimation across all modeled facet pairs
bias_all <- estimate_all_bias(fit_bias, diag_bias)
bias_all$summary

Interpretation rules:

Model comparison

fit_rsm <- fit_mfrm(df, "Person", c("Rater", "Criterion"), "Score",
                    method = "MML", model = "RSM")
fit_pcm <- fit_mfrm(df, "Person", c("Rater", "Criterion"), "Score",
                    method = "MML", model = "PCM", step_facet = "Criterion")
cmp <- compare_mfrm(RSM = fit_rsm, PCM = fit_pcm)
cmp$table

# Request nested tests only when models are truly nested and fit on the same basis
cmp_nested <- compare_mfrm(RSM = fit_rsm, PCM = fit_pcm, nested = TRUE)
cmp_nested$comparison_basis

# RSM design-weighted precision curves
info <- compute_information(fit_rsm)
plot_information(info)

Design simulation

spec <- build_mfrm_sim_spec(
  n_person = 50,
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  assignment = "rotating",
  model = "RSM"
)

sim_eval <- evaluate_mfrm_design(
  n_person = c(30, 50, 80),
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  reps = 2,
  maxit = 15,
  sim_spec = spec,
  seed = 123
)

s_sim <- summary(sim_eval)
s_sim$design_summary
s_sim$ademp

rec <- recommend_mfrm_design(sim_eval)
rec$recommended

plot(sim_eval, facet = "Rater", metric = "separation", x_var = "n_person")
plot(sim_eval, facet = "Criterion", metric = "severityrmse", x_var = "n_person")

Notes:

Population forecast

spec_pop <- build_mfrm_sim_spec(
  n_person = 50,
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  assignment = "rotating",
  model = "RSM"
)

pred_pop <- predict_mfrm_population(
  sim_spec = spec_pop,
  n_person = 60,
  reps = 2,
  maxit = 15,
  seed = 123
)

s_pred <- summary(pred_pop)
s_pred$forecast[, c("Facet", "MeanSeparation", "McseSeparation")]

Notes:

Future-unit posterior scoring

toy_pred <- load_mfrmr_data("example_core")
toy_fit <- fit_mfrm(
  toy_pred,
  "Person", c("Rater", "Criterion"), "Score",
  method = "MML",
  quad_points = 7
)

raters <- unique(toy_pred$Rater)[1:2]
criteria <- unique(toy_pred$Criterion)[1:2]

new_units <- data.frame(
  Person = c("NEW01", "NEW01", "NEW02", "NEW02"),
  Rater = c(raters[1], raters[2], raters[1], raters[2]),
  Criterion = c(criteria[1], criteria[2], criteria[1], criteria[2]),
  Score = c(2, 3, 2, 4)
)

pred_units <- predict_mfrm_units(toy_fit, new_units, n_draws = 0)
summary(pred_units)$estimates[, c("Person", "Estimate", "Lower", "Upper")]

pv_units <- sample_mfrm_plausible_values(
  toy_fit,
  new_units,
  n_draws = 3,
  seed = 123
)
summary(pv_units)$draw_summary[, c("Person", "Draws", "MeanValue")]

Notes:

Prediction-aware bundle export

bundle_pred <- export_mfrm_bundle(
  fit = toy_fit,
  population_prediction = pred_pop,
  unit_prediction = pred_units,
  plausible_values = pv_units,
  output_dir = tempdir(),
  prefix = "mfrmr_prediction_bundle",
  include = c("manifest", "predictions", "html"),
  overwrite = TRUE
)

bundle_pred$summary

Notes:

DIF / Bias screening simulation

spec_sig <- build_mfrm_sim_spec(
  n_person = 50,
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  assignment = "rotating",
  group_levels = c("A", "B")
)

sig_eval <- evaluate_mfrm_signal_detection(
  n_person = c(30, 50, 80),
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  reps = 2,
  dif_effect = 0.8,
  bias_effect = -0.8,
  maxit = 15,
  sim_spec = spec_sig,
  seed = 123
)

s_sig <- summary(sig_eval)
s_sig$detection_summary
s_sig$ademp

plot(sig_eval, signal = "dif", metric = "power", x_var = "n_person")
plot(sig_eval, signal = "bias", metric = "false_positive", x_var = "n_person")

Notes:

Bundle export

bundle <- export_mfrm_bundle(
  fit_bias,
  diagnostics = diag_bias,
  bias_results = bias_all,
  output_dir = tempdir(),
  prefix = "mfrmr_bundle",
  include = c("core_tables", "checklist", "manifest", "visual_summaries", "script", "html"),
  overwrite = TRUE
)

bundle$written_files

bundle_pred <- export_mfrm_bundle(
  toy_fit,
  output_dir = tempdir(),
  prefix = "mfrmr_prediction_bundle",
  include = c("manifest", "predictions", "html"),
  population_prediction = pred_pop,
  unit_prediction = pred_units,
  plausible_values = pv_units,
  overwrite = TRUE
)

bundle_pred$written_files

replay <- build_mfrm_replay_script(
  fit_bias,
  diagnostics = diag_bias,
  bias_results = bias_all,
  data_file = "your_data.csv"
)

replay$summary

Anchoring and linking

d1 <- load_mfrmr_data("study1")
d2 <- load_mfrmr_data("study2")
fit1 <- fit_mfrm(d1, "Person", c("Rater", "Criterion"), "Score", method = "JML", maxit = 25)
fit2 <- fit_mfrm(d2, "Person", c("Rater", "Criterion"), "Score", method = "JML", maxit = 25)

# Anchored calibration
res <- anchor_to_baseline(d2, fit1, "Person", c("Rater", "Criterion"), "Score")
summary(res)
res$drift

# Drift detection
drift <- detect_anchor_drift(list(Wave1 = fit1, Wave2 = fit2))
summary(drift)
plot_anchor_drift(drift, type = "drift")

# Screened linking chain
chain <- build_equating_chain(list(Form1 = fit1, Form2 = fit2))
summary(chain)
plot_anchor_drift(chain, type = "chain")

Notes:

QC pipeline

qc <- run_qc_pipeline(fit, threshold_profile = "standard")
qc$overall      # "Pass", "Warn", or "Fail"
qc$verdicts     # per-check verdicts
qc$recommendations

plot_qc_pipeline(qc, type = "traffic_light")
plot_qc_pipeline(qc, type = "detail")

# Threshold profiles: "strict", "standard", "lenient"
qc_strict <- run_qc_pipeline(fit, threshold_profile = "strict")

Compatibility layer

Compatibility helpers are still available, but they are no longer the primary route for new scripts.

For the full map, see help("mfrmr_compatibility_layer", package = "mfrmr").

Legacy-compatible one-shot wrapper

run <- run_mfrm_facets(
  data = df,
  person = "Person",
  facets = c("Rater", "Criterion"),
  score = "Score",
  method = "JML",
  model = "RSM"
)
summary(run)
plot(run, type = "fit", draw = FALSE)

Public API map

Model and diagnostics:

Differential functioning and model comparison:

Anchoring and linking:

QC pipeline:

Table/report outputs:

Output terminology:

Plots and dashboards:

Export and data utilities:

Legacy FACETS-style numbered names are internal and not exported.
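If you need the complete list rather than the category map above, the exported API can be enumerated with base R alone (no mfrmr-specific helpers assumed):

```r
# Enumerate every exported object, then filter by keyword
exports <- sort(getNamespaceExports("mfrmr"))
length(exports)
grep("^plot", exports, value = TRUE)   # all exported plotting functions
```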

FACETS reference mapping

See:

Packaged synthetic datasets

Installed at system.file("extdata", package = "mfrmr"):

The same datasets are also packaged in data/ and can be loaded with:

data("ej2021_study1", package = "mfrmr")
# or
df <- load_mfrmr_data("study1")

Current packaged dataset sizes:

Citation

citation("mfrmr")

Acknowledgements

mfrmr has benefited from discussion and methodological input from Dr. Atsushi Mizumoto and Dr. Taichi Yamashita.
