Title: Evaluating Bias and Precision in Method Comparison Studies
Version: 1.1.0
Description: Evaluate bias and precision in method comparison studies. The user provides the measurements for each method, and the package computes the estimates and produces multiple plots to evaluate bias and precision and to compare the methods.
License: GPL (≥ 3)
Encoding: UTF-8
RoxygenNote: 7.3.1
Imports: estimatr, graphics, lme4, Matrix, mfp, rockchalk, stats
Depends: R (≥ 2.10)
LazyData: true
URL: https://github.com/UBERLULU/MethodCompare
BugReports: https://github.com/UBERLULU/MethodCompare/issues
NeedsCompilation: no
Packaged: 2025-02-09 13:50:10 UTC; Thomas
Author: Thomas Blomet [aut, cre], Mingkai Peng [aut], Patrick Taffé [aut], Tyler Williamson [aut]
Maintainer: Thomas Blomet <thomas.blomet@alumni.epfl.ch>
Repository: CRAN
Date/Publication: 2025-02-09 14:00:02 UTC
Evaluating Bias and Precision in Method Comparison Studies
Description
The package "MethodCompare" allows one to assess bias, precision and agreement of a new measurement method with respect to a reference method (also called "reference standard"). It requires repeated measurements by at least one of the two measurement methods.
In this implementation, it is assumed by default that the reference method has repeated measurements and the new method may have as few as only one measurement per individual (The methodology can be adapted if you have more repeated measurements by the new method than by the reference method, see ref. below).
It implements the methodology developped in:
Taffé P. Effective plots to assess bias and precision in method comparison studies. Stat Methods Med Res 2018;27:1650-1660.
Taffé P. Assessing bias, precision, and agreement in method comparison studies. Stat Methods Med Res 2020;29:778-796.
For other relevant references:
Blomet T, Taffé P. MethodCompare: An extended suite of R commands to assess bias, precision, and agreement in method comparison studies. To be published...
Taffé P, Peng M, Stagg V, Williamson T. Biasplot: A package to effective plots to assess bias and precision in method comparison studies. Stata J 2017;17:208-221.
Taffé P, Peng M, Stagg V, Williamson T. MethodCompare: An R package to assess bias and precision in method comparison studies. Stat Methods Med Res 2019;28:2557-2565.
Taffé P, Halfon P, Halfon M. A new statistical methodology to assess bias and precision overcomes the defects of the Bland & Altman method. J Clin Epidemiol 2020;124:1-7.
Taffé P. When can the Bland-Altman limits of agreement method be used and when it should not be used. J Clin Epidemiol 2021; 137:176-181.
Taffé P, Peng M, Stagg V, Williamson T. Extended biasplot command to assess bias, precision, and agreement in method comparison studies. Stata J 2023;23:97-118.
Details
The functions implemented in this package are the following:
- agreement0: Plot the agreement before recalibration
- agreement1: Plot the agreement after recalibration
- bias_plot: Plot the bias and measurements
- compare_plot: Visualize the recalibration of the new method after estimating the bias
- measure_compare: Estimate the amount of bias of the new measurement method relative to the reference method
- mse: Plot the mean squared errors
- pct_agreement0: Plot the percentage agreement before recalibration
- pct_agreement1: Plot the percentage agreement after recalibration
- precision_plot: Plot the precision of the methods
- sqrt_mse: Plot the square root of the mean squared errors
- total_bias_plot: Plot the total bias
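A typical session chains a single call to measure_compare with the plotting functions listed above. The sketch below assembles the calls shown in the per-function examples; it assumes the package is installed and uses the bundled data1 dataset.

```r
## Sketch of a typical workflow: fit once, then pass the fitted
## object to any of the plotting functions listed above.
library(MethodCompare)

data(data1)  # bundled example dataset (columns id, y1, y2)

## Estimate the bias model (100 simulations for the confidence bands)
fit <- measure_compare(data1, new = "y1", ref = "y2", id = "id",
                       nb_simul = 100)

bias_plot(fit)       # bias of the new method
precision_plot(fit)  # standard deviations of both methods
agreement0(fit)      # agreement before recalibration
agreement1(fit)      # agreement after recalibration
```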
Author(s)
Maintainer: Thomas Blomet thomas.blomet@alumni.epfl.ch
Authors:
Mingkai Peng
Patrick Taffé patrick.taffe@unisante.ch
Tyler Williamson
See Also
Useful links:
https://github.com/UBERLULU/MethodCompare
Report bugs at https://github.com/UBERLULU/MethodCompare/issues
Plot the agreement before recalibration
Description
This function draws the "agreement plot" before recalibration, which is used
to visually appraise the degree of agreement between the new and reference
methods, before recalibration of the new method.
It is obtained by graphing a scatter plot of y1-y2
(difference of the methods)
versus the BLUP of the latent trait, x
, along with the bias and 95% limits
of agreement with their 95% simultaneous confidence bands.
The function adds a second scale on the right axis, showing the percentage
of agreement index.
Usage
agreement0(object, rarea = FALSE)
Arguments
object: list returned by the measure_compare function.
rarea: if TRUE, draw the confidence bands as shaded areas.
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
### Plot the agreement without recalibration
agreement0(measure_model)
Plot the agreement after recalibration
Description
This function draws the "agreement plot" after recalibration, which is used to visually appraise the degree of agreement between the new and reference methods, after recalibration of the new method. It is obtained by graphing a scatter plot of y1 - y2 (the difference between the methods) versus the BLUP of the latent trait, x, along with the bias and the 95% limits of agreement with their 95% simultaneous confidence bands. The function adds a second scale on the right axis, showing the percentage agreement index.
Usage
agreement1(object, rarea = FALSE)
Arguments
object: list returned by the measure_compare function.
rarea: if TRUE, draw the confidence bands as shaded areas.
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
### Plot the agreement after recalibration
agreement1(measure_model)
Plot the bias and measurements
Description
This function draws the "bias plot", which is used to visually assess the bias of the new method relative to the reference method. It is obtained by graphing a scatter plot of y1 (new method) and y2 (reference method) versus the BLUP of the latent trait, x, along with the two regression lines. The function adds a second scale on the right axis, showing the relationship between the estimated amount of bias and the BLUP of the latent trait, x.
Usage
bias_plot(object)
Arguments
object: list returned by the measure_compare function.
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
### Plot the bias
bias_plot(measure_model)
Plot used to visualize the recalibration of the new method after estimating the bias
Description
This function allows the visualization of the bias-corrected values (i.e. recalibrated values, variable y1_corr) of the new measurement method.
Usage
compare_plot(object)
Arguments
object: list returned by the measure_compare function.
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
### Plot the recalibrated values of the new method
compare_plot(measure_model)
Simulated dataset 1
Description
In the simulated dataset 1, each subject has 1 to 3 measurement values from the new method and 10 to 20 measurement values from the reference method. Compared to the reference method, the new method has a differential bias of 4 and a proportional bias of 0.8. The variance of the new method is smaller than that of the reference method.
Usage
data1
Format
data1: an object of class data.frame with 1468 rows and 3 columns.
Details
A data frame with 3 variables:
- id: identification number for subjects
- y1: values from the new measurement method
- y2: values from the reference method
Dataset 1 was created based on the following equations:
y_{1i} = 4 + 0.8 x_i + \varepsilon_{1i}, \quad \varepsilon_{1i} \mid x_i \sim N(0, (0.2 x_i)^2)
y_{2i} = x_i + \varepsilon_{2i}, \quad \varepsilon_{2i} \mid x_i \sim N(0, (1.75 + 0.08 x_i)^2)
x_i \sim Uniform[25, 45]
for i = 1, \ldots, 100, where the number of repeated measurements for each subject i was n_{2i} \sim Uniform[10, 20] for the reference standard and n_{1i} \sim Uniform[1, 3] for the new measurement method.
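The generative model above can be sketched directly in R. The code below only illustrates the data-generating process; it is not the script actually used to create data1, and all variable names are ours.

```r
## Illustrative sketch of the data-generating process for data1.
set.seed(123)
n  <- 100
x  <- runif(n, 25, 45)                     # latent trait x_i
n1 <- sample(1:3,  n, replace = TRUE)      # repeats per subject, new method
n2 <- sample(10:20, n, replace = TRUE)     # repeats per subject, reference

## New method: y1 = 4 + 0.8 * x + e1, with sd(e1 | x) = 0.2 * x
i1 <- rep(seq_len(n), times = n1)          # subject index per measurement
y1 <- 4 + 0.8 * x[i1] + rnorm(length(i1), sd = 0.2 * x[i1])

## Reference method: y2 = x + e2, with sd(e2 | x) = 1.75 + 0.08 * x
i2 <- rep(seq_len(n), times = n2)
y2 <- x[i2] + rnorm(length(i2), sd = 1.75 + 0.08 * x[i2])
```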
Simulated dataset 2
Description
In the simulated dataset 2, each subject has 10 to 20 measurement values from the new method and 10 to 20 measurement values from the reference method. Compared to the reference method, the new method has a differential bias of 4 and a proportional bias of 0.8. The variance of the new method is smaller than that of the reference method.
Usage
data2
Format
data2: an object of class data.frame with 1680 rows and 3 columns.
Details
A data frame with 3 variables:
- id: identification number for subjects
- y1: values from the new measurement method
- y2: values from the reference method
Dataset 2 was created based on the following equations:
y_{1i} = 4 + 0.8 x_i + \varepsilon_{1i}, \quad \varepsilon_{1i} \mid x_i \sim N(0, (0.2 x_i)^2)
y_{2i} = x_i + \varepsilon_{2i}, \quad \varepsilon_{2i} \mid x_i \sim N(0, (1.75 + 0.08 x_i)^2)
x_i \sim Uniform[20, 100]
for i = 1, \ldots, 100, where the number of repeated measurements for each subject i was n_{2i} \sim Uniform[10, 20] for the reference standard and n_{1i} \sim Uniform[10, 20] for the new measurement method.
Simulated dataset 3
Description
In the simulated dataset 3, each subject has 10 to 20 measurement values from the new method and 10 to 20 measurement values from the reference method. Compared to the reference method, the new method has a differential bias of 1 and a proportional bias of 0.9. The variance of the new method is smaller than that of the reference method.
Usage
data3
Format
data3: an object of class data.frame with 1682 rows and 3 columns.
Details
A data frame with 3 variables:
- id: identification number for subjects
- y1: values from the new measurement method
- y2: values from the reference method
Dataset 3 was created based on the following equations:
y_{1i} = 1 + 0.9 x_i + \varepsilon_{1i}, \quad \varepsilon_{1i} \mid x_i \sim N(0, (1 + 0.04 x_i)^2)
y_{2i} = x_i + \varepsilon_{2i}, \quad \varepsilon_{2i} \mid x_i \sim N(0, (1.75 + 0.08 x_i)^2)
x_i \sim Uniform[20, 100]
for i = 1, \ldots, 100, where the number of repeated measurements for each subject i was n_{2i} \sim Uniform[10, 20] for the reference standard and n_{1i} \sim Uniform[10, 20] for the new measurement method.
Estimation of the amount of bias of the new measurement method relative to the reference method
Description
measure_compare() implements the methodology reported in the paper:
Taffé P. Effective plots to assess bias and precision in method comparison studies. Stat Methods Med Res 2018;27:1650-1660.
Other relevant references:
Taffé P, Peng M, Stagg V, Williamson T. Biasplot: A package to effective plots to assess bias and precision in method comparison studies. Stata J 2017;17:208-221.
Taffé P, Peng M, Stagg V, Williamson T. MethodCompare: An R package to assess bias and precision in method comparison studies. Stat Methods Med Res 2019;28:2557-2565.
Taffé P, Halfon P, Halfon M. A new statistical methodology to assess bias and precision overcomes the defects of the Bland & Altman method. J Clin Epidemiol 2020;124:1-7.
Taffé P. Assessing bias, precision, and agreement in method comparison studies. Stat Methods Med Res 2020;29:778-796.
Taffé P. When can the Bland-Altman limits of agreement method be used and when it should not be used. J Clin Epidemiol 2021;137:176-181.
Usage
measure_compare(
data,
new = "y1",
ref = "y2",
id = "id",
nb_simul = 1000,
if_value = NULL
)
Arguments
data: a required data frame containing the subject identification numbers (id) and the measurements of the new (y1) and reference (y2) methods.
new: an optional string. The column name containing the measurements of the new measurement method.
ref: an optional string. The column name containing the measurements of the reference method (at least two measurements per subject).
id: an optional string. The column name containing the subject identification numbers.
nb_simul: an optional number. The number of simulations used for the simultaneous confidence bands.
if_value: an optional number. Restrict the estimation to observed measurements greater than the provided value.
Value
The function returns a list with the following items:
- models: a list of models fitted in the estimation procedure
- data: the original data frame with renamed columns and additional computed data
- sim_params: estimated model coefficients used afterward
- nb_simul: the number of simulations used for the confidence band simulations
- bias: differential and proportional biases for the new method and the associated 95% confidence intervals
- methods: a list of the method names provided by the user
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
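The returned list can be inspected directly; the item names below are those documented in the Value section above.

```r
## Inspect the fitted object returned by measure_compare()
library(MethodCompare)
data(data1)

fit <- measure_compare(data1, nb_simul = 100)

fit$bias      # differential and proportional biases with 95% CIs
fit$nb_simul  # number of simulations used for the confidence bands
fit$methods   # method names as provided by the user
```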
Plot the mean squared errors
Description
This function draws the "MSE plot", which is used to compare the precision of the two measurement methods without recalibrating the new method. It is obtained by graphing the mean squared errors of y1 (new method) and y2 (reference method) versus the BLUP of the latent trait, x, along with their 95% simultaneous confidence bands.
Usage
mse(object, rarea = FALSE)
Arguments
object: list returned by the measure_compare function.
rarea: if TRUE, draw the confidence bands as shaded areas.
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
### Plot the mean squared errors
mse(measure_model)
Plot the percentage agreement before recalibration
Description
This function draws the "percentage agreement plot" before recalibration, which shows the amount of percentage agreement. It is obtained by graphing the percentage agreement index before recalibration versus the BLUP of the latent trait, x, along with its 95% simultaneous confidence bands.
Usage
pct_agreement0(object)
Arguments
object: list returned by the measure_compare function.
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
### Plot the percentage agreement without recalibration
pct_agreement0(measure_model)
Plot the percentage agreement after recalibration
Description
This function draws the "percentage agreement plot" after recalibration, which shows the amount of percentage agreement. It is obtained by graphing the percentage agreement index after recalibration versus the BLUP of the latent trait, x, along with its 95% simultaneous confidence bands.
Usage
pct_agreement1(object)
Arguments
object: list returned by the measure_compare function.
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
### Plot the percentage agreement after recalibration
pct_agreement1(measure_model)
Plot the precision of the methods
Description
This function draws the "precision plot", which allows a visual comparison of the precision (i.e., the standard deviation) of the new measurement method with that of the reference standard. It is obtained by creating a scatter plot of the estimated standard deviations, along with their 95% simultaneous confidence bands, versus the best linear unbiased prediction (BLUP) of the true latent trait, x.
Usage
precision_plot(object, object2 = NULL, log = FALSE, rarea = FALSE)
Arguments
object: list returned by the measure_compare function.
object2: an optional list returned by the measure_compare function. If provided, a second precision estimate is plotted.
log: if TRUE, plot the standard deviations on a logarithmic scale.
rarea: if TRUE, draw the confidence bands as shaded areas.
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
### Plot the precision of the two methods
precision_plot(measure_model)
Plot the square root of the mean squared errors
Description
This function draws the "sqrt(MSE) plot", which is used to compare the precision of the two measurement methods without recalibrating the new method. It is obtained by graphing the square root of the mean squared errors of y1 (new method) and y2 (reference method) versus the BLUP of the latent trait, x, along with their 95% simultaneous confidence bands.
Usage
sqrt_mse(object, rarea = FALSE)
Arguments
object: list returned by the measure_compare function.
rarea: if TRUE, draw the confidence bands as shaded areas.
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
### Plot the square root mean squared errors
sqrt_mse(measure_model)
Plot total bias
Description
This function draws the "total bias plot", which is used to visually assess the amount of bias. It is obtained by graphing the estimated total bias versus the BLUP of the latent trait, x, along with the 95% simultaneous confidence bands.
Usage
total_bias_plot(object, object2 = NULL, rarea = FALSE)
Arguments
object: list returned by the measure_compare function.
object2: an optional list returned by the measure_compare function. If provided, a second total bias estimate is plotted.
rarea: if TRUE, draw the confidence bands as shaded areas.
Examples
### Load the data
data(data1)
### Analysis
measure_model <- measure_compare(data1, nb_simul=100)
### Plot the total bias
total_bias_plot(measure_model)