Version: | 2.1-0 |
Date: | 2022-04-19 |
Title: | Modelling Spatial Extremes |
Depends: | R (≥ 1.8.0) |
Imports: | maps, fields |
Description: | Tools for the statistical modelling of spatial extremes using max-stable processes, copula or Bayesian hierarchical models. More precisely, this package allows (conditional) simulations from various parametric max-stable models, analysis of the extremal spatial dependence, the fitting of such processes using composite likelihoods or least squares (simple max-stable processes only), model checking and selection, and prediction. Other approaches (although not completely in agreement with the extreme value theory) are available, such as the use of (spatial) copula and Bayesian hierarchical models assuming the so-called conditional independence assumption. The latter approach is handled through an (efficient) Gibbs sampler. Some key references: Davison et al. (2012) <doi:10.1214/11-STS376>, Padoan et al. (2010) <doi:10.1198/jasa.2009.tm08577>, Dombry et al. (2013) <doi:10.1093/biomet/ass067>. |
License: | GPL-2 | GPL-3 [expanded from: GPL (≥ 2)] |
URL: | http://spatialextremes.r-forge.r-project.org/ |
NeedsCompilation: | yes |
Packaged: | 2022-04-19 07:58:44 UTC; mathieu |
Author: | Mathieu Ribatet [aut, cre], Richard Singleton [ctb], R Core team [ctb] |
Maintainer: | Mathieu Ribatet <mathieu.ribatet@ec-nantes.fr> |
Repository: | CRAN |
Date/Publication: | 2022-04-19 10:22:42 UTC |
Deviance Information Criterion
Description
This function computes the Deviance Information Criterion (DIC), and related quantities, which is a hierarchical modeling generalization of the Akaike Information Criterion. It is useful for Bayesian model selection.
Usage
DIC(object, ..., fun = "mean")
Arguments
object |
An object of class |
... |
Optional arguments. Not implemented. |
fun |
A character string giving the name of the function to be used to summarize the Markov chain. The default is to consider the posterior mean. |
Details
The deviance is

D(\theta) = -2 \log \pi(y \mid \theta),

where y are the data, \theta are the unknown parameters of the model and \pi(y \mid \theta) is the likelihood function. The expected deviance, a measure of how well the model fits the data, is then

\overline{D} = {\rm E}_{\theta}[D(\theta)],

while the effective number of parameters is

p_D = \overline{D} - D(\theta^*),

where \theta^* is a point estimate of the posterior distribution, e.g., the posterior mean. Finally, the DIC is given by

{\rm DIC} = p_D + \overline{D}.

In accordance with the AIC, models with smaller DIC should be preferred to models with larger DIC. Roughly speaking, differences of more than 10 might rule out the model with the higher DIC, while differences between 5 and 10 are substantial.
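These quantities can be assembled directly from posterior samples; a minimal sketch, assuming dev holds the deviance evaluated at each state of the Markov chain and dev.hat the deviance at the posterior point estimate (both hypothetical objects):
## Hypothetical inputs: 'dev' = deviance at each MCMC state,
## 'dev.hat' = deviance at the posterior point estimate (e.g. posterior mean)
Dbar <- mean(dev)       ## expected deviance
pD <- Dbar - dev.hat    ## effective number of parameters
DIC <- Dbar + pD        ## deviance information criterion
c(DIC = DIC, eNoP = pD, Dbar = Dbar)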
Value
A vector of length 3 giving the DIC, the effective number of parameters eNoP and an estimate of the expected deviance Dbar.
Author(s)
Mathieu Ribatet
References
Spiegelhalter, D. J., Best, N. G., Carlin, B. P. and Van Der Linde, A. (2002) Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society: Series B 64, 583–639.
See Also
Examples
## Generate realizations from the model
n.site <- 15
n.obs <- 35
coord <- cbind(lon = runif(n.site, -10, 10), lat = runif(n.site, -10 , 10))
gp.loc <- rgp(1, coord, "powexp", sill = 4, range = 20, smooth = 1)
gp.scale <- rgp(1, coord, "powexp", sill = 0.4, range = 5, smooth = 1)
gp.shape <- rgp(1, coord, "powexp", sill = 0.01, range = 10, smooth = 1)
locs <- 26 + 0.5 * coord[,"lon"] + gp.loc
scales <- 10 + 0.2 * coord[,"lat"] + gp.scale
shapes <- 0.15 + gp.shape
data <- matrix(NA, n.obs, n.site)
for (i in 1:n.site)
data[,i] <- rgev(n.obs, locs[i], scales[i], shapes[i])
loc.form <- y ~ lon
scale.form <- y ~ lat
shape.form <- y ~ 1
hyper <- list()
hyper$sills <- list(loc = c(1,8), scale = c(1,1), shape = c(1,0.02))
hyper$ranges <- list(loc = c(2,20), scale = c(1,5), shape = c(1, 10))
hyper$smooths <- list(loc = c(1,1/3), scale = c(1,1/3), shape = c(1, 1/3))
hyper$betaMeans <- list(loc = rep(0, 2), scale = c(9, 0), shape = 0)
hyper$betaIcov <- list(loc = solve(diag(c(400, 100))),
scale = solve(diag(c(400, 100))),
shape = solve(diag(c(10), 1, 1)))
## We will use an exponential covariance function so the jump sizes for
## the shape parameter of the covariance function are null.
prop <- list(gev = c(2.5, 1.5, 0.3), ranges = c(40, 20, 20), smooths = c(0,0,0))
start <- list(sills = c(4, .36, 0.009), ranges = c(24, 17, 16), smooths
= c(1, 1, 1), beta = list(loc = c(26, 0), scale = c(10, 0),
shape = c(0.15)))
mc <- latent(data, coord, loc.form = loc.form, scale.form = scale.form,
shape.form = shape.form, hyper = hyper, prop = prop, start = start,
n = 500)
DIC(mc)
The Generalized Extreme Value Distribution
Description
Density, distribution function, quantile function and random generation for the GEV distribution with location equal to 'loc', scale equal to 'scale' and shape equal to 'shape'.
Usage
rgev(n, loc = 0, scale = 1, shape = 0)
pgev(q, loc = 0, scale = 1, shape = 0, lower.tail = TRUE)
qgev(p, loc = 0, scale = 1, shape = 0, lower.tail = TRUE)
dgev(x, loc = 0, scale = 1, shape = 0, log = FALSE)
Arguments
x , q |
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations. |
loc |
vector of the location parameters. |
scale |
vector of the scale parameters. |
shape |
a numeric of the shape parameter. |
lower.tail |
logical; if TRUE (default), probabilities are P[X <= x], otherwise P[X > x]. |
log |
logical; if TRUE, probabilities p are given as log(p). |
Value
If 'loc', 'scale' and 'shape' are not specified they assume the default values of '0', '1' and '0', respectively.
The GEV distribution function with loc = u, scale = \sigma and shape = \xi is

G(x) = \exp\left[-\left\{1 + \xi \frac{x - u}{\sigma} \right\}^{-1 / \xi} \right]

for 1 + \xi (x - u) / \sigma > 0, where \sigma > 0. If \xi = 0, the distribution is defined by continuity and corresponds to the Gumbel distribution.
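As a quick numerical check of the expression above (illustration only), pgev can be compared with the closed-form distribution function:
library(SpatialExtremes)
## compare pgev() with the closed-form GEV distribution function
x <- 3; u <- 1; sigma <- 2; xi <- 0.2
pgev(x, loc = u, scale = sigma, shape = xi)
exp(-(1 + xi * (x - u) / sigma)^(-1 / xi))  ## should agree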
Examples
dgev(0.1)
rgev(100, 1, 2, 0.2)
qgev(seq(0.1, 0.9, 0.1), 1, 0.5, -0.2)
pgev(12.6, 2, 0.5, 0.1)
The Generalized Pareto Distribution
Description
Density, distribution function, quantile function and random generation for the GP distribution with location equal to 'loc', scale equal to 'scale' and shape equal to 'shape'.
Usage
rgpd(n, loc = 0, scale = 1, shape = 0)
pgpd(q, loc = 0, scale = 1, shape = 0, lower.tail = TRUE, lambda = 0)
qgpd(p, loc = 0, scale = 1, shape = 0, lower.tail = TRUE, lambda = 0)
dgpd(x, loc = 0, scale = 1, shape = 0, log = FALSE)
Arguments
x , q |
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations. |
loc |
vector of the location parameters. |
scale |
vector of the scale parameters. |
shape |
a numeric of the shape parameter. |
lower.tail |
logical; if TRUE (default), probabilities are P[X <= x], otherwise P[X > x]. |
log |
logical; if TRUE, probabilities p are given as log(p). |
lambda |
a single probability - see the "value" section. |
Value
If 'loc', 'scale' and 'shape' are not specified they assume the default values of '0', '1' and '0', respectively.
The GP distribution function with loc = u, scale = \sigma and shape = \xi is

G(x) = 1 - \left[ 1 + \frac{\xi (x - u)}{\sigma} \right]^{-1/\xi}

for 1 + \xi (x - u) / \sigma > 0 and x > u, where \sigma > 0. If \xi = 0, the distribution is defined by continuity and corresponds to the exponential distribution.
By definition, the GP distribution models exceedances above a threshold. In particular, the G function is a suitable candidate for modelling

\Pr\left[ X \geq x \mid X > u \right] = 1 - G(x)

for u large enough.
However, it may also be useful to model the "non-conditional" quantiles, that is, those related to \Pr[X \leq x]. Using the definition of conditional probability, one has

\Pr\left[ X \geq x \right] = \left(1 - \lambda\right) \left( 1 + \xi \frac{x - u}{\sigma}\right)^{-1/\xi},

where \lambda = \Pr[X \leq u]. When \lambda = 0, the "conditional" distribution coincides with the "non-conditional" distribution.
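As a quick check of the relation above (illustration only, assuming lambda enters pgpd exactly as stated):
library(SpatialExtremes)
## with lambda = Pr[X <= u], the upper tail probability should equal
## (1 - lambda) * (1 + xi * (x - u) / sigma)^(-1 / xi)
x <- 3; u <- 1; sigma <- 2; xi <- 0.2; lambda <- 0.9
1 - pgpd(x, loc = u, scale = sigma, shape = xi, lambda = lambda)
(1 - lambda) * (1 + xi * (x - u) / sigma)^(-1 / xi)  ## should agree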
Examples
dgpd(0.1)
rgpd(100, 1, 2, 0.2)
qgpd(seq(0.1, 0.9, 0.1), 1, 0.5, -0.2)
pgpd(12.6, 2, 0.5, 0.1)
##for non conditional quantiles
qgpd(seq(0.9, 0.99, 0.01), 1, 0.5, -0.2, lambda = 0.9)
pgpd(2.6, 2, 2.5, 0.25, lambda = 0.5)
Internal functions and methods for the SpatialExtremes package.
Description
A set of functions that should not be used directly by the user or are not documented yet. For methods, users should usually use the generic functions, which call the appropriate method.
Author(s)
Mathieu Ribatet
Analysis of Spatial Extremes
Description
The package SpatialExtremes aims to provide tools for the analysis of spatial extremes. Currently, the package uses the max-stable process framework for the modelling of spatial extremes.
Max-stable processes are the extension of extreme value theory to random fields. Consequently, they are good candidates for the analysis of spatial extremes. The strategy used in this package is to fit max-stable processes to data using composite likelihood.
In the future, the package will allow for non-stationarity as well as other approaches to model spatial extremes, namely latent variable and copula based approaches.
A package vignette has been written to help new users. It can be viewed, from the R console, by invoking vignette("SpatialExtremesGuide").
Details
The package provides the following main tools (a minimal end-to-end sketch follows the list):
- rgp, rmaxstab, rmaxlin, rcopula: simulate Gaussian, max-stable, max-linear and copula based random fields,
- condrgp, condrmaxlin: conditional simulations for Gaussian and max-linear processes,
- fitspatgev: fits a spatial GEV model to data,
- fitmaxstab, lsmaxstab: fit max-stable processes to data,
- latent: draws a Markov chain from a Bayesian hierarchical model for spatial extremes,
- predict: allows predictions for fitted max-stable processes,
- map, condmap: plot a map for GEV parameters as well as return levels - or conditional return levels,
- madogram, fmadogram, lmadogram: are (kinds of) variograms devoted to extremes,
- fitextcoeff: estimates semi-parametrically the extremal coefficient,
- extcoeff: plots the evolution of the extremal coefficient from a fitted max-stable process,
- rbpspline: fits a penalized spline with radial basis functions,
- gev2frech, frech2gev: transform GEV (resp. unit Frechet) observations to unit Frechet (resp. GEV) ones,
- distance: computes the distance between each pair of locations,
- profile, profile2d: compute the profile composite likelihood,
- covariance, variogram: compute the covariance/semivariogram function.
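As a rough illustration of how these tools fit together, here is a minimal end-to-end sketch (not part of the original documentation): simulate a max-stable process, fit it by pairwise likelihood and plot the fitted extremal coefficient function.
library(SpatialExtremes)
## simulate a max-stable process with unit Frechet margins at 20 sites...
coord <- matrix(runif(2 * 20, 0, 10), ncol = 2)
colnames(coord) <- c("lon", "lat")
data <- rmaxstab(40, coord, cov.mod = "whitmat", nugget = 0, range = 3, smooth = 1)
## ...fit it by pairwise likelihood and inspect the extremal dependence
fit <- fitmaxstab(data, coord, "whitmat", nugget = 0)
extcoeff(fit)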
Acknowledgement
The development of the package has been financially supported by the Competence Center Environment and Sustainability (CCES) and more precisely within the EXTREMES project.
Author(s)
Mathieu Ribatet
Takeuchi's information criterion
Description
Computes Takeuchi's information criterion, the analogue of the AIC when the model is misspecified. It is useful for model selection with composite likelihoods.
Usage
TIC(object, ..., k = 2)
Arguments
object |
An object of class |
... |
Additional objects of class |
k |
Numeric. The penalty per parameter to be used. The case k = 2 (default) corresponds to the classical TIC and |
Details
TIC works like the AIC: when comparing models, the one with the lowest TIC score is preferred.
Value
Numeric.
Author(s)
Mathieu Ribatet
References
Gao, X. and Song, P. X.-K. (2009) Composite likelihood Bayesian information criteria for model selection in high dimensional data. Preprint.
Sakamoto, Y., Ishiguro, M., and Kitagawa G. (1986) Akaike Information Criterion Statistics. D. Reidel Publishing Company.
Varin, C. and Vidoni, P. (2005) A note on composite likelihood inference and model selection. Biometrika 92(3):519–528.
See Also
Examples
##Define the coordinate of each location
n.site <- 50
locations <- matrix(runif(2*n.site, 0, 100), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(40, locations, cov.mod = "whitmat", nugget = 0.2, range =
30, smooth = 0.5)
M0 <- fitmaxstab(data, locations, "powexp", fit.marge = FALSE)
M1 <- fitmaxstab(data, locations, "cauchy", fit.marge = FALSE)
TIC(M0, M1)
TIC(M0, M1, k = log(nrow(data)))
Summer/Winter annual maxima/minima temperature in continental US.
Description
Summer maxima/Winter minima temperatures over the years 1911–2010 observed at 424 weather stations located in continental USA.
Usage
data(USHCNTemp)
Format
This data set contains three R objects: 'maxima.summer', 'minima.winter' and 'metadata'. 'maxima.summer' is a 100 by 424 matrix giving the temperature in degrees; each column corresponds to one location. 'minima.winter' is a 99 by 424 matrix giving the temperature in degrees; each column corresponds to one location. 'metadata' is a 424 by 5 data frame giving the station identifier, longitude, latitude, elevation and state for each station.
Author(s)
Mathieu Ribatet
Examples
data(USHCNTemp)
##require(maps) ## <<-- to plot US borders
maps::map("usa")
plot(metadata[,2:3], pch = 15)
Anova Tables
Description
Computes analysis of deviance for objects of class 'maxstab' or 'spatgev'.
Usage
## S3 method for class 'maxstab'
anova(object, object2, method = "RJ", square = "chol", ...)
## S3 method for class 'spatgev'
anova(object, object2, method = "RJ", square = "chol", ...)
Arguments
object , object2 |
Two objects of class 'maxstab' or 'spatgev'. |
method |
Character string. Must be one of "CB" or "RJ" for the
Chandler and Bate or the Rotnitzky and Jewell approaches
respectively. See function
|
square |
The choice for the matrix square root. This is only useful for the 'CB' method. Must be one of 'chol' (Cholesky) or 'svd' (Singular Value Decomposition). |
... |
Other options to be passed to the |
Details
As "maxstab" objects are fitted using pairwise likelihood, the model is misspecified. As a consequence, the likelihood ratio statistic is no longer \chi^2 distributed. To compute the anova table, we use the methodology proposed by Rotnitzky and Jewell to adjust the distribution of the likelihood ratio statistic.
Value
This function returns an object of class anova. These objects represent analysis-of-deviance tables.
Author(s)
Mathieu Ribatet
References
Chandler, R. E. and Bate, S. (2007) Inference for clustered data using the independence loglikelihood. Biometrika, 94, 167–183.
Rotnitzky, A. and Jewell, N. (1990) Hypothesis testing of regression parameters in semiparametric generalized linear models for cluster correlated data. Biometrika 77, 485–497.
See Also
fitmaxstab
, fitspatgev
,
profile
, TIC
Examples
##Define the coordinates of each location
n.site <- 30
locations <- matrix(rnorm(2*n.site, sd = sqrt(.2)), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(50, locations, cov.mod = "gauss", cov11 = 100, cov12 =
25, cov22 = 220)
##Now define the spatial model for the GEV parameters
param.loc <- -10 + 2 * locations[,2]
param.scale <- 5 + 2 * locations[,1] + locations[,2]^2
param.shape <- rep(0.2, n.site)
##Transform the unit Frechet margins to GEV
for (i in 1:n.site)
data[,i] <- frech2gev(data[,i], param.loc[i], param.scale[i],
param.shape[i])
##Define three models for the GEV margins to be fitted
loc.form <- loc ~ lat
scale.form <- scale ~ lon + I(lat^2)
shape.form <- shape ~ lon
M0 <- fitspatgev(data, locations, loc.form, scale.form, shape.form)
M1 <- fitspatgev(data, locations, loc.form, scale.form, shape.form,
shapeCoeff2 = 0)
##Model selection
anova(M0, M1)
anova(M0, M1, method = "CB", square = "svd")
Pairwise empirical and extremal concurrence probabilities
Description
This function computes the pairwise empirical or the pairwise extremal concurrence probability estimates.
Usage
concprob(data, coord, fitted, n.bins, add = FALSE, xlim = c(0,
max(dist)), ylim = c(min(0, concProb), max(1, concProb)), col = 1:2,
which = "kendall", xlab, ylab, block.size = floor(nrow(data)^(1/3)),
plot = TRUE, compute.std.err = FALSE, ...)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
fitted |
An object of class maxstab - usually the output of the
|
n.bins |
The number of bins to be used. If missing, pairwise concurrence probability estimates will be computed. |
xlim , ylim |
A numeric vector of length 2 specifying the x/y coordinate ranges. |
col |
The colors used for the points and optionally the fitted curve. |
which |
A character string specifying which estimator should be used. Should be one of "emp" (empirical), "boot" (bootstrap version) and "kendall" (kendall based). |
xlab , ylab |
The labels for the x/y-axis (may be missing). |
add |
Logical. If |
block.size |
Integer specifying the block size for the empirical and bootstrap estimator. |
plot |
Logical. If |
compute.std.err |
Logical. If |
... |
Additional options to be passed to the |
Value
This function returns invisibly a matrix containing the pairwise distances and the concurrence probability estimates.
Author(s)
Mathieu Ribatet
References
Dombry, C., Ribatet, M. and Stoev, S. (2017) Probabilities of concurrent extremes. To appear in JASA
See Also
Examples
n.site <- 25
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
n.obs <- 100
data <- rmaxstab(n.obs, locations, cov.mod = "whitmat", nugget = 0, range = 1,
smooth = 1.75)
##Compute the empirical concurrence probabilities
concprob(data, locations)
##Compare the empirical concurrence probabilities with a fitted max-stable process
fitted <- fitmaxstab(data, locations, "whitmat", nugget = 0)
concprob(fitted = fitted)
Maps of concurrence probabilities/expected concurrence cell area
Description
This function produces maps for concurrence probabilities or expected concurrence cell areas.
Usage
concurrencemap(data, coord, which = "kendall", type = "cell", n.grid =
100, col = cm.colors(64), plot = TRUE, plot.border = NULL,
compute.std.err = FALSE, ...)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
which |
A character string specifying which estimator should be used. Should be one of "emp" (empirical), "boot" (bootstrap version) and "kendall" (kendall based). |
type |
Either "cell" for cell areas or a integer between 1 and the number of locations specifying which site should be used as reference location—see Details. |
n.grid |
Integer specifying the size of the prediction grid. |
col |
The colors used to produce the map. |
plot |
Logical. If |
plot.border |
The name of an R function that can be used to plot
the border of the study region. If |
compute.std.err |
Logical. If |
... |
Additional options to be passed to the
|
Value
This function returns invisibly a list with the x and y coordinates and the corresponding values for the estimated concurrence probabilities or expected concurrence cell area.
Author(s)
Mathieu Ribatet
References
Dombry, C., Ribatet, M. and Stoev, S. (2015) Probabilities of concurrent extremes. Submitted
See Also
Examples
##require(maps) ## <<-- to plot US borders
data(USHCNTemp)
coord <- as.matrix(metadata[,2:3])
## Subset the station to have a fast example
n.site <- 30
chosen.site <- sample(nrow(coord), n.site)
coord <- coord[chosen.site,]
maxima.summer <- maxima.summer[,chosen.site]
## Define a function to plot the border
border <- function(add = FALSE) maps::map("usa", add = add)
par(mar = rep(0, 4))
## Produce a pairwise concurrence probability map w.r.t. station number 15
concurrencemap(maxima.summer, coord, type = 15, plot.border = border,
compute.std.err = TRUE)
## Produce the expected concurrence cell area
concurrencemap(maxima.summer, coord, plot.border = border)
Produces a conditional 2D map from a fitted max-stable process
Description
Produces a conditional 2D map from a fitted max-stable process.
Usage
condmap(fitted, fix.coord, x, y, covariates = NULL, ret.per1 = 100,
ret.per2 = ret.per1, col = terrain.colors(64), plot.contour = TRUE,
...)
Arguments
fitted |
An object of class |
fix.coord |
The spatial coordinates of the location from which the conditional quantile is computed. |
x , y |
Numeric vector defining the grid at which the levels are computed. |
covariates |
An array specifying the covariates at each grid
point defined by |
ret.per1 , ret.per2 |
Numerics giving the return period for which the quantile map is plotted. See details. |
col |
A list of colors such as that generated by 'rainbow', 'heat.colors', 'topo.colors', 'terrain.colors' or similar functions. |
plot.contour |
Logical. If |
... |
Several arguments to be passed to the |
Details
The function solves the following equation:

\Pr\left[Z(x_2) > z_2 \mid Z(x_1) > z_1 \right] = \frac{1}{T_2},

where z_1 = -1 / \log(1 - 1/T_1). In other words, given that the level z_1 is exceeded at location x_1, it computes the level z_2 that is expected to be exceeded on average once every T_2 years.
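For instance, the conditioning level on the unit Frechet scale for a 100-year return period is obtained directly from the formula above:
T1 <- 100
z1 <- -1 / log(1 - 1 / T1)  ## conditioning level z_1 on the unit Frechet scale
z1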
Value
A plot. Additionally, a list with the details for plotting the map is returned invisibly.
Author(s)
Mathieu Ribatet
See Also
map
, filled.contour
,
heatmap
, heat.colors
,
topo.colors
, terrain.colors
,
rainbow
Examples
##Define the coordinate of each location
n.site <- 30
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(50, locations, cov.mod = "whitmat", nugget = 0, range =
2, smooth = 1)
##Now define the spatial model for the GEV parameters
param.loc <- -10 - 4 * locations[,1] + locations[,2]^2
param.scale <- 5 + locations[,2] + locations[,1]^2 / 10
param.shape <- rep(.2, n.site)
##Transform the unit Frechet margins to GEV
for (i in 1:n.site)
data[,i] <- frech2gev(data[,i], param.loc[i], param.scale[i],
param.shape[i])
##Define a model for the GEV margins to be fitted
##shape ~ 1 stands for the GEV shape parameter is constant
##over the region
loc.form <- loc ~ lon + I(lat^2)
scale.form <- scale ~ lat + I(lon^2)
shape.form <- shape ~ 1
## 1- Fit a max-stable process
fitted <- fitmaxstab(data, locations, "whitmat", loc.form, scale.form,
shape.form, nugget = 0)
cond.coord <- c(5.1, 5.1)
condmap(fitted, cond.coord, seq(0, 10, length = 25), seq(0,10, length
=25), ret.per1 = 100, ret.per2 = 1.5)
points(t(cond.coord), pch = "*", col = 2, cex = 2)
Conditional simulation of Gaussian random fields
Description
This function generates conditional simulation of Gaussian random fields from the simple kriging predictor.
Usage
condrgp(n, coord, data.coord, data, cov.mod = "powexp", mean = 0, sill =
1, range = 1, smooth = 1, grid = FALSE, control = list())
Arguments
n |
Integer. The number of conditional simulations. |
coord |
A numeric vector or matrix specifying the coordinates
where the process has to be generated. If |
data.coord |
A numeric vector or matrix specifying the coordinates where the process is conditioned. |
data |
A numeric vector giving the conditioning observations. |
cov.mod |
A character string specifying the covariance function family. Must be one of "whitmat", "powexp", "cauchy" or "bessel" for the Whittle-Matern, the powered exponential, the Cauchy or the Bessel covariance families. |
mean , sill , range , smooth |
The mean, sill, range and smooth of the Gaussian process. |
grid |
Logical. Does |
control |
A named list passing options to the simulation method
of Gaussian processes — see |
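Conditional Gaussian simulations can be obtained by "conditioning by kriging": an unconditional simulation is corrected by two simple kriging predictors so that it honours the data. The sketch below illustrates this classical scheme using rgp and kriging; whether condrgp uses exactly this construction internally is an assumption.
library(SpatialExtremes)
x.obs <- runif(20, -10, 10)
x.new <- seq(-10, 10, length = 200)
obs <- rgp(1, x.obs, "whitmat", sill = 1, range = 3, smooth = 1)
## one unconditional simulation at the new and observed locations jointly
sim <- rgp(1, c(x.new, x.obs), "whitmat", sill = 1, range = 3, smooth = 1)
sim.new <- sim[1:length(x.new)]
sim.obs <- sim[-(1:length(x.new))]
## simple kriging of the data and of the simulated values at the observations
krig.data <- kriging(obs, x.obs, x.new, "whitmat", sill = 1, range = 3,
                     smooth = 1)$krig.est
krig.sim <- kriging(sim.obs, x.obs, x.new, "whitmat", sill = 1, range = 3,
                    smooth = 1)$krig.est
cond.sim <- krig.data + (sim.new - krig.sim)  ## honours the conditioning data
plot(x.new, cond.sim, type = "l"); points(x.obs, obs)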
Value
A list with components:
coord |
The coordinates at which the process was simulated; |
cond.sim |
The simulated process; |
data.coord |
The coordinates of the conditioning locations; |
data |
The conditioning observations; |
cov.mod |
The covariance function family; |
grid |
Does |
Author(s)
Mathieu Ribatet
See Also
Examples
## Several conditional simulations
n.site <- 50
n.sim <- 512
x.obs <- runif(n.site, -100, 100)
x.sim <- seq(-100, 100, length = n.sim)
data <- rgp(1, x.obs, "whitmat", sill = 1, range = 10, smooth = 0.75)
sim <- condrgp(5, x.sim, x.obs, data, "whitmat", sill = 1, range =
10, smooth = 0.75)
matplot(x.sim, t(sim$cond.sim), type = "l", lty = 1, xlab = "x", ylab =
expression(Y[cond](x)))
points(x.obs, data, pch = 21, bg = 1)
title("Five conditional simulations")
## Comparison between one conditional simulations and the kriging
## predictor on a grid
x.obs <- matrix(runif(2 * n.site, -100, 100), ncol = 2)
x <- y <- seq(-100, 100, length = 100)
x.sim <- cbind(x, y)
data <- rgp(1, x.obs, "whitmat", sill = 1, range = 50, smooth = 0.75)
krig <- kriging(data, x.obs, x.sim, "whitmat", sill = 1, range = 50,
smooth = 0.75, grid = TRUE)
sim <- condrgp(1, x.sim, x.obs, data, "whitmat", sill = 1, range = 50,
smooth = 0.75, grid = TRUE)
z.lim <- range(c(sim$cond.sim, data, krig$krig.est))
breaks <- seq(z.lim[1], z.lim[2], length = 65)
col <- heat.colors(64)
idx <- as.numeric(cut(data, breaks))
op <- par(mfrow = c(1,2))
image(x, y, krig$krig.est, col = col, breaks = breaks)
points(x.obs, bg = col[idx], pch = 21)
title("Kriging predictor")
image(x, y, sim$cond.sim, col = col, breaks = breaks)
points(x.obs, bg = col[idx], pch = 21)
title("Conditional simulation")
## Note how the background colors of the above points matches the ones
## returned by the image function
par(op)
Conditional simulation of max-linear random fields
Description
This function generates (approximate) conditional simulation of unit Frechet max-linear random fields. It can be used to get approximate conditional simulation for max-stable processes.
Usage
condrmaxlin(n, coord, data.coord, data, cov.mod = "gauss", ..., grid =
FALSE, p = 10000)
Arguments
n |
Integer. The number of conditional simulations. |
coord |
A numeric vector or matrix specifying the coordinates
where the process has to be generated. If |
data.coord |
A numeric vector or matrix specifying the coordinates where the process is conditioned. |
data |
A numeric vector giving the conditioning observations. |
cov.mod |
A character string specifying the max-stable model. See section Details. |
... |
The parameters of the max-stable model. See section Details. |
grid |
Logical. Does |
p |
An integer. The number of unit Frechet random variables used in the max-linear approximation. |
Details
Any unit Frechet max-stable process \{Z(x)\} can be approximated by a unit Frechet max-linear process, i.e.,

Z(x) \approx \max_{j=1, \ldots, p} f_j(x) Z_j,

where the f_j are non-negative deterministic functions, p is a sufficiently large integer and the Z_j are independent unit Frechet random variables. Note that to ensure unit Frechet margins, the following condition has to be satisfied:

\sum_{j=1, \ldots, p} f_j(x) = 1, \quad \mbox{for all } x.

Currently only the discretized Smith model is implemented, for which

f_j(x) = c(p) \varphi(x - u_j ; \Sigma),

where \varphi(\cdot; \Sigma) is the zero mean (multivariate) normal density with covariance matrix \Sigma, the u_j form a sequence of appropriately chosen deterministic points and c(p) is a constant ensuring unit Frechet margins.
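To make the construction concrete, the following small unconditional sketch builds a max-linear process on a one-dimensional grid with (hypothetical) Gaussian-density weights; because the weights are rescaled to sum to one at every location, the margins are exactly unit Frechet.
p <- 500
x <- seq(-10, 10, length = 200)     ## locations of interest
u <- seq(-12, 12, length = p)       ## deterministic points u_j
f <- sapply(u, function(uj) dnorm(x - uj, sd = sqrt(3)))
f <- f / rowSums(f)                 ## enforce sum_j f_j(x) = 1
Z <- -1 / log(runif(p))             ## independent unit Frechet variables
Zx <- apply(sweep(f, 2, Z, "*"), 1, max)  ## Z(x) = max_j f_j(x) Z_j
plot(x, log(Zx), type = "l")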
Value
A matrix containing observations from the required max-stable model. Each column represents one station. If grid = TRUE, the function returns an array of dimension nrow(coord) x nrow(coord) x n.
Warnings
It may happen that some conditional observations are not honored because the approximation of a max-stable process by a max-linear one is not accurate enough. Taking a larger p sometimes solves the issue.
Author(s)
Mathieu Ribatet
References
Wang, Y. and Stoev, S. A. (2011) Conditional Sampling for Max-Stable Random Fields. Advances in Applied Probability.
See Also
Examples
## One dimensional conditional simulations
n.cond.site <- 10
cond.coord <- runif(n.cond.site, -10, 10)
data <- rmaxlin(1, cond.coord, var = 3, p = 10000)
x <- seq(-10, 10, length = 250)
cond.sim <- condrmaxlin(5, x, cond.coord, data, var = 3)
matplot(x, t(log(cond.sim)), type = "l", lty = 1, pch = 1)
points(cond.coord, log(data))
## Two dimensional conditional simulation
cond.coord <- matrix(runif(2 * n.cond.site, -10, 10), ncol = 2)
data <- rmaxstab(1, cond.coord, "gauss", cov11 = 4, cov12 = 0, cov22 = 4)
x <- y <- seq(-10, 10, length = 75)
cond.sim <- condrmaxlin(4, cbind(x, y), cond.coord, data, cov11 = 4,
cov12 = 0, cov22 = 4, grid = TRUE, p = 2000)
## Note p is set to 2000 for CPU reasons but is likely to be too small
op <- par(mfrow = c(2, 2), mar = rep(1, 4))
for (i in 1:4){
image(x, y, log(cond.sim[,,i]), col = heat.colors(64), xaxt = "n", yaxt
= "n", bty = "n")
contour(x, y, log(cond.sim[,,i]), add = TRUE)
text(cond.coord[,1], cond.coord[,2], round(log(data), 2), col = 3)
}
par(op)
Conditional simulation of max-stable processes
Description
This function performs conditional simulation of various max-stable processes.
Usage
condrmaxstab(k = 1, coord, cond.coord, cond.data, cov.mod = "powexp",
..., do.sim = TRUE, thin = n.cond, burnin = 50, parts)
Arguments
k |
An integer. The number of conditional simulations to be generated. |
coord |
A vector or matrix that gives the coordinates of each location. Each row corresponds to one location - if any. |
cond.coord |
A vector or matrix that gives the coordinates of each conditional location. Each row corresponds to one location - if any. |
cond.data |
A vector that gives the conditional values at the corresponding conditioning locations. Each row corresponds to one location - if any. |
cov.mod |
A character string that gives the max-stable model. This must be one of "brown" for the Brown-Resnick model, or "whitmat", "cauchy", "powexp" and "bessel" for the Schlather model with the given correlation family. |
... |
The parameters of the max-stable model. See
|
do.sim |
A logical value. If |
thin |
A positive integer giving by which amount the generated Markov chain should be thinned. This is only useful when the number of conditioning locations is greater than 7. |
burnin |
A positive integer giving the duration of the burnin period of the Markov chain. |
parts |
A matrix giving the hitting scenarios. Each row corresponds to one hitting scenario. If missing, a Gibbs sampler will be used to generate the hitting scenarios. |
Details
The algorithm consists of three steps:
1. Draw a random partition \theta from \Pr\{\theta = \tau \mid Z(x) = z\};
2. Given the random partition, draw the extremal functions from \Pr\{\varphi^+ \in \cdot \mid Z(x) = z, \theta = \tau\};
3. Independently, draw the sub-extremal functions, i.e., \max_{i \ge 1} \varphi_i 1_{\{\varphi_i(x) < z\}}.
The distribution in Step 1 is usually intractable and in such cases a random scan Gibbs sampler is used to sample from it.
Value
This function returns a list whose components are
sim |
The conditional simulations. Beware: the first values correspond to the conditioning values. |
sub.ext.fct |
The values of the sub-extremal functions. |
ext.fct |
The values of the extremal functions. |
timings |
The timings in seconds for each step of the algorithm. |
Warning
This function can be extremely time consuming when the number of conditioning locations is large.
Author(s)
Mathieu Ribatet
References
Dombry, C. and Eyi-Minko, F. (2012) Regular conditional distributions of max infinitely divisible processes. Submitted.
Dombry, C., Eyi-Minko, F. and Ribatet, M. (2012) Conditional simulation of max-stable processes. To appear in Biometrika.
See Also
Examples
n.sim <- 50
n.cond <- 5
range <- 10
smooth <- 1.5
n.site <- 200
coord <- seq(-5, 5, length = n.site)
cond.coord <- seq(-4, 4, length = n.cond)
all.coord <- c(cond.coord, coord)
all.cond.data <- rmaxstab(1, all.coord, "powexp", nugget = 0, range = range,
smooth = smooth)
cond.data <- all.cond.data[1:n.cond]
ans <- condrmaxstab(n.sim, coord, cond.coord, cond.data, range = range,
smooth = smooth, cov.mod = "powexp")
idx <- order(all.coord)
matplot(coord, t(log(ans$sim)), type = "l", col = "grey", lty = 1,
xlab = expression(x), ylab = expression(Z(x)))
lines(all.coord[idx], log(all.cond.data)[idx])
points(cond.coord, log(cond.data), pch = 15, col = 2)
Defines and computes covariance functions
Description
This function defines and computes several covariance functions, either from a fitted "max-stable" model or by specifying the covariance parameters directly.
Usage
covariance(fitted, nugget, sill, range, smooth, smooth2 = NULL, cov.mod =
"whitmat", plot = TRUE, dist, xlab, ylab, col = 1, ...)
Arguments
fitted |
An object of class “maxstab”. Most often this will be
the output of the |
nugget , sill , range , smooth , smooth2 |
The nugget, sill, range and smooth parameters for the covariance function. May be missing if |
cov.mod |
Character string. The name of the covariance
model. Must be one of "whitmat", "cauchy", "powexp", "bessel" or
"caugen" for the Whittle-Matern, Cauchy, Powered Exponential, Bessel
and Generalized Cauchy models. May be missing if |
plot |
Logical. If |
dist |
A numeric vector corresponding to the distance at which the covariance function should be evaluated. May be missing. |
xlab , ylab |
The x-axis and y-axis labels. May be missing. |
col |
The color to be used for the plot. |
... |
Several option to be passed to the |
Details
Currently, the following covariance functions are defined: the Whittle-Matern, powered exponential (also known as "stable"), Cauchy, Bessel and generalized Cauchy models. These covariance functions are defined as follows for h > 0:
- Whittle-Matern
\gamma(h) = \sigma \frac{2^{1-\kappa}}{\Gamma(\kappa)} \left(\frac{h}{\lambda} \right)^{\kappa} K_{\kappa}\left(\frac{h}{\lambda} \right)
- Powered Exponential
\gamma(h) = \sigma \exp \left[- \left(\frac{h}{\lambda} \right)^{\kappa} \right]
- Cauchy
\gamma(h) = \sigma \left[1 + \left(\frac{h}{\lambda} \right)^2 \right]^{-\kappa}
- Bessel
\gamma(h) = \sigma \left(\frac{2 \lambda}{h}\right)^{\kappa} \Gamma(\kappa + 1) J_{\kappa}\left(\frac{h}{\lambda} \right)
- Generalized Cauchy
\gamma(h) = \sigma \left\{1 + \left(\frac{h}{\lambda} \right)^{\kappa_2} \right\}^{-\kappa / \kappa_2}
where \sigma, \lambda and \kappa are the sill, range and shape parameters, \Gamma is the gamma function, K_{\kappa} is the modified Bessel function of order \kappa and J_{\kappa} is the Bessel function of the first kind of order \kappa. In addition, a nugget effect can be set, that is, there is a jump at the origin, i.e., \gamma(0) = \nu + \sigma, where \nu is the nugget effect.
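For instance, the powered exponential model above can be evaluated directly from the formula (a standalone transcription, not a call to the package):
## powered exponential covariance gamma(h) = sigma * exp(-(h / lambda)^kappa)
h <- seq(0.01, 5, length = 100)
sigma <- 1; lambda <- 2; kappa <- 1.5   ## sill, range, shape
gamma.h <- sigma * exp(-(h / lambda)^kappa)
plot(h, gamma.h, type = "l", xlab = "h", ylab = expression(gamma(h)))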
Value
This function returns the covariance function. If dist is given, the covariance function is also computed at each distance in dist. If plot = TRUE, the covariance function is plotted.
Author(s)
Mathieu Ribatet
Examples
## 1- Calling covariance using fixed covariance parameters
covariance(nugget = 0, sill = 1, range = 1, smooth = 0.5, cov.mod = "whitmat")
covariance(nugget = 0, sill = 0.5, range = 1, smooth = 0.5, cov.mod = "whitmat",
dist = seq(0,5, 0.2), plot = FALSE)
## 2- Calling covariance from a fitted model
##Define the coordinate of each location
n.site <- 30
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(30, locations, cov.mod = "whitmat", nugget = 0, range =
3, smooth = 1)
##Fit a max-stable model
fitted <- fitmaxstab(data, locations, "whitmat", nugget = 0)
covariance(fitted, ylim = c(0, 1))
covariance(nugget = 0, sill = 1, range = 3, smooth = 1, cov.mod = "whitmat", add =
TRUE, col = 3)
title("Whittle-Matern covariance function")
legend("topright", c("Theo.", "Fitted"), lty = 1, col = c(3,1), inset =
.05)
Estimates the penalty coefficient from the cross-validation criterion
Description
Estimates the penalty coefficient from the cross-validation criterion.
Usage
cv(y, x, knots, degree, plot = TRUE, n.points = 150, ...)
Arguments
y |
The response vector. |
x |
A vector/matrix giving the values of the predictor
variable(s). If |
knots |
A vector giving the coordinates of the knots. |
degree |
The degree of the penalized smoothing spline. |
plot |
Logical. If |
n.points |
A numeric giving the number of CV computations needed to produce the plot. |
... |
Options to be passed to the |
Details
For every linear smoother, e.g. \hat{y} = S_\lambda y, the cross-validation criterion consists in minimizing the following quantity:

CV(\lambda) = \sum_{i=1}^n \left(\frac{y_i - \hat{y}_i}{1 - S_{\lambda,ii}} \right)^2,

where \lambda is the penalty coefficient, n the number of observations and S_{\lambda,ii} the i-th diagonal element of the matrix S_\lambda.
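A direct transcription of this criterion, assuming S is the smoother matrix S_\lambda and y the response vector (both hypothetical objects here):
## CV(lambda) for a linear smoother with smoother matrix S and response y
cv.score <- function(y, S) sum(((y - S %*% y) / (1 - diag(S)))^2)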
Value
A list with components 'penalty', 'cv' and 'nlm.code' which give the location of the minimum, the value of the cross-validation criterion at that point and the code returned by the nlm function - useful to assess convergence issues.
Author(s)
Mathieu Ribatet
References
Ruppert, D., Wand, M.P. and Carroll, R.J. (2003) Semiparametric Regression. Cambridge Series in Statistical and Probabilistic Mathematics.
See Also
Examples
n <- 200
x <- runif(n)
fun <- function(x) sin(3 * pi * x)
y <- fun(x) + rnorm(n, 0, sqrt(0.4))
knots <- quantile(x, prob = 1:(n/4) / (n/4 + 1))
cv(y, x, knots = knots, degree = 3)
Computes distance between pairs of locations
Description
This function computes the Euclidean distance or the distance vector for each pair of a set of spatial locations.
Usage
distance(coord, vec = FALSE)
Arguments
coord |
A matrix representing the coordinates of each location. Each row corresponds to one location. |
vec |
Logical. If |
Value
If vec = FALSE, this function returns a vector that gives the Euclidean distance for each pair of locations. Otherwise, it returns a matrix where each column corresponds to one dimension, i.e., longitude, latitude, ...
Author(s)
Mathieu Ribatet
See Also
Examples
coords <- cbind(seq(0, 10, by = 2), seq(10, 0, by = -2))
distance(coords)
distance(coords, vec = TRUE)
Plots the extremal coefficient
Description
Plots the evolution of the extremal coefficient as the coordinates vary.
Usage
extcoeff(fitted, cov.mod, param, n = 200, xlab, ylab, ...)
Arguments
fitted |
An object of class |
cov.mod |
A character string corresponding to the covariance model in the max-stable representation. Must be one of "gauss" for the Smith's model, or "whitmat", "cauchy" or "powexp" for the Whittle-Matern, Cauchy and Powered Exponential covariance families with the Schlather's model. May be missing if |
param |
Numeric vector of length 3. The parameters for the Smith's or Schlather model - i.e. c(cov11, cov12, cov22) or c(nugget, range, smooth). Please respect this order. |
n |
Numeric. |
xlab , ylab |
The x-axis and y-axis labels. May be missing. |
... |
Several options to be passed to the |
Value
A plot.
Author(s)
Mathieu Ribatet
See Also
Examples
## 1- Random field generation
n.site <- 30
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
data <- rmaxstab(60, locations, cov.mod = "whitmat", nugget = 0, range =
3, smooth = 1)
## 2- Fit a max-stable processes
schlather <- fitmaxstab(data, locations, "whitmat", nugget = 0)
## 3- Plot the extremal coefficient
extcoeff(schlather)
Fit a copula-based model to spatial extremes
Description
This function fits various copula-based models to spatial extremes data sets.
Usage
fitcopula(data, coord, copula = "gaussian", cov.mod = "whitmat",
loc.form, scale.form, shape.form, marg.cov = NULL, temp.cov = NULL,
temp.form.loc = NULL, temp.form.scale = NULL, temp.form.shape = NULL,
..., start, control = list(maxit = 10000), method = "Nelder", std.err =
TRUE, warn = TRUE, corr = FALSE)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
copula |
A character string. Must be one of "gaussian" and "student" for a Gaussian and Student copula. |
cov.mod |
A character string corresponding to the correlation function family used in the copula. Must be one of "whitmat", "cauchy", "powexp", "bessel" or "caugen" for the Whittle-Matern, the Cauchy, the Powered Exponential, the Bessel and the Generalized Cauchy correlation families. |
loc.form , scale.form , shape.form |
R formulas defining the
spatial linear model for the GEV parameters. May be missing. See
section Details of function |
marg.cov |
Matrix with named columns giving additional covariates
for the GEV parameters. If |
temp.cov |
Matrix with named columns giving additional *temporal* covariates for the GEV parameters. If |
temp.form.loc , temp.form.scale , temp.form.shape |
R formulas
defining the temporal trends for the GEV parameters. May be
missing. See section Details of function |
... |
Several arguments to be passed to the
|
start |
A named list giving the initial values for the
parameters over which the pairwise likelihood is to be minimized. If
|
control |
A list giving the control parameters to be passed to
the |
method |
The method used for the numerical optimisation
procedure. Must be one of |
std.err |
Logical. Should the standard errors be computed ? The
default is to return the standard errors, i.e., |
warn |
Logical. If |
corr |
Logical. If |
Value
This function returns a object of class copula
.
Warning
This function does not use max-stable copulas, and the use of non max-stable copulas for modelling spatial extremes is highly questionable. This function was mainly implemented for educational purposes and not for concrete modelling purposes.
Author(s)
Mathieu Ribatet
References
Davison, A.C., Padoan, S.A., Ribatet, M. (2010) Statistical Modelling of Spatial Extremes. Submitted to Statistical Science.
See Also
Examples
## Not run:
n.site <- 30
n.obs <- 50
coord <- matrix(runif(2 * n.site, -10, 10), ncol = 2)
colnames(coord) <- c("lon", "lat")
## Generate data from a Gaussian copula model
data <- rcopula(n.obs, coord, "gaussian", "powexp", nugget = 0, range = 4, smooth = 1.2)
## Transform the margins to GEV
locs <- -5 + coord[,"lon"] / 10
scales <- 10 + coord[,"lat"] / 2
shapes <- rep(0.2, n.site)
for (i in 1:n.site)
data[,i] <- frech2gev(data[,i], locs[i], scales[i], shapes[i])
## Fit a Gaussian copula model
## 1. Define trend surfaces
loc.form <- y ~ lon
scale.form <- y ~ lat
shape.form <- y ~ 1
## 2. Fit
M0 <- fitcopula(data, coord, "gaussian", "powexp", loc.form, scale.form,
shape.form, nugget = 0)
## End(Not run)
Estimates the covariance function for the Schlather's model
Description
Estimates the covariance function for the Schlather's model using non-parametric estimates of the pairwise extremal coefficients.
Usage
fitcovariance(data, coord, cov.mod, marge = "emp", control = list(),
..., start, weighted = TRUE)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
cov.mod |
A character string corresponding to the covariance model in the Schlather's model. Must be one of "whitmat", "cauchy", "powexp", "bessel" or "caugen" for the Whittle-Matern, the Cauchy, the Powered Exponential, the Bessel and the Generalized Cauchy correlation families. |
marge |
Character string specifying how margins are transformed
to unit Frechet. Must be one of "emp", "frech" or "mle" - see
function |
control |
The control arguments to be passed to the
|
... |
Optional arguments to be passed to the |
start |
A named list giving the initial values for the
parameters over which the weighted sum of square is to be
minimized. If |
weighted |
Logical. Should weighted least squares be used? |
Details
The fitting procedure is based on weighted least squares. More precisely, the fitting criterion is to minimize:

\sum_{i,j} \left(\frac{\tilde{\theta}_{i,j} - \hat{\theta}_{i,j}}{s_{i,j}}\right)^2,

where \tilde{\theta}_{i,j} is a non-parametric estimate of the extremal coefficient related to locations i and j, \hat{\theta}_{i,j} is the fitted extremal coefficient derived from the Schlather's model and s_{i,j} are the standard errors related to the estimates \tilde{\theta}_{i,j}.
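The criterion itself is straightforward to write down; in the sketch below, theta.np, theta.fit and std.err stand for hypothetical vectors of pairwise non-parametric estimates, fitted extremal coefficients and standard errors:
## weighted least squares objective over all pairs of locations
wls <- function(theta.np, theta.fit, std.err)
  sum(((theta.np - theta.fit) / std.err)^2)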
Value
An object of class maxstab.
Author(s)
Mathieu Ribatet
References
Smith, R. L. (1990) Max-stable processes and spatial extremes. Unpublished manuscript.
See Also
fitcovmat
, fitmaxstab
,
fitextcoeff
Examples
n.site <- 50
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulating a max-stable process using RandomFields
##This is the Schlather's approach
data <- rmaxstab(50, locations, cov.mod = "whitmat", nugget = 0, range =
30, smooth = 1)
fitcovariance(data, locations, "whitmat")
Estimates the covariance matrix for the Smith's model
Description
Estimates the covariance matrix for the Smith's model using non-parametric estimates of the pairwise extremal coefficients.
Usage
fitcovmat(data, coord, marge = "emp", iso = FALSE, control = list(),
..., start, weighted = TRUE)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
marge |
Character string specifying how margins are transformed
to unit Frechet. Must be one of "emp", "frech" or "mle" - see
function |
iso |
Logical. If |
control |
The control arguments to be passed to the
|
... |
Optional arguments to be passed to the |
start |
A named list giving the initial values for the
parameters over which the weighted sum of square is to be
minimized. If |
weighted |
Logical. Should weighted least squares be used? |
Details
The fitting procedure is based on weighted least squares. More precisely, the fitting criterion is to minimize:

\sum_{i,j} \left(\frac{\tilde{\theta}_{i,j} - \hat{\theta}_{i,j}}{s_{i,j}}\right)^2,

where \tilde{\theta}_{i,j} is a non-parametric estimate of the extremal coefficient related to locations i and j, \hat{\theta}_{i,j} is the fitted extremal coefficient derived from the Smith's model and s_{i,j} are the standard errors related to the estimates \tilde{\theta}_{i,j}.
Value
An object of class maxstab.
Author(s)
Mathieu Ribatet
References
Smith, R. L. (1990) Max-stable processes and spatial extremes. Unpublished manuscript.
See Also
fitcovariance
, fitmaxstab
,
fitextcoeff
Examples
n.site <- 50
n.obs <- 100
locations <- matrix(runif(2*n.site, 0, 40), ncol = 2)
colnames(locations) <- c("lon", "lat")
## Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(50, locations, cov.mod = "gauss", cov11 = 200, cov12 =
0, cov22 = 200)
fitcovmat(data, locations)
##Force an isotropic model
fitcovmat(data, locations, iso = TRUE)
Non parametric estimators of the extremal coefficient function
Description
Estimates non parametrically the extremal coefficient function.
Usage
fitextcoeff(data, coord, ..., estim = "ST", marge = "emp", prob = 0,
plot = TRUE, loess = TRUE, method = "BFGS", std.err = TRUE, xlab,
ylab, angles = NULL, identify = FALSE)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
... |
Additional options to be passed to the |
estim |
Character string specifying the estimator to be used. Must be one of "ST" (Schlather and Tawn) or "Smith". |
marge |
Character string specifying how margins are transformed to unit Frechet. Must be one of "emp", "mle" or "none" - see Details |
prob |
The probability related to the threshold. Only useful with
the |
plot |
Logical. If |
loess |
If |
method |
The optimizer used when fitting the GEV distribution to
data. See function |
std.err |
Logical. If |
xlab , ylab |
The x-axis and y-axis labels. May be missing. |
angles |
A numeric vector. A partition of the interval
|
identify |
Logical. If |
Details
During the estimation procedure, the data first need to be transformed to unit Frechet margins. This can be done in two different ways: by using the empirical CDF or the GEV ML estimates.
If marge = "emp"
, then the data are transformed using the
following relation:
z_i = - \frac{1}{\log (F(y_i))}
where y_i
are the observations available at location
i
, F
is the empirical CDF and z_i
are the
observations transformed to unit Frechet scale.
If marge = "mle"
, then the data are transformed using the MLE
of the GEV distribution - see function gev2frech
.
Lastly, if data are already supposed to be unit Frechet, then no
transformation is performed if one passed the option marge =
"frech"
.
If data
are already componentwise maxima, prob
should be
zero. Otherwise, users have to define a threshold z
(large
enough) where univariate extreme value arguments are relevant. We
define prob
such that \Pr[Z \leq z] = prob
.
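A minimal sketch of the empirical transformation to the unit Frechet scale described above; using rank/(n + 1) as the empirical CDF is an assumption made here to avoid infinite values:
## rank-based transform of one site's observations to the unit Frechet scale
to.frech <- function(y) -1 / log(rank(y) / (length(y) + 1))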
Value
Plots the extremal coefficient function and returns the points used for the plot. If loess = TRUE, the output is a list with components "ext.coeff" and "loess".
Author(s)
Mathieu Ribatet
References
Schlather, M. and Tawn, J. A. (2003) A dependence measure for multivariate and spatial extreme values: Properties and inference. Biometrika 90(1):139–156.
Smith, R. L. (1990) Max-stable processes and spatial extremes. Unpublished manuscript.
See Also
Examples
n.site <- 30
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(50, locations, cov.mod = "gauss", cov11 = 10, cov12 =
40, cov22 = 220)
##Plot the extremal coefficient function
op <- par(mfrow=c(1,2))
fitextcoeff(data, locations, estim = "Smith")
fitextcoeff(data, locations, angles = seq(-pi, pi, length = 4), estim = "Smith")
par(op)
Fits a max-stable process to data
Description
This function fits max-stable processes to data using pairwise likelihood. Two max-stable characterisations are available: the Smith and Schlather representations.
Usage
fitmaxstab(data, coord, cov.mod, loc.form, scale.form,
shape.form, marg.cov = NULL, temp.cov = NULL, temp.form.loc = NULL,
temp.form.scale = NULL, temp.form.shape = NULL, iso = FALSE, ...,
fit.marge = FALSE, warn = TRUE, method = "Nelder", start, control =
list(), weights = NULL, corr = FALSE, check.grad
= FALSE)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
cov.mod |
A character string corresponding to the covariance model in the max-stable representation. Must be one of "gauss" for the Smith's model; "whitmat", "cauchy", "powexp", "bessel" or "caugen" for the Whittle-Matern, the Cauchy, the Powered Exponential, the Bessel and the Generalized Cauchy correlation families with the Schlather's model; "brown" for Brown-Resnick processes. The geometric Gaussian and Extremal-t models with a Whittle-Matern correlation function can be fitted by passing respectively "gwhitmat" or "twhitmat". Other correlation function families are considered in a similar way. |
loc.form , scale.form , shape.form |
R formulas defining the spatial linear model for the GEV parameters. May be missing. See section Details. |
marg.cov |
Matrix with named columns giving additional covariates
for the GEV parameters. If |
temp.cov |
Matrix with named columns giving additional *temporal* covariates for the GEV parameters. If |
temp.form.loc , temp.form.scale , temp.form.shape |
R formulas defining the temporal trends for the GEV parameters. May be missing. See section Details. |
iso |
Logical. If |
... |
Several arguments to be passed to the
|
fit.marge |
Logical. If |
warn |
Logical. If |
method |
The method used for the numerical optimisation
procedure. Must be one of |
start |
A named list giving the initial values for the
parameters over which the pairwise likelihood is to be minimized. If
|
control |
A list giving the control parameters to be passed to
the |
weights |
A numeric vector specifying the weights in the pairwise
likelihood - and so has length the number of pairs. If |
corr |
Logical. If |
check.grad |
Logical. If |
Details
As spatial data often involve a large number of locations, it is impossible to write the joint distribution analytically. Consequently, the fitting procedure substitutes the pairwise likelihood for the "full likelihood".
Let L_{i,j}(x_{i,j}, \theta) denote the likelihood for sites i and j, where i = 1, \dots, N-1, j = i+1, \dots, N, N is the number of sites within the region and x_{i,j} are the joint observations for sites i and j. Then the pairwise likelihood PL(\theta) is defined by:

\ell_P = \log PL(\theta) = \sum_{i = 1}^{N-1} \sum_{j=i+1}^{N} \log L_{i,j} (x_{i,j}, \theta).

As the pairwise likelihood is an approximation of the "full likelihood", standard errors cannot be computed directly from the inverse of the Fisher information matrix. Instead, a sandwich estimate must be used to account for model misspecification, e.g.

\hat{\theta} \sim N(\theta, H^{-1} J H^{-1}),

where H is the Fisher information matrix (computed from the misspecified model) and J is the variance of the score function.
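The double sum defining the pairwise log-likelihood has a simple structure; in the sketch below, pair.loglik is a hypothetical function returning the bivariate log-likelihood contributions for a pair of sites:
## pairwise log-likelihood: sum the bivariate contributions over all pairs
pairwise.loglik <- function(data, theta, pair.loglik) {
  n.site <- ncol(data)
  ll <- 0
  for (i in 1:(n.site - 1))
    for (j in (i + 1):n.site)
      ll <- ll + sum(pair.loglik(data[, i], data[, j], theta))
  ll
}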
There are two different kinds of covariates: "spatial" and "temporal". A "spatial" covariate may have different values across stations but does not depend on time. For example, the coordinates of the stations are obviously "spatial". These "spatial" covariates should be used with marg.cov and loc.form, scale.form, shape.form.
A "temporal" covariate may have different values across time but does not depend on space. For example, the years when the annual maxima were recorded are "temporal". These "temporal" covariates should be used with temp.cov and temp.form.loc, temp.form.scale, temp.form.shape.
As a consequence, note that marg.cov must have K rows (K being the number of sites) while temp.cov must have n rows (n being the number of observations).
Value
This function returns a object of class maxstab
. Such objects
are list with components:
fitted.values |
A vector containing the estimated parameters. |
std.err |
A vector containing the standard errors. |
fixed |
A vector containing the parameters of the model that have been held fixed. |
param |
A vector containing all parameters (optimised and fixed). |
deviance |
The (pairwise) deviance at the maximum pairwise likelihood estimates. |
corr |
The correlation matrix. |
convergence , counts , message |
Components taken from the
list returned by |
data |
The data analysed. |
model |
The max-stable characterisation used. |
fit.marge |
A logical that specifies if the GEV margins were estimated. |
cov.fun |
The estimated covariance function - for the Schlather model only. |
extCoeff |
The estimated extremal coefficient function. |
cov.mod |
The covariance model for the spatial structure. |
Warning
When using response surfaces to model the GEV parameters spatially, the likelihood is pretty rough, so the general purpose optimization routines may fail. It is your responsibility to check whether the numerical optimization succeeded or not. I tried, as best as I could, to provide warning messages when the optimizers fail, but in some cases no warning will appear!
Author(s)
Mathieu Ribatet
References
Cox, D. R. and Reid, N. (2004) A note on pseudo-likelihood constructed from marginal densities. Biometrika 91, 729–737.
Demarta, S. and McNeil, A. (2005) The t copula and Related Copulas International Statistical Review 73, 111-129.
Gholam–Rezaee, M. (2009) Spatial extreme value: A composite likelihood. PhD Thesis. Ecole Polytechnique Federale de Lausanne.
Kabluchko, Z., Schlather, M. and de Haan, L. (2009) Stationary max-stable fields associated to negative definite functions Annals of Probability 37:5, 2042–2065.
Padoan, S. A. (2008) Computational Methods for Complex Problems in Extreme Value Theory. PhD Thesis. University of Padova.
Padoan, S. A., Ribatet, M. and Sisson, S. A. (2010) Likelihood-based inference for max-stable processes. Journal of the American Statistical Association (Theory and Methods) 105:489, 263-277.
Schlather, M. (2002) Models for Stationary Max-Stable Random Fields. Extremes 5:1, 33–44.
Smith, R. L. (1990) Max-stable processes and spatial extremes. Unpublished.
Examples
## Not run:
##Define the coordinate of each location
n.site <- 30
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(40, locations, cov.mod = "whitmat", nugget = 0, range = 3,
smooth = 0.5)
##Now define the spatial model for the GEV parameters
param.loc <- -10 + 2 * locations[,2]
param.scale <- 5 + 2 * locations[,1] + locations[,2]^2
param.shape <- rep(0.2, n.site)
##Transform the unit Frechet margins to GEV
for (i in 1:n.site)
data[,i] <- frech2gev(data[,i], param.loc[i], param.scale[i],
param.shape[i])
##Define a model for the GEV margins to be fitted
##shape ~ 1 stands for the GEV shape parameter is constant
##over the region
loc.form <- loc ~ lat
scale.form <- scale ~ lon + I(lat^2)
shape.form <- shape ~ 1
##Fit a max-stable process using the Schlather's model
fitmaxstab(data, locations, "whitmat", loc.form, scale.form,
shape.form)
## Model without any spatial structure for the GEV parameters
## Be careful this could be *REALLY* time consuming
fitmaxstab(data, locations, "whitmat")
## Fixing the smooth parameter of the Whittle-Matern family
## to 0.5 - i.e., considering the exponential family. We suppose the data
## are unit Frechet here.
fitmaxstab(data, locations, "whitmat", smooth = 0.5, fit.marge = FALSE)
## Fitting a penalized smoothing splines for the margins with the
## Smith's model
data <- rmaxstab(40, locations, cov.mod = "gauss", cov11 = 100, cov12 =
25, cov22 = 220)
## And transform it to ordinary GEV margins with a non-linear
## function
fun <- function(x)
2 * sin(pi * x / 4) + 10
fun2 <- function(x)
(fun(x) - 7 ) / 15
param.loc <- fun(locations[,2])
param.scale <- fun(locations[,2])
param.shape <- fun2(locations[,1])
##Transformation from unit Frechet to common GEV margins
for (i in 1:n.site)
data[,i] <- frech2gev(data[,i], param.loc[i], param.scale[i],
param.shape[i])
##Defining the knots, penalty, degree for the splines
n.knots <- 5
knots <- quantile(locations[,2], prob = 1:n.knots/(n.knots+1))
knots2 <- quantile(locations[,1], prob = 1:n.knots/(n.knots+1))
##Be careful: the choice of the penalty (i.e., the smoothing parameter)
##may strongly affect the result. Here we use p-splines for each GEV
##parameter - so it's really CPU demanding - but one can use 1 p-spline
##and 2 linear models.
##A simple linear model would clearly be faster...
loc.form <- y ~ rb(lat, knots = knots, degree = 3, penalty = .5)
scale.form <- y ~ rb(lat, knots = knots, degree = 3, penalty = .5)
shape.form <- y ~ rb(lon, knots = knots2, degree = 3, penalty = .5)
fitted <- fitmaxstab(data, locations, "gauss", loc.form, scale.form, shape.form,
control = list(ndeps = rep(1e-6, 24), trace = 10),
method = "BFGS")
fitted
op <- par(mfrow=c(1,3))
plot(locations[,2], param.loc, col = 2, ylim = c(7, 14),
ylab = "location parameter", xlab = "latitude")
plot(fun, from = 0, to = 10, add = TRUE, col = 2)
points(locations[,2], predict(fitted)[,"loc"], col = "blue", pch = 5)
new.data <- cbind(lon = seq(0, 10, length = 100), lat = seq(0, 10, length = 100))
lines(new.data[,1], predict(fitted, new.data)[,"loc"], col = "blue")
legend("topleft", c("true values", "predict. values", "true curve", "predict. curve"),
col = c("red", "blue", "red", "blue"), pch = c(1, 5, NA, NA), inset = 0.05,
lty = c(0, 0, 1, 1), ncol = 2)
plot(locations[,2], param.scale, col = 2, ylim = c(7, 14),
ylab = "scale parameter", xlab = "latitude")
plot(fun, from = 0, to = 10, add = TRUE, col = 2)
points(locations[,2], predict(fitted)[,"scale"], col = "blue", pch = 5)
lines(new.data[,1], predict(fitted, new.data)[,"scale"], col = "blue")
legend("topleft", c("true values", "predict. values", "true curve", "predict. curve"),
col = c("red", "blue", "red", "blue"), pch = c(1, 5, NA, NA), inset = 0.05,
lty = c(0, 0, 1, 1), ncol = 2)
plot(locations[,1], param.shape, col = 2,
ylab = "shape parameter", xlab = "longitude")
plot(fun2, from = 0, to = 10, add = TRUE, col = 2)
points(locations[,1], predict(fitted)[,"shape"], col = "blue", pch = 5)
lines(new.data[,1], predict(fitted, new.data)[,"shape"], col = "blue")
legend("topleft", c("true values", "predict. values", "true curve", "predict. curve"),
col = c("red", "blue", "red", "blue"), pch = c(1, 5, NA, NA), inset = 0.05,
lty = c(0, 0, 1, 1), ncol = 2)
par(op)
## End(Not run)
MLE for a spatial GEV model
Description
This function derives the MLE of a spatial GEV model.
Usage
fitspatgev(data, covariables, loc.form, scale.form, shape.form,
temp.cov = NULL, temp.form.loc = NULL, temp.form.scale = NULL,
temp.form.shape = NULL, ..., start, control = list(maxit = 10000),
method = "Nelder", warn = TRUE, corr = FALSE)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
covariables |
Matrix with named columns giving required covariates for the GEV parameter models. |
loc.form , scale.form , shape.form |
R formulas defining the spatial models for the GEV parameters. See section Details. |
temp.cov |
Matrix with named columns giving additional *temporal*
covariates for the GEV parameters. If |
temp.form.loc , temp.form.scale , temp.form.shape |
R formulas defining the temporal trends for the GEV parameters. May be missing. See section Details. |
start |
A named list giving the initial values for the
parameters over which the pairwise likelihood is to be minimized. If
|
... |
Several arguments to be passed to the
|
control |
The control argument to be passed to the
|
method |
The method used for the numerical optimisation
procedure. Must be one of |
warn |
Logical. If |
corr |
Logical. If |
Details
A kind of "spatial" GEV model can be defined by using response surfaces for the GEV parameters. For instance, the GEV location parameters are defined through the following equation:
\mu = X_\mu \beta_\mu,
where X_\mu is the design matrix and \beta_\mu is the vector of
regression parameters to be estimated. The GEV scale and shape
parameters are defined analogously.
The log-likelihood for the GEV spatial model is consequently defined as follows:
\ell(\beta) = \sum_{i=1}^{n.site} \sum_{j=1}^{n.obs} \log f(y_{i,j}; \theta_i),
where \theta_i is the vector of GEV parameters for the i-th site.
Most often there will be some dependence between stations. However, as can be seen from the log-likelihood definition above, the stations are assumed to be mutually independent. Consequently, to get reliable standard errors, these are computed from the sandwich estimator.
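For illustration only, the independence log-likelihood above can be written in a few lines of base R; the GEV density is coded by hand here for the case \xi \neq 0, and this is not the internal implementation of fitspatgev:
gev.logdens <- function(y, loc, scale, shape) {
  ## hand-coded GEV log-density, shape != 0 case
  z <- 1 + shape * (y - loc) / scale
  ifelse(z > 0, -log(scale) - (1 / shape + 1) * log(z) - z^(-1 / shape), -Inf)
}
indep.loglik <- function(data, locs, scales, shapes) {
  ## data: n.obs x n.site matrix; locs, scales, shapes: vectors of length n.site
  ll <- 0
  for (i in 1:ncol(data))
    ll <- ll + sum(gev.logdens(data[, i], locs[i], scales[i], shapes[i]))
  ll
}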
There are two different kinds of covariates: "spatial" and "temporal".
A "spatial" covariate may have different values across stations but
does not depend on time. For example, the coordinates of the stations
are obviously "spatial". These "spatial" covariates should be used
with the marg.cov argument and the loc.form, scale.form and shape.form
formulas.
A "temporal" covariate may have different values across time but
does not depend on space. For example, the years when the annual
maxima were recorded are "temporal". These "temporal" covariates should
be used with the temp.cov argument and the temp.form.loc,
temp.form.scale and temp.form.shape formulas.
Note that, as a consequence, marg.cov must have K rows (K being the
number of sites) while temp.cov must have n rows (n being the number of
observations).
Value
An object of class spatgev. Namely, this is a list with the
following components:
fitted.values |
The parameter estimates. |
param |
All the parameters, i.e., parameter estimates and fixed parameters. |
std.err |
The standard errors. |
var.cov |
The asymptotic MLE variance covariance matrix. |
counts , message , convergence |
Some information about the optimization procedure. |
logLik , deviance |
The log-likelihood and deviance values. |
loc.form , scale.form , shape.form |
The formulas defining the spatial models for the GEV parameters. |
covariables |
The covariables used for the spatial models. |
ihessian |
The inverse of the Hessian matrix of the negative log-likelihood. |
jacobian |
The variance covariance matrix of the score. |
Author(s)
Mathieu Ribatet
Examples
## 1- Simulate a max-stable random field
n.site <- 35
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
data <- rmaxstab(50, locations, cov.mod = "whitmat", nugget = 0, range =
3, smooth = 0.5)
## 2- Transformation to non unit Frechet margins
param.loc <- -10 + 2 * locations[,2]
param.scale <- 5 + 2 * locations[,1]
param.shape <- rep(0.2, n.site)
for (i in 1:n.site)
data[,i] <- frech2gev(data[,i], param.loc[i], param.scale[i],
param.shape[i])
## 3- Fit a ''spatial GEV'' model to data with the following models for
## the GEV parameters
form.loc <- loc ~ lat
form.scale <- scale ~ lon
form.shape <- shape ~ 1
fitspatgev(data, locations, form.loc, form.scale, form.shape)
Computes the F-madogram
Description
Computes the F-madogram for max-stable processes.
Usage
fmadogram(data, coord, fitted, n.bins, which = c("mado", "ext"), xlab,
ylab, col = c(1, 2), angles = NULL, marge = "emp", add = FALSE, xlim =
c(0, max(dist)), ...)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
fitted |
An object of class maxstab - usually the output of the
|
n.bins |
The number of bins to be used. If missing, pairwise F-madogram estimates will be computed. |
which |
A character vector of maximum size 2. It specifies if the madogram and/or the extremal coefficient functions have to be plotted. |
xlab , ylab |
The x-axis and y-axis labels. May be missing. Note
that |
col |
The colors used for the points and optionally the fitted curve. |
angles |
A numeric vector. A partition of the interval
|
marge |
Character string. If 'emp' (the default), the probabilities of non-exceedance are estimated using the empirical CDF. If 'mle', maximum likelihood estimates are used. |
add |
Logical. If |
xlim |
A numeric vector of length 2 specifying the x coordinate range. |
... |
Additional options to be passed to the |
Details
Let Z(x) be a stationary process. The F-madogram is defined as follows:
\nu(h) = \frac{1}{2} \mbox{E}\left[|F(Z(x+h)) - F(Z(x))|\right].
The extremal coefficient \theta(h) satisfies:
\theta(h) = \frac{1 + 2 \nu(h)}{1 - 2 \nu(h)}.
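For concreteness, a minimal base-R sketch of these two formulas for a single pair of stations, using empirical CDFs as with marge = "emp" (fmadogram handles all pairs and optional binning):
coord <- matrix(runif(2 * 10, 0, 10), ncol = 2)
colnames(coord) <- c("lon", "lat")
data <- rmaxstab(50, coord, cov.mod = "whitmat", nugget = 0, range = 1, smooth = 2)
F1 <- rank(data[, 1]) / (nrow(data) + 1)   ## empirical CDF at station 1
F2 <- rank(data[, 2]) / (nrow(data) + 1)   ## empirical CDF at station 2
nu.F <- 0.5 * mean(abs(F1 - F2))           ## F-madogram estimate
theta <- (1 + 2 * nu.F) / (1 - 2 * nu.F)   ## implied extremal coefficient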
Value
A graphic and (invisibly) a matrix with the lag distances, the F-madogram and extremal coefficient estimates.
Author(s)
Mathieu Ribatet
References
Cooley, D., Naveau, P. and Poncet, P. (2006) Variograms for spatial max-stable random fields. Dependence in Probability and Statistics, 373–390.
See Also
Examples
n.site <- 15
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(40, locations, cov.mod = "whitmat", nugget = 0, range = 1,
smooth = 2)
##Compute the F-madogram
fmadogram(data, locations)
##Compare the F-madogram with a fitted max-stable process
fitted <- fitmaxstab(data, locations, "whitmat", nugget = 0)
fmadogram(fitted = fitted, which = "ext")
Estimates the penalty coefficient from the generalized cross-validation criterion
Description
Estimates the penalty coefficient from the generalized cross-validation criterion.
Usage
gcv(y, x, knots, degree, plot = TRUE, n.points = 150, ...)
Arguments
y |
The response vector. |
x |
A vector/matrix giving the values of the predictor
variable(s). If |
knots |
A vector giving the coordinates of the knots. |
degree |
The degree of the penalized smoothing spline. |
plot |
Logical. If |
n.points |
A numeric giving the number of CV computations needed to produce the plot. |
... |
Options to be passed to the |
Details
For every linear smoother, i.e., \hat{y} = S_\lambda y, the generalized cross-validation criterion consists of minimizing the following quantity:
GCV(\lambda) = \frac{n ||y - \hat{y}||^2}{(n - tr(S_\lambda))^2},
where \lambda is the penalty coefficient, n the number of observations and tr(S_\lambda) is the trace of the matrix S_\lambda.
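For illustration, the criterion can be written down directly for any explicit smoother matrix; a toy ridge-type smoother is used below (this only illustrates the formula, not the internals of gcv):
n <- 100
x <- sort(runif(n))
y <- sin(3 * pi * x) + rnorm(n, 0, 0.5)
X <- cbind(1, x)
gcv.crit <- function(lambda) {
  S <- X %*% solve(crossprod(X) + lambda * diag(2), t(X))  ## smoother matrix S_lambda
  y.hat <- S %*% y
  n * sum((y - y.hat)^2) / (n - sum(diag(S)))^2            ## GCV(lambda)
}
sapply(c(0.01, 0.1, 1), gcv.crit)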
Value
A list with components 'penalty', 'gcv' and 'nlm.code' which give the
location of the minimum, the value of the cross-validation criterion at
that point and the code returned by the nlm function - useful to
assess convergence issues.
Author(s)
Mathieu Ribatet
References
Ruppert, D., Wand, M. P. and Carroll, R. J. (2003) Semiparametric Regression. Cambridge Series in Statistical and Probabilistic Mathematics.
See Also
Examples
n <- 200
x <- runif(n)
fun <- function(x) sin(3 * pi * x)
y <- fun(x) + rnorm(n, 0, sqrt(0.4))
knots <- quantile(x, prob = 1:(n/4) / (n/4 + 1))
gcv(y, x, knots = knots, degree = 3)
Transforms GEV data to unit Frechet ones and vice versa
Description
Transforms GEV data to unit Frechet ones and vice versa
Usage
gev2frech(x, loc, scale, shape, emp = FALSE)
frech2gev(x, loc, scale, shape)
Arguments
x |
The data to be transformed to unit Frechet or ordinary GEV margins |
loc , scale , shape |
The location, scale and shape parameters of the GEV. |
emp |
Logical. If |
Details
If Y is a random variable with a GEV distribution with location \mu, scale \sigma and shape \xi, then
Z = \left[1 + \xi \frac{Y - \mu}{\sigma} \right]_+^{1/\xi}
is unit Frechet distributed, where x_+ = \max(0, x).
Conversely, if Z is a unit Frechet random variable, then
Y = \mu + \sigma \frac{Z_+^{\xi} - 1}{\xi}
is GEV distributed with location, scale and shape parameters equal to \mu, \sigma and \xi, respectively.
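The first transformation can be checked numerically in base R (assuming \xi \neq 0):
y <- c(1.5, 2, 3.2)
mu <- 1; sigma <- 2; xi <- 0.2
z.manual <- pmax(1 + xi * (y - mu) / sigma, 0)^(1 / xi)
all.equal(z.manual, gev2frech(y, mu, sigma, xi))  ## expected to be TRUE given the formula above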
Value
Returns a numeric vector with the same length as x.
Author(s)
Mathieu Ribatet
Examples
x <- c(2.2975896, 1.6448808, 1.3323833, -0.4464904, 2.2737603,
-0.2581876, 9.5184398, -0.5899699, 0.4974283, -0.8152157)
y <- gev2frech(x, 1, 2, .2)
y
frech2gev(y, 1, 2, .2)
Simple kriging interpolation
Description
This function interpolates a zero mean Gaussian random field using the simple kriging predictor.
Usage
kriging(data, data.coord, krig.coord, cov.mod = "whitmat", sill, range,
smooth, smooth2 = NULL, grid = FALSE, only.weights = FALSE)
Arguments
data |
A numeric vector or matrix. If |
data.coord |
A numeric vector or matrix specifying the
coordinates of the observed data. If |
krig.coord |
A numeric vector or matrix specifying the
coordinates where the kriging predictor has to be computed. If
|
cov.mod |
A character string specifying the covariance function family. Must be one of "whitmat", "powexp", "cauchy", "bessel" or "caugen" for the Whittle-Matern, the powered exponential, the Cauchy, the Bessel or the generalized Cauchy covariance families. |
sill , range , smooth , smooth2 |
Numerics specifying the sill, range, smooth and, if any, the second smooth parameters of the covariance function. |
grid |
Logical. Does |
only.weights |
Logical. Should only the kriging weights be
computed? If |
Value
A list with components
coord |
The coordinates where the kriging predictor has been computed; |
krig.est |
The kriging predictor estimates; |
grid |
Does |
weights |
A matrix giving the kriging weights: each column corresponds to one prediction location. |
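For a zero mean field, the simple kriging weights solve \Sigma \lambda = \sigma_0, where \Sigma is the covariance matrix between the observations and \sigma_0 the covariance vector between the observations and the prediction point. A hand-coded one-dimensional sketch with an exponential covariance (for illustration only, not the internals of kriging):
exp.cov <- function(h, sill = 1, range = 10) sill * exp(-h / range)
x.obs <- c(1, 4, 7)
obs <- c(0.3, -0.2, 0.5)
x0 <- 5
Sigma <- exp.cov(abs(outer(x.obs, x.obs, "-")))  ## covariance between observations
sigma0 <- exp.cov(abs(x.obs - x0))               ## covariance with the prediction point
weights <- solve(Sigma, sigma0)                  ## simple kriging weights
pred <- sum(weights * obs)                       ## simple kriging predictor at x0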
Author(s)
Mathieu Ribatet
References
Chiles, J.-P. and Delfiner, P. (1999) Geostatistics, Modeling Spatial Uncertainty Wiley Series in Probability and Statistics.
See Also
condrgp
, rgp
, covariance
.
Examples
## Kriging from a single realisation
n.site <- 50
n.pred <- 512
x.obs <- runif(n.site, -100, 100)
x.pred <- seq(-100, 100, length = n.pred)
data <- rgp(1, x.obs, "whitmat", sill = 1, range = 10, smooth = 0.75)
krig <- kriging(data, x.obs, x.pred, "whitmat", sill = 1, range = 10,
smooth = 0.75)
plot(krig$coord, krig$krig.est, type = "l", xlab = "x", ylab =
expression(hat(Y)(x)))
points(x.obs, data, col = 2, pch = 21, bg = 2)
## Kriging from several realisations
n.real <- 3
data <- rgp(n.real, x.obs, "whitmat", sill = 1, range = 10, smooth = 0.75)
krig <- kriging(data, x.obs, x.pred, "whitmat", sill = 1, range = 10,
smooth = 0.75)
matplot(krig$coord, t(krig$krig.est), type = "l", xlab = "x", ylab =
expression(hat(Y)(x)), lty = 1)
matpoints(x.obs, t(data), pch = 21, col = 1:n.real, bg = 1:n.real)
title("Three kriging predictors in one shot")
## Two dimensional kriging on a grid
x.obs <- matrix(runif(2 * n.site, -100, 100), ncol = 2)
x <- y <- seq(-100, 100, length = 100)
x.pred <- cbind(x, y)
data <- rgp(1, x.obs, "whitmat", sill = 1, range = 10, smooth = 0.75)
krig <- kriging(data, x.obs, x.pred, "whitmat", sill = 1, range = 10,
smooth = 0.75, grid = TRUE)
z.lim <- range(c(data, krig$krig.est))
breaks <- seq(z.lim[1], z.lim[2], length = 65)
col <- heat.colors(64)
idx <- as.numeric(cut(data, breaks))
image(x, y, krig$krig.est, col = col, breaks = breaks)
points(x.obs, bg = col[idx], pch = 21)
## Note how the background colors of the above points match the ones
## returned by the image function
Bayesian hierarchical models for spatial extremes
Description
This function generates a Markov chain from a Bayesian hierarchical model for block maxima assuming conditional independence.
Usage
latent(data, coord, cov.mod = "powexp", loc.form, scale.form,
shape.form, marg.cov = NULL, hyper, prop, start, n = 5000, thin = 1,
burn.in = 0, use.log.link = FALSE)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
cov.mod |
A character string corresponding to the covariance model for the Gaussian latent processes. Must be one of "gauss" for the Smith's model; "whitmat", "cauchy", "powexp" or "bessel" for the Whittle-Matern, the Cauchy, the Powered Exponential and the Bessel correlation families. |
loc.form , scale.form , shape.form |
R formulas defining the spatial linear model for the mean of the latent processes. |
marg.cov |
Matrix with named columns giving additional covariates
for the latent processes means. If |
hyper |
A named list specifying the hyper-parameters — see Details. |
prop |
A named list specifying the jump sizes when a Metropolis–Hastings move is needed — see Details. |
start |
A named list specifying the starting values — see Details. |
n |
The effective length of the simulated Markov chain, i.e., the length once the burn-in period has been discarded and after thinning. |
thin |
An integer specifying the thinning length. The default is 1, i.e., no thinning. |
burn.in |
An integer specifying the burn-in period. The default is 0, i.e., no burn-in. |
use.log.link |
Logical. Should a log-link function be used for the GEV scale parameters, i.e., assuming that the GEV scale parameters are drawn from a log-normal process rather than a Gaussian process? |
Details
This function generates a Markov chain from the following model. For each x \in R^d, suppose that Y(x) is GEV distributed with parameters \{\mu(x), \sigma(x), \xi(x)\} that vary smoothly for x \in R^d according to a stochastic process S(x). We assume that the processes for the three GEV parameters are mutually independent Gaussian processes. For instance, for the location parameter \mu(x) we take
\mu(x) = f_\mu(x; \beta_\mu) + S_\mu(x; \alpha_\mu, \lambda_\mu, \kappa_\mu),
where f_\mu is a deterministic function depending on regression parameters \beta_\mu, and S_\mu is a zero mean, stationary Gaussian process with a prescribed covariance function with sill \alpha_\mu, range \lambda_\mu and shape parameter \kappa_\mu. Similar formulations are used for the scale \sigma(x) and the shape \xi(x) parameters. Then, conditional on the values of the three Gaussian processes at the sites (x_1, \ldots, x_K), the maxima are assumed to follow GEV distributions
Y_i(x_j) \mid \{\mu(x_j), \sigma(x_j), \xi(x_j)\} \sim \mbox{GEV}\{\mu(x_j), \sigma(x_j), \xi(x_j)\},
independently for each location (x_1, \ldots, x_K).
A joint prior density must be defined for the sill, range and shape parameters of the covariance functions as well as for the regression parameters \beta_\mu, \beta_\sigma and \beta_\xi. Conjugate priors are used whenever possible, taking independent inverse Gamma and multivariate normal distributions for the sills and the regression parameters. No conjugate prior exists for \lambda and \kappa, for which a Gamma distribution is assumed.
Consequently, hyper is a named list with named components:
- sills
A list with three components named 'loc', 'scale' and 'shape'; each of these is a 2-length vector specifying the shape and the scale of the inverse Gamma prior distribution for the sill parameter of the covariance functions;
- ranges
A list with three components named 'loc', 'scale' and 'shape'; each of these is a 2-length vector specifying the shape and the scale of the Gamma prior distribution for the range parameter of the covariance functions;
- smooths
A list with three components named 'loc', 'scale' and 'shape'; each of these is a 2-length vector specifying the shape and the scale of the Gamma prior distribution for the shape parameter of the covariance functions;
- betaMeans
A list with three components named 'loc', 'scale' and 'shape'; each of these is a vector specifying the mean vector of the multivariate normal prior distribution for the regression parameters;
- betaIcov
A list with three components named 'loc', 'scale' and 'shape'; each of these is a matrix specifying the inverse of the covariance matrix of the multivariate normal prior distribution for the regression parameters.
As no conjugate prior exists for the GEV parameters and for the range and shape parameters of the covariance functions, Metropolis–Hastings steps are needed. The proposals \theta_{prop} are drawn from a proposal density q(\cdot \mid \theta_{cur}, s), where \theta_{cur} is the current state of the parameter and s is a parameter of the proposal density to be defined. These proposals are driven by prop, which is a list with three named components:
- gev
A vector of length 3 specifying the standard deviations of the proposal distributions. These are taken to be normal distributions for the location and shape GEV parameters and a log-normal distribution for the scale GEV parameter;
- ranges
A vector of length 3 specifying the jump sizes for the range parameters of the covariance functions: q(\cdot \mid \theta_{cur}, s) is the log-normal density with mean \theta_{cur} and standard deviation s, both on the log scale;
- smooths
A vector of length 3 specifying the jump sizes for the shape parameters of the covariance functions: q(\cdot \mid \theta_{cur}, s) is the log-normal density with mean \theta_{cur} and standard deviation s, both on the log scale.
If one wants to hold a parameter fixed, this can be done by setting a null jump size; the parameter will then be held fixed at its starting value.
Finally start
must be a named list with 4 named components
- sills
A vector of length 3 specifying the starting values for the sill of the covariance functions;
- ranges
A vector of length 3 specifying the starting values for the range of the covariance functions;
- smooths
A vector of length 3 specifying the starting values for the shape of the covariance functions;
- beta
A named list with 3 components 'loc', 'scale' and 'shape' each of these is a numeric vector specifying the starting values for the regression coefficients.
Value
A list
Warning
This function can be time consuming and makes intensive use of BLAS routines, so it is (much!) faster if you have an optimized BLAS.
The starting values will never be stored in the generated Markov chain
even when burn.in=0
.
Note
If you want to analyze the convergence and mixing properties of the Markov chain, it is recommended to use the coda package.
Author(s)
Mathieu Ribatet
References
Banerjee, S., Carlin, B. P., and Gelfand, A. E. (2004). Hierarchical Modeling and Analysis for Spatial Data. Chapman & Hall/CRC, New York.
Casson, E. and Coles, S. (1999) Spatial regression models for extremes. Extremes 1,449–468.
Cooley, D., Nychka, D. and Naveau, P. (2007) Bayesian spatial modelling of extreme precipitation return levels Journal of the American Statistical Association 102:479, 824–840.
Davison, A.C., Padoan, S.A. and Ribatet, M. Statistical Modelling of Spatial Extremes. Submitted.
Examples
## Not run:
## Generate realizations from the model
n.site <- 30
n.obs <- 50
coord <- cbind(lon = runif(n.site, -10, 10), lat = runif(n.site, -10 , 10))
gp.loc <- rgp(1, coord, "powexp", sill = 4, range = 20, smooth = 1)
gp.scale <- rgp(1, coord, "powexp", sill = 0.4, range = 5, smooth = 1)
gp.shape <- rgp(1, coord, "powexp", sill = 0.01, range = 10, smooth = 1)
locs <- 26 + 0.5 * coord[,"lon"] + gp.loc
scales <- 10 + 0.2 * coord[,"lat"] + gp.scale
shapes <- 0.15 + gp.shape
data <- matrix(NA, n.obs, n.site)
for (i in 1:n.site)
data[,i] <- rgev(n.obs, locs[i], scales[i], shapes[i])
loc.form <- y ~ lon
scale.form <- y ~ lat
shape.form <- y ~ 1
hyper <- list()
hyper$sills <- list(loc = c(1,8), scale = c(1,1), shape = c(1,0.02))
hyper$ranges <- list(loc = c(2,20), scale = c(1,5), shape = c(1, 10))
hyper$smooths <- list(loc = c(1,1/3), scale = c(1,1/3), shape = c(1, 1/3))
hyper$betaMeans <- list(loc = rep(0, 2), scale = c(9, 0), shape = 0)
hyper$betaIcov <- list(loc = solve(diag(c(400, 100))),
scale = solve(diag(c(400, 100))),
shape = solve(diag(c(10), 1, 1)))
## We will use an exponential covariance function so the jump sizes for
## the shape parameter of the covariance function are null.
prop <- list(gev = c(1.2, 0.08, 0.08), ranges = c(0.7, 0.8, 0.7), smooths = c(0,0,0))
start <- list(sills = c(4, .36, 0.009), ranges = c(24, 17, 16), smooths
= c(1, 1, 1), beta = list(loc = c(26, 0.5), scale = c(10, 0.2),
shape = c(0.15)))
mc <- latent(data, coord, loc.form = loc.form, scale.form = scale.form,
shape.form = shape.form, hyper = hyper, prop = prop, start = start,
n = 10000, burn.in = 5000, thin = 15)
mc
## End(Not run)
Computes the lambda-madogram
Description
Computes the lambda-madogram for max-stable processes.
Usage
lmadogram(data, coord, n.bins, xlab, ylab, zlab, n.lambda = 11, marge =
"emp", col = terrain.colors(50, alpha = 0.5), theta = 90, phi = 20,
border = NA, ...)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
n.bins |
The number of bins to be used. If missing, pairwise lambda-madogram estimates will be computed. |
xlab , ylab , zlab |
The x-axis, y-axis and z-axis labels. May be missing. |
n.lambda |
Integer giving the number of lambda values. |
marge |
Character string. If 'emp' (the default), the probabilities of non-exceedance are estimated using the empirical CDF. If 'mle', maximum likelihood estimates are used. |
col |
The colors used to emphasize the gradient of the lambda-madogram. |
theta , phi , border |
Options to be passed to the
|
... |
Additional options to be passed to the |
Details
Let Z(x) be a stationary process. The \lambda-madogram is defined as follows:
\nu_{\lambda}(h) = \frac{1}{2} \mbox{E}\left[|F^\lambda(Z(x+h)) - F^{1-\lambda}(Z(x))|\right].
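As an illustration, the empirical estimate of \nu_\lambda(h) for one pair of stations and a single \lambda can be written in a few lines of base R (lmadogram does this over a grid of n.lambda values and all pairs):
lambda <- 0.5
coord <- matrix(runif(2 * 10, 0, 10), ncol = 2)
colnames(coord) <- c("lon", "lat")
data <- rmaxstab(50, coord, cov.mod = "whitmat", nugget = 0, range = 1, smooth = 2)
F1 <- rank(data[, 1]) / (nrow(data) + 1)                   ## empirical CDF at station 1
F2 <- rank(data[, 2]) / (nrow(data) + 1)                   ## empirical CDF at station 2
nu.lambda <- 0.5 * mean(abs(F1^lambda - F2^(1 - lambda)))  ## lambda-madogram estimate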
Value
A graphic and (invisibly) a matrix with the lag distances and the \lambda-madogram estimates.
Author(s)
Mathieu Ribatet
References
Naveau, P., Guillou, A., Cooley, D. and Diebolt, J. (2009) Modelling Pairwise Dependence of Maxima in Space. To appear in Biometrika.
See Also
Examples
n.site <- 50
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(40, locations, cov.mod = "whitmat", nugget = 0, range = 1,
smooth = 2)
##Compute the lambda-madogram
lmadogram(data, locations, n.bins = 80)
Extracts Log-Likelihood
Description
Extracts the pairwise log-likelihood from objects of class “maxstab” and “copula”.
Usage
## S3 method for class 'maxstab'
logLik(object, ...)
## S3 method for class 'copula'
logLik(object, ...)
Arguments
object |
An object of class “maxstab” or “copula”. Most often
this will be the output of the |
... |
Other arguments to be passed to the |
Value
A standard logLik object (see logLik), except that it might actually correspond to the pairwise log-likelihood, e.g., for the class “maxstab”!
Author(s)
Mathieu Ribatet
See Also
Examples
##Define the coordinates of each location
n.site <- 30
locations <- matrix(5 + runif(2*n.site, 0, 10), ncol = 2)
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(30, locations, cov.mod = "whitmat", nugget = 0, range = 3,
smooth = 0.5)
fit <- fitmaxstab(data, locations, "whitmat")
logLik(fit)
Estimates the spatial dependence parameter of a max-stable process by minimizing a least squares criterion.
Description
Estimates the spatial dependence parameter of a max-stable process by minimizing a least squares criterion.
Usage
lsmaxstab(data, coord, cov.mod = "gauss", marge = "emp", control =
list(), iso = FALSE, ..., weighted = TRUE)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
cov.mod |
Character string specifying the max-stable process considered. Must be one of "gauss" (Smith's model), "whitmat", "cauchy", "powexp", "bessel", "caugen" for the Schlather model with the corresponding correlation function. |
marge |
Character string specifying how margins are transformed
to unit Frechet. Must be one of "emp", "frech" or "mle" - see
function |
control |
The control arguments to be passed to the
|
iso |
Logical. If |
... |
Optional arguments. |
weighted |
Logical. Should weighted least squares be used? See Details. |
Details
The fitting procedure is based on weighted least squares. More precisely, the fitting criterion is to minimize:
\sum_{i,j} \left(\frac{\tilde{\theta}_{i,j} - \hat{\theta}_{i,j}}{s_{i,j}}\right)^2,
where \tilde{\theta}_{i,j} is a nonparametric estimate of the extremal coefficient related to locations i and j, \hat{\theta}_{i,j} is the fitted extremal coefficient derived from the max-stable model considered and s_{i,j} are the standard errors related to the estimates \tilde{\theta}_{i,j}.
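The criterion itself is straightforward to write in base R once the pairwise quantities are available (a sketch with hypothetical numbers; lsmaxstab computes the nonparametric estimates and their standard errors internally):
wls.crit <- function(theta.np, theta.fit, std.err)
  sum(((theta.np - theta.fit) / std.err)^2)
## hypothetical values for three pairs of stations
wls.crit(theta.np = c(1.40, 1.70, 1.90), theta.fit = c(1.50, 1.65, 1.85),
         std.err = c(0.10, 0.12, 0.15))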
Value
An object of class maxstab.
Author(s)
Mathieu Ribatet
References
Smith, R. L. (1990) Max-stable processes and spatial extremes. Unpublished manuscript.
See Also
fitcovariance
, fitmaxstab
,
fitextcoeff
Examples
n.site <- 50
n.obs <- 100
locations <- matrix(runif(2*n.site, 0, 40), ncol = 2)
colnames(locations) <- c("lon", "lat")
## Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(50, locations, cov.mod = "gauss", cov11 = 200, cov12 =
0, cov22 = 200)
lsmaxstab(data, locations, "gauss")
##Force an isotropic model and do not use weights
lsmaxstab(data, locations, "gauss", iso = TRUE, weighted = FALSE)
Computes madograms
Description
Computes the madogram for max-stable processes.
Usage
madogram(data, coord, fitted, n.bins, gev.param = c(0, 1, 0), which =
c("mado", "ext"), xlab, ylab, col = c(1, 2), angles = NULL, marge =
"emp", add = FALSE, xlim = c(0, max(dist)), ...)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
fitted |
An object of class maxstab - usually the output of the
|
n.bins |
The number of bins to be used. If missing, pairwise madogram estimates will be computed. |
gev.param |
Numeric vector of length 3 specifying the location, scale and shape parameters for the GEV. |
which |
A character vector of maximum size 2. It specifies if the madogram and/or the extremal coefficient functions have to be plotted. |
xlab , ylab |
The x-axis and y-axis labels. May be missing. Note
that |
col |
The colors used for the points and optionally for the fitted curve. |
angles |
A numeric vector. A partition of the interval
|
marge |
Character string. If 'emp' (the default), the observations are first transformed to the unit Frechet scale by using the empirical CDF. If 'mle', maximum likelihood estimates are used. |
add |
Logical. If |
xlim |
A numeric vector of length 2 specifying the x coordinate range. |
... |
Additional options to be passed to the |
Details
Let Z(x) be a stationary process. The madogram is defined as follows:
\nu(h) = \frac{1}{2} \mbox{E}\left[|Z(x+h) - Z(x)|\right].
Suppose now that Z(x) is a stationary max-stable random field with GEV marginals and that the GEV shape parameter \xi is such that \xi < 1. Then the extremal coefficient \theta(h) satisfies:
\theta(h) =
\left\{
\begin{array}{ll}
u_\beta \left(\mu + \frac{\nu(h)}{\Gamma(1 - \xi)} \right), & \xi \neq 0\\
\exp\left(\frac{\nu(h)}{\sigma}\right), & \xi = 0
\end{array}
\right.
where \Gamma is the gamma function and u_\beta is defined as follows:
u_\beta(u) = \left(1 + \xi \frac{u - \mu}{\sigma} \right)_+^{1/\xi},
with \beta = (\mu, \sigma, \xi), i.e., the vector of the GEV parameters.
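As an illustration of the \xi = 0 branch, the sketch below maps simulated unit Frechet margins to the standard Gumbel scale (the default gev.param = c(0, 1, 0)), estimates the madogram empirically for one pair of stations and deduces the extremal coefficient:
coord <- matrix(runif(2 * 10, 0, 10), ncol = 2)
colnames(coord) <- c("lon", "lat")
data <- rmaxstab(50, coord, cov.mod = "whitmat", nugget = 0, range = 1, smooth = 2)
gum1 <- log(data[, 1])              ## unit Frechet -> standard Gumbel margins
gum2 <- log(data[, 2])
nu <- 0.5 * mean(abs(gum1 - gum2))  ## madogram estimate
theta <- exp(nu)                    ## theta(h) = exp(nu(h)/sigma) with mu = 0, sigma = 1, xi = 0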
Value
A graphic and (invisibly) a matrix with the lag distances, the madogram and extremal coefficient estimates.
Author(s)
Mathieu Ribatet
References
Cooley, D., Naveau, P. and Poncet, P. (2006) Variograms for spatial max-stable random fields. Dependence in Probability and Statistics, 373–390.
See Also
Examples
n.site <- 15
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(40, locations, cov.mod = "whitmat", nugget = 0, range = 1,
smooth = 2)
##Compute the madogram
madogram(data, locations)
##Compare the madogram with a fitted max-stable model
fitted <- fitmaxstab(data, locations, "whitmat", nugget = 0)
madogram(fitted = fitted, which = "ext")
Produces a 2D map from a fitted max-stable process
Description
Produces a 2D map from a fitted max-stable process.
Usage
map(fitted, x, y, covariates = NULL, param = "quant", ret.per = 100, col
= terrain.colors(64), plot.contour = TRUE, ...)
Arguments
fitted |
An object of class |
x , y |
Numeric vector that gives the coordinates of the grid. |
covariates |
An array specifying the covariates at each grid
point defined by |
param |
A character string. Must be one of "loc", "scale", "shape" or "quant" for a map of the location, scale, shape parameters or for a map of a specified quantile. |
ret.per |
A numeric giving the return period for which the
quantile map is plotted. It is only required if |
col |
A list of colors such as that generated by 'rainbow', 'heat.colors', 'topo.colors', 'terrain.colors' or similar functions. |
plot.contour |
Logical. If |
... |
Several arguments to be passed to the |
Value
A plot. Additionally, a list with the details for plotting the map is returned invisibly.
Author(s)
Mathieu Ribatet
See Also
condmap
, filled.contour
,
heatmap
, heat.colors
,
topo.colors
, terrain.colors
,
rainbow
Examples
##We run an artificial example using the volcano data set as a study
##region
dim <- dim(volcano)
n.x <- dim[1]
n.y <- dim[2]
x <- 10 * 1:n.x
y <- 10 * 1:n.y
n.site <- 15
idx.x <- sample(n.x, n.site)
idx.y <- sample(n.y, n.site)
locations <- cbind(lon = x[idx.x], lat = y[idx.y])
alt <- diag(volcano[idx.x, idx.y])
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(40, locations, cov.mod = "whitmat", nugget = 0, range = 750,
smooth = 1)
##Now define the spatial model for the GEV parameters
param.loc <- -10 - 0.04 * locations[,1] + alt / 5
param.scale <- 5 - locations[,2] / 30 + alt / 4
param.shape <- rep(.2, n.site)
##Transform the unit Frechet margins to GEV
for (i in 1:n.site)
data[,i] <- frech2gev(data[,i], param.loc[i], param.scale[i],
param.shape[i])
##Define a model for the GEV margins to be fitted
##shape ~ 1 stands for the GEV shape parameter is constant
##over the region
loc.form <- loc ~ lon + alt
scale.form <- scale ~ lat + alt
shape.form <- shape ~ 1
## 1- Fit a max-stable process
schlather <- fitmaxstab(data, locations, "whitmat", loc.form, scale.form,
shape.form, marg.cov = cbind(alt = alt), nugget
= 0)
## 2- Produce a map of the pointwise 50-year return level
##Here we have only one covariate i.e. alt
n.cov <- 1
covariates <- array(volcano, dim = c(n.x, n.y, n.cov), dimnames =
list(NULL, NULL, "alt"))
par(mfrow = c(1,2))
image(x, y, volcano, col = terrain.colors(64), main = "Elevation map")
map(schlather, x, y, covariates, ret.per = 50, plot.contour = FALSE,
main = "50-year return level")
Two dimensional map from a Bayesian hierarchical model
Description
This function plots 2D maps from a Markov chain.
Usage
map.latent(fitted, x, y, covariates = NULL, param = "quant", ret.per =
100, col = terrain.colors(64), plot.contour = TRUE, fun = mean, level =
0.95, show.data = TRUE, control = list(nlines = 500), ...)
Arguments
fitted |
An object of class "latent". Typically this will be the
output of |
x , y |
Numeric vector specifying the coordinates of the grid points. |
covariates |
An array specifying the covariates at each grid
point defined by |
param |
A character string. Must be one of "loc", "scale", "shape" or "quant" for a map of the location, scale, shape parameters or for a map of a specified quantile. |
ret.per |
A numeric giving the return period for which the
quantile map is plotted. It is only required if |
col |
A list of colors such as that generated by 'rainbow', 'heat.colors', 'topo.colors', 'terrain.colors' or similar functions. |
plot.contour |
Logical. If |
fun |
A character string specifying the function to be used to get posterior point estimates. The default is to take posterior means. |
level |
A numeric specifying the significance level for the pointwise credible intervals. |
show.data |
Logical. Should the locations where the process was observed be plotted? |
control |
A list with named components specifying options to be
passed to |
... |
Several arguments to be passed to the |
Value
A plot and an invisible list containing all the data required to produce the plot.
Author(s)
Mathieu Ribatet
See Also
Examples
## Not run:
## Generate realizations from the model
n.site <- 30
n.obs <- 50
coord <- cbind(lon = runif(n.site, -10, 10), lat = runif(n.site, -10 , 10))
gp.loc <- rgp(1, coord, "powexp", sill = 4, range = 20, smooth = 1)
gp.scale <- rgp(1, coord, "powexp", sill = 0.4, range = 5, smooth = 1)
gp.shape <- rgp(1, coord, "powexp", sill = 0.01, range = 10, smooth = 1)
locs <- 26 + 0.5 * coord[,"lon"] + gp.loc
scales <- 10 + 0.2 * coord[,"lat"] + gp.scale
shapes <- 0.15 + gp.shape
data <- matrix(NA, n.obs, n.site)
for (i in 1:n.site)
data[,i] <- rgev(n.obs, locs[i], scales[i], shapes[i])
loc.form <- y ~ lon
scale.form <- y ~ lat
shape.form <- y ~ 1
hyper <- list()
hyper$sills <- list(loc = c(1,8), scale = c(1,1), shape = c(1,0.02))
hyper$ranges <- list(loc = c(2,20), scale = c(1,5), shape = c(1, 10))
hyper$smooths <- list(loc = c(1,1/3), scale = c(1,1/3), shape = c(1, 1/3))
hyper$betaMeans <- list(loc = rep(0, 2), scale = c(9, 0), shape = 0)
hyper$betaIcov <- list(loc = solve(diag(c(400, 100))),
scale = solve(diag(c(400, 100))),
shape = solve(diag(c(10), 1, 1)))
## We will use an exponential covariance function so the jump sizes for
## the shape parameter of the covariance function are null.
prop <- list(gev = c(2.5, 1.5, 0.2), ranges = c(0.7, 0.75, 0.9), smooths = c(0,0,0))
start <- list(sills = c(4, .36, 0.009), ranges = c(24, 17, 16), smooths
= c(1, 1, 1), beta = list(loc = c(26, 0.5), scale = c(10, 0.2),
shape = c(0.15)))
## Generate a Markov chain
mc <- latent(data, coord, loc.form = loc.form, scale.form = scale.form,
shape.form = shape.form, hyper = hyper, prop = prop, start = start,
n = 100)
x.grid <- y.grid <- seq(-10, 10, length = 50)
map.latent(mc, x.grid, y.grid, param = "shape")
## End(Not run)
Fits univariate extreme value distributions to data
Description
These functions fit the generalised extreme value and generalised Pareto distribution to data using maximum likelihood.
Usage
gevmle(x, ..., method = "Nelder")
gpdmle(x, threshold, ..., method = "Nelder")
Arguments
x |
Numeric vector of observations |
... |
Optional arguments to be passed to the
|
threshold |
Numeric. The threshold value. |
method |
The numerical optimisation method to be used. |
Details
These two functions are “extremely light” functions to fit the
GEV/GPD. These functions are mainly useful to compute starting values
for the Schlather and Smith model - see fitmaxstab
.
If more refined (univariate) analyses have to be performed, users should use more specialised packages - e.g. POT, evd, ismev, ...
Value
A vector for the estimated parameters of the GEV/GPD.
Author(s)
Mathieu Ribatet
Examples
## 1 - GEV fit
x <- rep(NA, 100)
for (i in 1:100)
x[i] <- max(rnorm(365))
gevmle(x)
## 2- GPD fit
x <- rnorm(10000)
##we need to fix a threshold
u <- quantile(x, 0.99)
gpdmle(x, u)
Define a model for the spatial behaviour of the GEV parameters
Description
This function defines the model for the spatial behaviour of the GEV parameters.
Usage
modeldef(data, formula)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
formula |
An R formula. See Details for further information. |
Value
need to be documented
Author(s)
Mathieu Ribatet
See Also
Examples
## 1- A design matrix from a classical linear model
n.site <- 5
coord <- matrix(rnorm(2*n.site, sd = sqrt(.2)), ncol = 2)
colnames(coord) <- c("lon", "lat")
loc.form <- loc ~ lat + I(lon^2)
modeldef(coord, loc.form)
## 2- A design and penalization matrix from a penalized smoothing spline
x <- sort(runif(10, -2, 10))
n.knots <- 3
knots <- quantile(x, prob = 1:n.knots / (n.knots + 2))
modeldef(x, y ~ rb(x, knots = knots, degree = 3, penalty = 1))
Model checking of a fitted copula based model.
Description
This function produces several plots to assess the goodness of fit of a fitted copula based model for spatial extremes.
Usage
## S3 method for class 'copula'
plot(x, ..., sites)
Arguments
x |
An object of class |
... |
Here for compatibility reasons but not yet implemented. |
sites |
A vector of integers of length 4 specifying the locations to be used for the model checking. If missing, locations will be chosen randomly. |
Details
The diagonal plots are return level plots. The lower ones are qq-plots
(on the Gumbel scale) between observed pairwise maxima for each block,
e.g. year, and the ones obtained by simulations from the fitted
model. The upper plot compares the fitted extremal coefficient
functions to semi-empirical estimates from the F-madogram - see
fmadogram
. The two remaining plots are the station locations and a qq-plot of
blockwise maxima where the block size is 4.
Value
Several diagnostic plots.
Author(s)
Mathieu Ribatet
Examples
## Not run:
n.site <- 20
n.obs <- 50
coord <- matrix(runif(2 * n.site, 0, 10), ncol = 2)
colnames(coord) <- c("lon", "lat")
data <- rmaxstab(n.obs, coord, "powexp", nugget = 0, range = 3, smooth =
1)
fitted <- fitcopula(log(data), coord, "student", "powexp", y ~ 1, y ~ 1, y ~ 1,
nugget = 0)
plot(fitted)
## End(Not run)
Model checking of a fitted max-stable model
Description
This function produces several plots to assess the goodness of fit of a fitted max-stable model.
Usage
## S3 method for class 'maxstab'
plot(x, ..., sites)
Arguments
x |
An object of class |
... |
Here for compatibility reasons but not yet implemented. |
sites |
A vector of integers of length 4 specifying the locations to be used for the model checking. If missing, locations will be chosen randomly. |
Details
The diagonal plots are return level plots. The lower ones are qq-plots
(on the Gumbel scale) between observed pairwise maxima for each block,
e.g. year, and the ones obtained by simulations from the fitted
model. The upper plot compares the fitted extremal coefficient
functions to semi-empirical estimates from the F-madogram - see
fmadogram
. The two remaining plots are the station locations and a qq-plot of
blockwise maxima where the block size is 4.
Value
Several diagnostic plots.
Author(s)
Mathieu Ribatet
Examples
n.site <- 20
n.obs <- 50
coord <- matrix(runif(2 * n.site, 0, 10), ncol = 2)
colnames(coord) <- c("lon", "lat")
data <- rmaxstab(n.obs, coord, "powexp", nugget = 0, range = 3, smooth =
1)
fitted <- fitmaxstab(log(data), coord, "powexp", y ~ 1, y ~ 1, y ~ 1,
nugget = 0)
plot(fitted)
Prediction of the marginal parameters for various models
Description
This function predicts the marginal GEV parameters from a fitted max-stable process, copula, penalized spline or spatial GEV model.
Usage
## S3 method for class 'maxstab'
predict(object, newdata, ret.per = NULL, std.err =
TRUE, ...)
## S3 method for class 'copula'
predict(object, newdata, ret.per = NULL, std.err =
TRUE, ...)
## S3 method for class 'pspline2'
predict(object, newdata, ...)
## S3 method for class 'spatgev'
predict(object, newdata, ret.per = NULL, ...)
Arguments
object |
An object of class 'maxstab', 'copula', 'pspline' or
'spatgev'. Most often, it will be the output of one of the
following functions: |
newdata |
An optional data frame in which to look for variables with which to predict. If omitted, the fitted values are used. |
ret.per |
Numeric vector giving the return periods for which
return levels are computed. If |
std.err |
Logical. If |
... |
further arguments passed to or from other methods. |
Value
A matrix giving, for each location, the predicted marginal GEV parameters and, if ret.per is supplied, the corresponding return levels.
Author(s)
Mathieu Ribatet
Examples
## 1- Simulate a max-stable random field
n.site <- 35
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
data <- rmaxstab(50, locations, cov.mod = "whitmat", nugget = 0, range = 30,
smooth = 0.5)
## 2- Transformation to non unit Frechet margins
param.loc <- -10 + 2 * locations[,2]
param.scale <- 5 + 2 * locations[,1]
param.shape <- rep(0.2, n.site)
for (i in 1:n.site)
data[,i] <- frech2gev(data[,i], param.loc[i], param.scale[i],
param.shape[i])
## 3- Fit a max-stable process with the following model for
## the GEV parameters
form.loc <- loc ~ lat
form.scale <- scale ~ lon
form.shape <- shape ~ 1
schlather <- fitmaxstab(data, locations, "whitmat", loc.form = form.loc,
scale.form = form.scale, shape.form =
form.shape)
## 4- GEV parameters estimates at each locations or at ungauged locations
predict(schlather)
ungauged <- data.frame(lon = runif(10, 0, 10), lat = runif(10, 0, 10))
predict(schlather, ungauged)
Printing objects of classes defined in the SpatialExtremes package
Description
Methods for printing objects of classes introduced by the SpatialExtremes package.
Usage
## S3 method for class 'pspline2'
print(x, ...)
## S3 method for class 'maxstab'
print(x, digits = max(3, getOption("digits") - 3), ...)
## S3 method for class 'copula'
print(x, digits = max(3, getOption("digits") - 3), ...)
## S3 method for class 'spatgev'
print(x, digits = max(3, getOption("digits") - 3), ...)
## S3 method for class 'latent'
print(x, digits = max(3, getOption("digits") - 3), ...,
level = 0.95)
Arguments
x |
An object of class 'pspline', 'maxstab', 'copula', 'spatgev'
or 'latent'. Most often, |
digits |
The number of digits to be printed. |
... |
Other options to be passed to the |
level |
A numeric giving the significance level for the credible intervals—class 'latent' only. |
Value
Prints several pieces of information on screen.
Author(s)
Mathieu Ribatet
Examples
##Define the coordinates of each location
n.site <- 30
coord <- matrix(5 + rnorm(2*n.site, sd = sqrt(2)), ncol = 2)
colnames(coord) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(30, coord, cov.mod = "whitmat", nugget = 0, range = 3,
smooth = 0.5)
## Printing max-stable objects
fit <- fitmaxstab(data, coord, "whitmat")
fit
## Printing spatial GEV objects
loc.form <- scale.form <- shape.form <- y ~ 1
fit <- fitspatgev(data, coord, loc.form, scale.form, shape.form)
fit
Method for profiling fitted max-stable objects
Description
Computes profile traces for fitted max-stable models.
Usage
## S3 method for class 'maxstab'
profile(fitted, param, range, n = 10, plot = TRUE,
conf = 0.90, method = "RJ", square = "chol", ...)
Arguments
fitted |
An object of class “maxstab”. Most often, it will be
the output of the function |
param |
A character string giving the model parameter that is to be profiled. |
range |
The range for the profiled model parameter that must be explored. |
n |
Integer. The number of values of the profiled model parameter to be considered. |
plot |
Logical. If |
conf |
Numeric giving the confidence interval level. |
method |
Character string. Must be one of "CB", "RJ" or "none" for the Chandler and Bate approach, the Rotnitzky and Jewell approach, or no adjustment, respectively. The "none" method simply plots the profile of the log-composite likelihood. See details. |
square |
The choice for the matrix square root. This is only useful for the 'CB' method. Must be one of 'chol' (Cholesky) or 'svd' (Singular Value Decomposition). |
... |
Extra options that must be passed to the
|
Details
The Rotnitzky and Jewell approach consists of adjusting the distribution of the likelihood ratio statistic, which under misspecification is no longer \chi^2 distributed.
The Chandler and Bate approach adjusts the composite likelihood itself in such a way that the usual asymptotic \chi^2 null distribution is preserved. Note that in the current code, we use the singular value decomposition for the computation of matrix square roots to preserve asymmetry in the profile composite likelihood.
Value
A matrix. The first column corresponds to the values for which the profiled model parameter is fixed. The second column gives the value of the pairwise log-likelihood. The remaining columns contain the constrained maximum likelihood estimates for the remaining model parameters.
Warnings
This function can be really time consuming!
Author(s)
Mathieu Ribatet
References
Chandler, R. E. and Bate, S. (2007) Inference for clustered data using the independence loglikelihood Biometrika, 94, 167–183.
Rotnitzky, A. and Jewell, N. (1990) Hypothesis testing of regression parameters in semiparametric generalized linear models for cluster correlated data. Biometrika 77, 485–97.
Examples
## Not run:
##Define the coordinates of each location
n.site <- 30
locations <- matrix(rnorm(2*n.site, sd = sqrt(.2)), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(50, locations, cov.mod = "gauss", cov11 = 100, cov12 =
25, cov22 = 220)
##Fit a max-stable process
## 1- using the Smith's model
fitted <- fitmaxstab(data, locations, "gauss", fit.marge = FALSE)
##Plot the profile pairwise log-likelihood for the ''cov11'' parameter
profile(fitted, "cov11", range = c(20, 180))
## End(Not run)
Method for profiling (in 2d) fitted max-stable objects
Description
Computes profile surfaces for fitted max-stable models.
Usage
## S3 method for class 'maxstab'
profile2d(fitted, params, ranges, n = 10, plot = TRUE,
...)
Arguments
fitted |
An object of class “maxstab”. Most often, it will be
the output of the function |
params |
A character vector giving the two model parameters that are to be profiled. |
ranges |
A matrix corresponding to the ranges for the profiled model parameters that must be explored. Each row corresponds to one model parameter range. |
n |
Integer. The number of values of each profiled model parameter to be considered. |
plot |
Logical. If |
... |
Extra options that must be passed to the
|
Value
A list with two components: coord and llik. coord is a matrix representing the grid where the profiled model parameters are fixed; llik gives the corresponding pairwise log-likelihood values.
Warnings
This function can be really time consuming!
Author(s)
Mathieu Ribatet
Examples
## Not run:
##Define the coordinates of each location
n.site <- 30
locations <- matrix(rnorm(2*n.site, sd = sqrt(.2)), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(30, locations, cov.mod = "whitmat", nugget = 0, range = 30,
smooth = 0.5)
##Now define the spatial model for the GEV parameters
param.loc <- -10 + 2 * locations[,2]
param.scale <- 5 + 2 * locations[,1] + locations[,2]^2
param.shape <- rep(0.2, n.site)
##Transform the unit Frechet margins to GEV
for (i in 1:n.site)
data[,i] <- frech2gev(data[,i], param.loc[i], param.scale[i],
param.shape[i])
##Define a model for the GEV margins to be fitted
##shape ~ 1 stands for the GEV shape parameter is constant
##over the region
loc.form <- loc ~ lat
scale.form <- scale ~ lon + I(lat^2)
shape.form <- shape ~ 1
##Fit a max-stable process
## 1- using the Schlather representation
fitted <- fitmaxstab(data, locations, "whitmat", loc.form, scale.form,
shape.form)
##Plot the profile pairwise log-likelihood for the smooth parameter
ranges <- rbind(c(9,11), c(.3, .8))
profile2d(fitted, c("range", "smooth"), ranges = ranges)
## End(Not run)
QQ-plot for the extremal coefficient
Description
This function compares the extremal coefficients estimated from a fitted max-stable process to the ones estimated semi-parametrically.
Usage
qqextcoeff(fitted, estim = "ST", marge = "emp", xlab = "Semi-Empirical",
ylab = "Model", ...)
Arguments
fitted |
An object of class |
estim , marge |
The |
xlab , ylab |
The x and y-axis labels. |
... |
Optional arguments to be passed to the |
Value
A QQ-plot.
Author(s)
Mathieu Ribatet
References
Schlather, M. (2002) Models for Stationary Max-Stable Random Fields. Extremes 5:1, 33–44.
Schlather, M. and Tawn, J. A. (2003) A dependence measure for multivariate and spatial extreme values: Properties and inference. Biometrika 90(1):139–156.
Smith, R. L. (1990) Max-stable processes and spatial extremes. Unpublished manuscript.
See Also
Examples
##Define the coordinates of each location
n.site <- 30
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(50, locations, cov.mod = "gauss", cov11 = 10, cov12 =
5, cov22 = 22)
fitted <- fitmaxstab(data, locations, "gauss")
qqextcoeff(fitted)
QQ-plot for the GEV parameters
Description
This function compares the GEV parameters estimated separately for each location to the ones estimated from a fitted spatial model.
Usage
qqgev(fitted, xlab, ylab, ...)
Arguments
fitted |
An object of class |
xlab , ylab |
The x and y-axis labels. May be missing. |
... |
Optional arguments to be passed to the |
Value
A QQ-plot.
Author(s)
Mathieu Ribatet
References
Schlather, M. (2002) Models for Stationary Max-Stable Random Fields. Extremes 5:1, 33–44.
Schlather, M. and Tawn, J. A. (2003) A dependence measure for multivariate and spatial extreme values: Properties and inference. Biometrika 90(1):139–156.
Smith, R. L. (1990) Max-stable processes and spatial extremes. Unpublished manuscript.
See Also
Examples
##Define the coordinates of each location
n.site <- 30
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
##Simulate a max-stable process - with unit Frechet margins
data <- rmaxstab(50, locations, cov.mod = "gauss", cov11 = 100, cov12 =
25, cov22 = 220)
##Now define the spatial model for the GEV parameters
param.loc <- -10 + 2 * locations[,2]
param.scale <- 5 + 2 * locations[,1] + locations[,2]^2
param.shape <- rep(0.2, n.site)
##Transform the unit Frechet margins to GEV
for (i in 1:n.site)
data[,i] <- frech2gev(data[,i], param.loc[i], param.scale[i], param.shape[i])
##Define a model for the GEV margins to be fitted
##shape ~ 1 stands for the GEV shape parameter is constant
##over the region
loc.form <- loc ~ lat
scale.form <- scale ~ lon + I(lat^2)
shape.form <- shape ~ 1
fitted <- fitspatgev(data, locations, loc.form = loc.form, scale.form =
scale.form, shape.form = shape.form)
qqgev(fitted)
Summer annual maxima precipitation in Switzerland
Description
Maximum daily rainfall amounts over the years 1962–2008 occurring during June–August at 79 sites in Switzerland.
Usage
data(rainfall)
Format
This data set contains two R objects: 'rain' and 'coord'. 'rain' is a 47 by 79 matrix giving the amount of rain in millimeters, each column corresponding to one location. 'coord' is a 79 by 3 matrix giving the longitude, latitude and the elevation for each station, all of them being in meters.
Author(s)
Mathieu Ribatet
Examples
data(rainfall)
op <- par(mfrow = c(1,2),pty = "s", mar = c(0,0,0,0))
swiss(city = TRUE)
idx.site <- c(1, 10, 20)
points(coord[-idx.site,])
points(coord[idx.site,], pch = 15, col = 2:4)
par(mar = c(2,4,0,0))
plot(1962:2008, rain[,1], type = "b", xlab = "Year", ylab =
"Precipitation (cm)", ylim = c(0, 120), col = 2)
lines(1962:2008, rain[,10], col = 3, type = "b")
lines(1962:2008, rain[,20], col = 4, type = "b")
par(op)
Creates a model using penalized smoothing splines
Description
Creates a model using penalized smoothing splines with radial basis functions.
Usage
rb(..., knots, degree, penalty)
Arguments
... |
The explanatory variables on which the spline is based. |
knots |
The coordinates of knots. See section details. |
degree |
Numeric. The degree of the spline. |
penalty |
Numeric. The penalty coefficient. |
Details
If one explanatory variable is given in "...", knots should be a numeric vector. Otherwise, knots should be a matrix with as many columns as there are covariates, as illustrated below.
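For instance, with the two covariates lon and lat the knots would be a two-column matrix; a hedged sketch (assuming, as stated above, that several covariates can be passed through "..."):
n.site <- 30
coord <- matrix(runif(2 * n.site, 0, 10), ncol = 2)
colnames(coord) <- c("lon", "lat")
knots2d <- cbind(quantile(coord[, "lon"], 1:5/6), quantile(coord[, "lat"], 1:5/6))
form <- y ~ rb(lon, lat, knots = knots2d, degree = 3, penalty = .5)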
Value
A list giving all the required information to fit a penalized smoothing spline:
dsgn.mat |
The design matrix. |
pen.mat |
The penalization matrix. |
degree |
The degree of the smoothing spline. |
penalty |
The penalty of the smoothing spline. |
knots |
The knots of the smoothing spline. |
data |
The explicative variables (e.g. covariates). |
call |
How was the |
Warning
This function is not supposed to be called directly. rb
is
supposed to be embedded in an R formula.
Author(s)
Mathieu Ribatet
See Also
Examples
n.site <- 30
locations <- matrix(runif(2*n.site, 0, 10), ncol = 2)
colnames(locations) <- c("lon", "lat")
knots <- quantile(locations[,2], 1:5/6)
form <- y ~ rb(lat, knots = knots, degree = 3, penalty = .5)
Fits a penalized spline with radial basis functions to data
Description
Fits a penalized spline with radial basis functions to data.
Usage
rbpspline(y, x, knots, degree, penalty = "gcv", ...)
Arguments
y |
The response vector. |
x |
A vector/matrix giving the values of the predictor
variable(s). If |
knots |
A vector giving the coordinates of the knots. |
degree |
The degree of the penalized smoothing spline. |
penalty |
A numeric giving the penalty coefficient for the
penalization term. Alternatively, it can be either 'cv' or 'gcv' to
select this coefficient by cross-validation or generalized
cross-validation. |
... |
Details
The penalized spline with radial basis functions is defined by

f(x) = \beta_0 + \beta_1 x + \ldots + \beta_{m-1} x^{m-1} + \sum_{k=0}^{K-1} \beta_{m+k} || x - \kappa_k ||^{2m - 1},

where the \beta_i are the coefficients to be estimated, the \kappa_k are the coordinates of the k-th knot and m = \frac{d+1}{2}, d being the degree of the spline.

The fitting criterion is to minimize

||y - X\beta||^2 + \lambda^{2m-1} \beta^T K \beta,

where \lambda is the penalty coefficient and K is the penalty matrix.
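As a rough illustration, the design matrix X implied by this basis can be sketched as follows for a cubic spline (d = 3, hence m = 2); this is an illustrative sketch, not the package's internal construction:

x <- seq(0, 1, length = 50)
knots <- quantile(x, 1:5/6)
m <- (3 + 1) / 2                                  ## m = (d + 1) / 2 with d = 3
poly.part <- outer(x, 0:(m - 1), "^")             ## columns 1, x, ..., x^(m - 1)
rbf.part <- abs(outer(x, knots, "-"))^(2 * m - 1) ## ||x - kappa_k||^(2m - 1)
X <- cbind(poly.part, rbf.part)                   ## design matrix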
Value
An object of class pspline.
Author(s)
Mathieu Ribatet
References
Ruppert, D., Wand, M. P. and Carroll, R. J. (2003) Semiparametric Regression. Cambridge Series in Statistical and Probabilistic Mathematics.
See Also
Examples
n <- 200
x <- runif(n)
fun <- function(x) sin(3 * pi * x)
y <- fun(x) + rnorm(n, 0, sqrt(0.4))
knots <- quantile(x, prob = 1:(n/4) / (n/4 + 1))
fitted <- rbpspline(y, x, knots = knots, degree = 3)
fitted
plot(x, y)
lines(fitted, col = 2)
Simulation from copula based models with unit Frechet margins
Description
This function generates realisations from the Gaussian and Student copula with unit Frechet margins.
Usage
rcopula(n, coord, copula = "gaussian", cov.mod = "whitmat", grid =
FALSE, control = list(), nugget = 0, range = 1, smooth = 1, DoF = 1)
Arguments
n |
Integer. The number of observations. |
coord |
A vector or matrix that gives the coordinates of each location. Each row corresponds to one location - if any |
copula |
A character string that specifies the copula to be used, i.e., "gaussian" or "student". |
cov.mod |
A character string that gives the correlation function family to be used. This must be one of "whitmat", "cauchy", "powexp" and "bessel" for the Whittle-Matern, the Cauchy, the powered exponential and the Bessel correlation functions. |
grid |
Logical. Do the coordinates represent grid points? |
control |
A named list that controls the simulation of the
underlying Gaussian process. |
nugget , range , smooth , DoF |
Numerics. The parameters of the copula. |
Value
A matrix containing observations from the required copula
model. Each column represents one station. If grid = TRUE, the
function returns an array of dimension nrow(coord) x nrow(coord) x n.
Author(s)
Mathieu Ribatet
References
Demarta, S. and McNeil, A. J. (2005) The t Copula and Related Copulas. International Statistical Review 73:1, 111–129.
Davison, A. C., Padoan, S. A. and Ribatet, M. (2010) Statistical Modelling of Spatial Extremes. Submitted to Statistical Science.
See Also
Examples
n.site <- 25
n.obs <- 50
coord <- matrix(runif(2 * n.site, 0, 10), ncol = 2)
data1 <- rcopula(n.obs, coord, "student", "whitmat", range = 3, DoF = 3)
x <- y <- seq(0, 10, length = 100)
data2 <- rcopula(1, cbind(x, y), "gaussian", "whitmat", range = 3, grid
= TRUE)
image(x, y, log(data2), col = rainbow(64))
Gaussian Random Fields Simulation
Description
This function generates Gaussian random fields.
Usage
rgp(n, coord, cov.mod = "powexp", mean = 0, nugget = 0, sill = 1, range
= 1, smooth = 1, grid = FALSE, control = list())
Arguments
n |
Integer. The number of replications. |
coord |
The coordinates of the locations where the Gaussian process is observed. |
cov.mod |
Character string. The covariance model used. Must be
one of "whitmat", "cauchy", "powexp" or "bessel" for the
Whittle-Matern, the Cauchy, the powered exponential and the Bessel
correlation functions. |
mean |
Numeric. The mean of the Gaussian random field. |
nugget |
Numeric. The nugget of the Gaussian random field. |
sill |
Numeric. The sill parameter in the covariance function. |
range |
Numeric. The range parameter in the covariance function. |
smooth |
Numeric. The smooth parameter in the covariance function. |
grid |
Logical. Do the coordinates represent grid points? |
control |
A named list with arguments 'nlines' (number of lines
of the TBM simulation) and 'method', the name of the simulation
method, which must be one of 'exact', 'tbm' or 'circ'. If 'method' is
NULL (default), the function tries to find the most appropriate
simulation technique; if 'nlines' is NULL, it is set to 1000. |
Value
A matrix or an array containing the random field replicates.
Author(s)
Mathieu Ribatet
See Also
rmaxstab
Examples
x <- y <- seq(0, 20, length = 100)
coord <- cbind(x, y)
gp <- rgp(1, coord, cov.mod = "whitmat", grid = TRUE)
filled.contour(x, y, gp, color.palette = terrain.colors)
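##A sketch only: the 'control' argument can force a given simulation
##method, here the turning bands method with an illustrative number of lines
gp.tbm <- rgp(1, coord, cov.mod = "whitmat", grid = TRUE,
              control = list(method = "tbm", nlines = 500))
filled.contour(x, y, gp.tbm, color.palette = terrain.colors)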
Simulation from max-linear models
Description
This function generates realisations from a max-linear model.
Usage
rmaxlin(n, coord, cov.mod = "gauss", dsgn.mat, grid = FALSE, p = 500,
...)
Arguments
n |
Integer. The number of observations. |
coord |
A vector or matrix that gives the coordinates of each
location. Each row corresponds to one location - if any. May be
missing if dsgn.mat is given. |
cov.mod |
A character string that specifies the max-linear
model. Currently only the discretized Smith model is implemented,
i.e., "gauss". |
dsgn.mat |
The design matrix of the max-linear model — see
Section Details. May be missing if the discretized Smith model is used. |
grid |
Logical. Do the coordinates represent grid points? |
p |
An integer corresponding to the number of unit Frechet random variable used in the max-linear model — see Section Details. |
... |
The parameters of the max-stable model — see Section Details. |
Details
A max-linear process \{Y(x)\} is defined by

Y(x) = \max_{j=1, \ldots, p} f_j(x) Z_j, \qquad x \in R^d,

where p is a positive integer, the f_j are non-negative functions and the Z_j are independent unit Frechet random variables. Most often, the max-linear process will be generated at locations x_1, \ldots, x_k \in R^d and an alternative but equivalent formulation is

\mathbf{Y} = A \odot \mathbf{Z},

where \mathbf{Y} = \{Y(x_1), \ldots, Y(x_k)\}, \mathbf{Z} = (Z_1, \ldots, Z_p), \odot is the max-linear operator (see the first equation) and A is the design matrix of the max-linear model. The design matrix A is a k \times p matrix with non-negative entries whose i-th row is \{f_1(x_i), \ldots, f_p(x_i)\}.

Currently only the discretized Smith model is implemented, for which

f_j(x) = c(p) \varphi(x - u_j ; \Sigma),

where \varphi(\cdot; \Sigma) is the zero mean (multivariate) normal density with covariance matrix \Sigma, the u_j form a sequence of deterministic points appropriately chosen and c(p) is a constant ensuring unit Frechet margins. Hence if this max-linear model is used, users must specify var for one dimensional processes, and cov11, cov12, cov22 for two dimensional processes.
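For concreteness, here is a minimal sketch of the max-linear operator acting on an arbitrary, purely illustrative design matrix:

p <- 4; k <- 3
A <- matrix(runif(k * p), k, p)           ## k x p design matrix with non-negative entries
Z <- 1 / rexp(p)                          ## p independent unit Frechet variables
Y <- apply(A * rep(Z, each = k), 1, max)  ## Y_i = max_j A_{ij} Z_j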
Value
A matrix containing observations from the max-linear model. Each
column represents one station. If grid = TRUE, the function
returns an array of dimension nrow(coord) x nrow(coord) x n.
Author(s)
Mathieu Ribatet
References
Wang, Y. and Stoev, S. A. (2011) Conditional Sampling for Max-Stable Random Fields. Advances in Applied Probability.
See Also
Examples
## A one dimensional simulation from a design matrix. This design matrix
## corresponds to a max-moving average process MMA(alpha)
n.site <- 250
x <- seq(-10, 10, length = n.site)
## Build the design matrix
alpha <- 0.8
dsgn.mat <- matrix(0, n.site, n.site)
dsgn.mat[1,1] <- 1
for (i in 2:n.site){
dsgn.mat[i,1:(i-1)] <- alpha * dsgn.mat[i-1,1:(i-1)]
dsgn.mat[i,i] <- 1 - alpha
}
data <- rmaxlin(3, dsgn.mat = dsgn.mat)
matplot(x, t(log(data)), pch = 1, type = "l", lty = 1, ylab =
expression(log(Y(x))))
## One realisation from the discretized Smith model (2d sim)
x <- y <- seq(-10, 10, length = 100)
data <- rmaxlin(1, cbind(x, y), cov11 = 3, cov12 = 1, cov22 = 4, grid =
TRUE, p = 2000)
image(x, y, log(data), col = heat.colors(64))
Simulation of Max-Stable Random Fields
Description
This function generates realisations from a max-stable random field.
Usage
rmaxstab(n, coord, cov.mod = "gauss", grid = FALSE, control =
list(), ...)
Arguments
n |
Integer. The number of observations. |
coord |
A vector or matrix that gives the coordinates of each location. Each row corresponds to one location - if any. |
cov.mod |
A character string that gives the max-stable model. This must be one of "gauss" for the Smith model, "brown" for the Brown–Resnick model, or "whitmat", "cauchy", "powexp" and "bessel" for the Schlather model with the given correlation family. If the latter are prefixed by a "t", e.g., "twhitmat", this corresponds to the extremal-t model. |
grid |
Logical. Do the coordinates represent grid points? |
control |
A named list with arguments 'nlines' (number of lines of the TBM simulation), 'method' the name of the simulation method - must be one of 'exact', 'tbm' or 'circ', and 'uBound' the uniform upper bound. See details. |
... |
The parameters of the max-stable model. See details. |
Details
Users must supply the parameters for the max-stable model. For the Schlather model, users should supply the "nugget", "range" and "smooth" parameter values. For the Smith model, if coord is univariate you must specify var; otherwise users should supply the covariance parameters, i.e., parameters with names such as cov11, cov12, ... For the extremal-t model, users should supply the "DoF", "nugget", "range" and "smooth" parameters. Finally, for the Brown–Resnick model, users should supply the "range" and the "smooth" parameters.
Here are the details for each allowed component of 'control'. If 'method' is NULL (default), the function tries to find the most appropriate simulation technique. Current simulation techniques are a direct approach, i.e., Cholesky decomposition of the covariance matrix, the turning bands and the circular embedding methods. If 'nlines' is NULL, it is set to 1000. If 'uBound' is NULL, it is set to reasonable values - for example 3.5 for the Schlather model.
Value
A matrix containing observations from the required max-stable
model. Each column represents one station. If grid = TRUE, the
function returns an array of dimension nrow(coord) x nrow(coord) x n.
Author(s)
Mathieu Ribatet
References
Schlather, M. (2002) Models for Stationary Max-Stable Random Fields. Extremes 5:1, 33–44.
Smith, R. L. (1990) Max-stable processes and spatial extremes. Unpublished manuscript.
See Also
Examples
## 1. Smith's model
set.seed(8)
x <- y <- seq(0, 10, length = 100)
coord <- cbind(x, y)
data <- rmaxstab(1, coord, "gauss", cov11 = 9/8, cov12 = 0, cov22 = 9/8,
grid = TRUE)
##We change to unit Gumbel margins for visibility
filled.contour(x, y, log(data), color.palette = terrain.colors)
## 2. Schlather's model
data <- rmaxstab(1, coord, cov.mod = "powexp", nugget = 0, range = 3,
smooth = 1, grid = TRUE)
filled.contour(x, y, log(data), color.palette = terrain.colors)
## 3. Brown--Resnick's model **** Only available for non gridded points currently ****
data <- rmaxstab(1, x, cov.mod = "brown", range = 3, smooth = 0.5)
plot(x, log(data), type = "l")
## 4. Extremal-t model *** Very time consuming for 2d grids ***
data <- rmaxstab(1, x, "twhitmat", DoF = 4, nugget = 0, range = 3,
smooth = 0.7)
plot(x, log(data), type = "l")
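## 5. Illustrative use of the 'control' argument (the values below are
## arbitrary): force the direct (exact) method and set the uniform upper bound
data <- rmaxstab(1, x, cov.mod = "powexp", nugget = 0, range = 3,
                 smooth = 1, control = list(method = "exact", uBound = 3.5))
plot(x, log(data), type = "l")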
Map of Switzerland
Description
Plots a map of Switzerland and optionally some cities.
Usage
swiss(city = FALSE, add = FALSE, axes = FALSE, km = TRUE, xlab = "",
ylab = "", ...)
Arguments
city |
Logical. If TRUE, some cities are displayed on the map. |
add |
Logical. Should the map be added to an existing plot? |
axes |
Logical. Should the axes be displayed? |
km |
Logical. If TRUE, the coordinates are expressed in kilometers. |
xlab , ylab |
The x and y-axis labels. |
... |
Optional arguments to be passed to the |
Value
A graphic window.
Author(s)
Dominik Schaub
Examples
swiss()
Elevation in Switzerland
Description
Data required for plotting a Switzerland map with elevation.
Usage
data(swissalt)
Format
This data set contains three R objects. 'alt.mat' is a 192 by 115 matrix giving the elevation at the grid points defined by 'lon.vec' and 'lat.vec'.
Author(s)
Mathieu Ribatet
Examples
data(swissalt)
layout(matrix(c(1,2), 1), width = c(3.5,1))
mar <- rep(0, 4)
op <- par(mar = mar)
breaks <- seq(0, 2000, by = 250)
image(lon.vec, lat.vec, alt.mat, col = terrain.colors(length(breaks) - 1),
xaxt = "n", yaxt = "n", bty = "n", xlab = "", ylab = "", xlim =
c(480, 840), ylim = c(58, 300))
swiss(add = TRUE, city = TRUE)
##Heat bar
mar <- c(3, 3, 3, 4)
par(las = 1, mar = mar)
plot.new()
plot.window(xlim = c(0, 1), ylim = range(breaks), xaxs = "i",
yaxs = "i")
rect(0, breaks[-length(breaks)], 1, breaks[-1], col = terrain.colors(length(breaks) - 1),
border = NA)
axis(4, at = breaks[-length(breaks)])
box()
title("Elevation\n(meters)")
par(op)
Detecting spatial trends graphically
Description
This function performs a symbol plot to help in identifying any spatial trends.
Usage
symbolplot(data, coord, which = "gev", plot.border = NULL,
col = c("#FF000080", "#0000FF80"), plot.legend = TRUE, scale = 1)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
which |
A character string specifying which values should be plotted. Must be one of "gev" (for the GEV marginal parameters), "mean" for pointwise mean or "median" for pointwise median. |
plot.border |
An R function that plots the border of the study
region. If NULL (default), no border is plotted. |
col |
A vector of length 2 giving the colors to be used to fill the circles. |
plot.legend |
Logical. If TRUE, a legend is added to the plot. |
scale |
Positive number. It enlarges (if scale > 1) or reduces (if 0 < scale < 1) the radius of the plotted circles to get a better display. |
Details
This function plots several circles whose centers are located at the weather stations and whose radii are proportional to the departure of the value at that position from the areal mean value.
Value
A plot.
Author(s)
Mathieu Ribatet
Examples
## Symbol plot for the Swiss rainfall data set
data(rainfall)
symbolplot(rain, coord, plot.border = swiss)
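## A further sketch with illustrative settings: pointwise means and smaller circles
symbolplot(rain, coord, which = "mean", plot.border = swiss, scale = 0.75)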
Empirical variogram
Description
This function computes the empirical variogram.
Usage
variogram(data, coord, n.bins, xlab, ylab, angles = NULL, add = FALSE,
xlim = c(0, max(dist)), ...)
Arguments
data |
A matrix representing the data. Each column corresponds to one location. |
coord |
A matrix that gives the coordinates of each location. Each row corresponds to one location. |
n.bins |
The number of bins to be used. If missing, pairwise variogram estimates will be computed. |
xlab , ylab |
The x-axis and y-axis labels. May be missing. Note
that |
angles |
A numeric vector. A partition of the interval
|
add |
Logical. If |
xlim |
A numeric vector of length 2 specifying the x coordinate range. |
... |
Additional options to be passed to the |
Value
A graphic and (invisibly) a matrix with the lag distances and the empirical variogram estimates.
Author(s)
Mathieu Ribatet
See Also
Examples
n.site <- 20
n.obs <- 100
coord <- matrix(runif(2 * n.site, 0, 10), ncol = 2)
data <- rgp(n.obs, coord, "powexp", sill = 2, range = 3, smooth = 1)
variogram(data, coord)
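## Binned estimate (the number of bins is illustrative)
variogram(data, coord, n.bins = 10)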
Van der Corput Sequence
Description
This function generates the three dimensional Van der Corput sequence on the half unit sphere of R^3, possibly randomly rotated.
Usage
vdc(n, rand.rot = FALSE)
Arguments
n |
Integer. The length of the sequence. |
rand.rot |
Logical. Should the sequence be randomly rotated? |
Value
A matrix giving the coordinates of the points in the half unit sphere.
Author(s)
Mathieu Ribatet
References
Freulon, X., de Fouquet, C., 1991. Remarques sur la pratique des bandes tournantes a trois dimensions. Cahiers de geostatistique, Fascicule 1, Centre de Geostatistique, Ecole des Mines de Paris, Fontainebleau, pp. 101–117.
Examples
vdc(10)
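## A quick check, assuming the points are returned as Cartesian coordinates:
## points on the unit sphere should have squared norm one
pts <- vdc(10)
rowSums(pts^2)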
Annual maxima wind gusts in the Netherlands.
Description
Annual maxima of wind gusts (km/h) in the Netherlands, recorded at 35 weather stations over the period 1971–2012.
Usage
data(wind)
Format
This data set contains two R objects: 'coord' and 'wind'. 'wind' is a 42 by 35 matrix giving the wind gust speeds in km/h (with missing values); each column corresponds to one location. 'coord' is a 35 by 3 matrix giving the longitude (degrees), latitude (degrees) and elevation (m).
Author(s)
Mathieu Ribatet
Examples
##require(maps)
data(wind)
par(mar = rep(0, 4))
maps::map(xlim = c(0, 9), ylim = c(47.5, 57.5))
points(coord[,1:2], pch = 15)