R Foundations 2024
November 7, 2024
Numerical precision
Random numbers
Simulation
Optimisation
This material is largely reproduced from Ruth Ripley’s 2013 APTS course
Some basic understanding is needed of how R stores and does arithmetic on numbers to avoid being surprised by the results it gives.
From the R FAQ:
The only numbers that can be represented exactly in R’s numeric type are integers and fractions whose denominator is a power of 2. Other numbers have to be rounded to (typically) 53 binary digits accuracy. As a result, two floating point numbers will not reliably be equal unless they have been computed by the same algorithm, and not always even then.
The function `all.equal()` compares two objects using a numeric tolerance of `.Machine$double.eps^0.5`. If you want much greater accuracy than this you will need to consider error propagation carefully.
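A minimal sketch of these rounding effects, using the standard 0.1 + 0.2 example:

```r
# 0.1, 0.2 and 0.3 cannot be represented exactly in binary,
# so exact comparison fails
0.1 + 0.2 == 0.3           # FALSE
0.1 + 0.2 - 0.3            # a tiny rounding error, about 4.4e-16
all.equal(0.1 + 0.2, 0.3)  # TRUE: equal within tolerance

# all.equal() returns a character string (not FALSE) when objects
# differ, so wrap it in isTRUE() when testing programmatically
isTRUE(all.equal(0.1, 0.3))  # FALSE
```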
For more information, see e.g.
David Goldberg (1991), “What Every Computer Scientist Should Know About Floating-Point Arithmetic”, ACM Computing Surveys, 23/1, 5–48, also available via https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html.
Ruth Ripley’s numerical considerations notes for APTS 2013.
Statistical models are often mathematically intractable
Generate multiple samples for a model by simulation
Use these samples to investigate the behaviour of the model
Need realisations of random variables from various distributions: our random numbers
Details of how to use the random numbers to create samples will not be considered here
Random numbers calculated on a computer are not random.
They are pseudo-random: they follow a deterministic sequence, but over anything short of the very long term they appear to be random.
This is useful, as two sets of random numbers of the same size from the same generator using the same initial value will be exactly the same. This is crucial for reproducibility.
R provides functions for generating random samples from standard distributions.
- Random number generation uses the value of `.Random.seed`, which is in your workspace.
- If `.Random.seed` does not exist, one will be created, with a value generated from the time.
- `set.seed(n)` will set `.Random.seed` to a value derived from the argument `n`.
- Use `set.seed()` to repeat the same sequence of random numbers.
- The kind of generator is also stored in `.Random.seed`. For safety, you can specify the kind on calls to `set.seed()`. Use `set.seed(n, kind = "default")` to ensure you are using R's default.
- You can save and restore `.Random.seed` within your functions, but take care with scope.

set.seed() example
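The example code chunk for this slide did not survive extraction; a minimal sketch of seed-based reproducibility:

```r
set.seed(1)
x <- rnorm(3)
set.seed(1)      # reset the generator to the same state...
y <- rnorm(3)    # ...and the same draws come out
identical(x, y)  # TRUE
```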
sample()

The `sample()` function re-samples from a data vector, with or without replacement.
sample(x, size, replace = FALSE, prob = NULL)

| Argument | Description |
|---|---|
| `x` | a positive integer or a vector |
| `size` | a non-negative integer, the number of items to choose |
| `replace` | logical: should sampling be with replacement? |
| `prob` | a vector of probability weights for obtaining the elements of the vector being sampled |
sample() examples

- `sample(n)`: a random permutation of 1, …, n (see also `sample.int()`)
- `sample(x)`: a random permutation of `x`, for `length(x) > 1`
- `sample(x, replace = TRUE)`
- `sample(x, size = n)`
- `sample(x, n, replace = TRUE)`
- `sample(x, n, replace = TRUE, prob = probs)`
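A few of these calls in action (a sketch with arbitrary inputs):

```r
set.seed(2024)
sample(5)                        # a random permutation of 1:5
sample(c("a", "b", "c"))         # a random permutation of the vector
sample(1:10, size = 3)           # 3 values, without replacement
sample(0:1, 10, replace = TRUE)  # 10 fair coin flips
sample(1:3, 5, replace = TRUE, prob = c(0.6, 0.3, 0.1))  # weighted
```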
Many statistical distributions are built into R. Each has a root name, e.g. norm
for the normal distribution.
There are four functions for each distribution, with different letters prefixing the root name:
- `p` for probability, the cumulative distribution function
- `q` for quantile, the inverse c.d.f.
- `d` for density, the density function, p.d.f.
- `r` for random, a random sample from the distribution.

So, for the normal distribution, we have the functions `pnorm`, `qnorm`, `dnorm`, `rnorm`.
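For example, the four `norm` functions fit together like this:

```r
pnorm(1.96)   # c.d.f.: P(Z <= 1.96), about 0.975
qnorm(0.975)  # quantile: inverse of the above, about 1.96
dnorm(0)      # density at 0: 1/sqrt(2*pi), about 0.399
set.seed(1)
rnorm(3)      # three random N(0, 1) draws
```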
| Distribution | Base name | Parameters |
|---|---|---|
| beta | `beta` | `shape1`, `shape2` |
| binomial | `binom` | `size`, `prob` |
| Cauchy | `cauchy` | `location`, `scale` |
| chi-squared | `chisq` | `df` |
| exponential | `exp` | `rate` |
| F | `f` | `df1`, `df2` |
| gamma | `gamma` | `shape`, `rate` |
| geometric | `geom` | `prob` |
| hypergeometric | `hyper` | `m`, `n`, `k` |
| log-normal | `lnorm` | `meanlog`, `sdlog` |
| logistic | `logis` | `location`, `scale` |
| negative binomial | `nbinom` | `size`, `prob` |
| normal | `norm` | `mean`, `sd` |
| Poisson | `pois` | `lambda` |
| Student t | `t` | `df` |
| uniform | `unif` | `min`, `max` |
| Weibull | `weibull` | `shape`, `scale` |
To obtain a sample of size n
, use the r
function for the distribution with the first argument n
, and subsequent arguments the parameters for that distribution. The parameters often have default values. E.g.
| Call | Distribution |
|---|---|
| `runif(n, min = 0, max = 1)` | Uniform |
| `rnorm(n, mean = 0, sd = 1)` | Normal |
| `rexp(n, rate = 1)` | Exponential with mean 1/rate |
| `rt(n, df)` | t with `df` degrees of freedom |
| `rbinom(n, size, prob)` | Binomial: number of successes in `size` trials, with probability of success `prob` |
Table from https://www.johndcook.com/blog/distributions_r_splus/
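A small sketch of these in use: default parameters can be omitted, and sample statistics approach the distribution's values as n grows.

```r
set.seed(7)
runif(3)                     # min = 0, max = 1 are the defaults
rnorm(3, mean = 10, sd = 2)  # three N(10, 4) draws
mean(rexp(1e5, rate = 2))    # close to 1/rate = 0.5
```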
Sometimes sample()
can lead to nasty surprises.
Consider the following code. Which line(s) might cause a problem?
Run the code to see what happens.
Warning
You need to be careful when using sample()
programmatically (i.e. in your function or simulation).
See ?sample
for a safer version and check you understand how the proposed resample()
function in the examples works.
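The surprise, and the `resample()` helper reproduced from the examples in `?sample`, look like this:

```r
x <- c(10, 5)
sample(x)    # a permutation of c(10, 5), as expected
x <- 10
sample(x)    # a permutation of 1:10, NOT just 10!

# safer version, from the examples in ?sample
resample <- function(x, ...) x[sample.int(length(x), ...)]
resample(x)  # always 10
```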
`optimize` (or `optimise`) finds a (possibly local) minimum of a function in a specified interval, with respect to its first argument.
optimize
f <- function(x, a) {
(x - a)^2
}
xmin <- optimize(f, interval = c(0, 1), a = 1/3)
# or
xmin <- optimize(function(x, a) {(x - a)^2},
interval = c(0, 1), a = 1/3)
xmin
$minimum
[1] 0.3333333
$objective
[1] 0
Note how the (fixed) parameter `a` is passed into `f`.
optimize() arguments

- An interval within which to search must be specified, either by `interval` or by `upper` and `lower`
- To maximise, set `maximum = TRUE`
- Accuracy can be set using the `tol` argument
- Note the order of arguments: `optimize(f, interval, ..., lower, upper, maximum, tol)`
- The `...` can be named or unnamed and will be passed to `f`
- Arguments after the `...` must be specified by name.
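For instance, maximising instead of minimising (a sketch with a made-up concave function):

```r
g <- function(x) -(x - 2)^2 + 5
res <- optimize(g, interval = c(0, 4), maximum = TRUE)
res$maximum    # close to 2 (note the component is now $maximum)
res$objective  # close to 5
```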
`optimize` returns a list with two items:

- `minimum`: the value of `x` at which `f(x)` is minimised
- `objective`: the value of `f(x)` at `x = minimum`

For problems with more than one parameter there is `optim()`, which includes several different algorithms.

optim() methods

- The default is Nelder-Mead: for `n` parameters, choose a polygon (simplex) with `n + 1` vertices. At each step, alter the worst vertex to improve the objective function, by reflection, expansion or contraction.
- `optim()` offers several other methods that can do this; see its `method` argument.

optim()
optim(par, fn, gr=NULL, ..., method=c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", "Brent"), lower=-Inf, upper=Inf, control=list(), hessian=FALSE)
| Argument | Description |
|---|---|
| `par` | starting values of the parameters |
| `fn` | function to be optimised (supply as for `optimize`) |
| `gr` | function to calculate the derivative, only relevant for methods "BFGS", "CG" or "L-BFGS-B" |
| `...` | other parameters for (both) `fn` and `gr` |
| `method` | algorithm to use |
| `lower`, `upper` | vectors of limits for the parameters (only allowed if `method = "L-BFGS-B"` or `method = "Brent"`) |
| `control` | control options (see next slide) |
| `hessian` | logical: whether to return a Hessian estimate calculated by finite differences |
optim(): control options

There are very many. The most important are:

| Option | Description |
|---|---|
| `trace` | a positive integer: higher values give more information |
| `fnscale` | an overall scaling: if negative, maximisation will be performed |
| `parscale` | a vector of scalings for the parameters |
| `maxit` | maximum number of iterations to be performed; may be used to terminate unsuccessful attempts, or to perform one or two steps if convergence is unimportant |
| `type` | used to select the formula for the "CG" method |
optim(): components of return value

| Component | Description |
|---|---|
| `par` | best set of parameters |
| `value` | value of `fn` corresponding to `par` |
| `counts` | number of calls to `fn` and `gr`; excludes calls for the purposes of approximating derivatives or the Hessian |
| `convergence` | error code: 0 = success, 1 = `maxit` reached, 10 = degeneracy of the Nelder-Mead simplex, 51 = warning from "L-BFGS-B", 52 = error from "L-BFGS-B" |
| `message` | further information, if any |
| `hessian` | if requested, a symmetric matrix estimate of the Hessian at the solution |
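Putting these pieces together, a minimal `optim()` sketch (with a made-up two-parameter quadratic objective):

```r
fn <- function(p) (p[1] - 1)^2 + (p[2] - 2)^2
res <- optim(c(0, 0), fn)  # Nelder-Mead is the default method
res$par          # close to c(1, 2)
res$value        # close to 0
res$convergence  # 0 means success
```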
Use optim()
to find an approximate global minimum
Use optim()
again to improve locally (aiming for an objective of 67.46773).
- Take a look at the introduction to the Wikipedia page for Rosenbrock's banana function
- Copy this code for the `fn` and `gr`
- Starting from `c(0, 0)`, experiment with the `optim()` function to see which methods converge

This material is reproduced in large part directly from the APTS 2013/14 resources by Ruth Ripley:
Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).