
Question

In AWS, let's say you created an EC2 instance configured with a bootstrap script (e.g. to install packages such as httpd when the instance first starts). From inside the instance shell, issue a command to see the content of that bootstrap script.

Answer

sudo curl http://169.254.169.254/latest/**user-data**

^^ note that instead of getting just the user-data script with the command above, you can get instance metadata, such as public-ipv4, by issuing a similar command: sudo curl http://169.254.169.254/latest/meta-data/public-ipv4






Chretien et al. (2014)'s review suggests the need for:

- use of good practices in influenza forecasting (e.g., sensitivity analysis);
- direct comparisons of diverse approaches;
- assessment of model calibration;
- integration of subjective expert input;
- operational research in pilot, real-world applications; and
- improved mutual understanding among modelers and public health officials




The branching process approximation is a CTMC; near the disease-free equilibrium, the rates are linear (Table 2).

Three important assumptions underlie the branching process approximation:

- Each infectious individual behaves independently of the other infectious individuals. This is reasonable if a small number of infectious individuals is introduced into a large, homogeneously mixed population (assumption (3)).
- Each infectious individual has the same probability of recovery and the same probability of transmitting an infection. This is reasonable in a homogeneously mixed population with constant transmission and recovery rates, *b* and *g*.
- The susceptible population is sufficiently large.
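These dynamics are easy to see in simulation. The sketch below (my own illustration, not from the source, with hypothetical rates b = 3 and g = 1) runs the embedded jump chain of the linear birth-death approximation: each event is a transmission with probability b/(b+g), otherwise a recovery, and hitting a cap stands in for "take-off".

```python
import random

def branching_process(b, g, i0, cap, rng):
    """Embedded jump chain of the linear birth-death approximation:
    each event is a transmission with probability b / (b + g),
    otherwise a recovery; stop at extinction (0) or the cap."""
    i = i0
    while 0 < i < cap:
        if rng.random() < b / (b + g):
            i += 1   # transmission
        else:
            i -= 1   # recovery
    return i

rng = random.Random(1)
# Hypothetical rates with R0 = b/g = 3 > 1.
runs = [branching_process(b=3.0, g=1.0, i0=1, cap=200, rng=rng) for _ in range(2000)]
extinct = sum(r == 0 for r in runs) / len(runs)   # theory: extinction prob = g/b = 1/3
```

Each run either dies out quickly or grows to the cap; for i0 initial infectives the theoretical extinction probability is (g/b)^i0 when b > g.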


Question

For a few initial infectious individuals, the branching process either [...] or hits zero.

Answer

grows exponentially


For a few initial infectious individuals, the branching process either grows exponentially or hits zero.

Question

For a few initial infectious individuals, the branching process either grows exponentially or [...].

Answer

hits zero


For a few initial infectious individuals, the branching process either grows exponentially or hits zero.

This paper (Osthus et al., 2019) makes contributions and advances in the following ways.

- We introduce and demonstrate the importance of discrepancy modeling to the growing and consequential field of flu forecasting. Discrepancy modeling is done hierarchically, allowing information to be shared across available flu seasons.
- We demonstrate the superiority of our approach relative to all models that competed in the CDC’s 2015–2016 and 2016–2017 flu forecasting challenges, providing yet another instance where discrepancy modeling is not only conceptually appealing but also practically effective.
- In an effort to advance flu forecasting capabilities, much effort has been spent identifying possibly useful, nontraditional data sources such as Google (Ginsberg et al., 2009) and Wikipedia (Generous et al., 2014). Alternatively, as we demonstrate, flu forecasting can be improved through carefully made modeling choices, making use of the available traditional data hierarchically.


#2018_Adalja_etal_pandemic_potential_pathogens #reading

Attributes likely to be essential components of any GCBR-level pathogen include:

- efficient human-to-human transmissibility,
- an appreciable case fatality rate,
- the absence of an effective or widely available medical countermeasure,
- an immunologically naïve population,
- virulence factors enabling immune system evasion, and
- respiratory mode of spread.

Additionally, the ability to transmit during incubation periods and/or the occurrence of mild illnesses would further augment spread.


Question

In AWS, let's say you have a VPC with a private subnet and you want instances in the private subnet to talk to S3, but you don't want to configure a NAT for the subnet. How do you achieve this? (Be as specific as possible: two steps take place to get this achieved.)

Answer

A: 1) you create a **VPC gateway endpoint for S3** (from the "Endpoints" section of the VPC console), and 2) **you update the route table** for your private subnet to send S3 traffic to the newly created VPC gateway endpoint (NOTE: the **route is added for you automatically** when you create the S3 gateway endpoint)


#reading

“[Statistical] algorithmic invention is a free-wheeling and adventurous enterprise, with inference playing catch-up as it strives to assess the accuracy, good or bad, of some hot new algorithmic methodology.” [Efron and Hastie, 2016, p. xvi]


#reading

Typically, estimators can be divided into two types: point estimators and set estimators.

- A point estimator maps from the sample space X to a point in the parameter space Θ.
- A set estimator maps from X to a set in Θ.


#reading

Cox and Hinkley (1974; p12) observe that, if we are interested in comparing two possible values of θ using the likelihood, then we should consider the ratio of the likelihoods rather than, for example, the difference
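A concrete illustration (my own, hypothetical numbers): with a binomial model and x = 7 successes in n = 10 trials, comparing θ = 0.7 against θ = 0.5 by the ratio cancels the comb(n, x) counting constant, which a difference would not.

```python
from math import comb

def likelihood(theta, x, n):
    """Binomial likelihood L(theta; x) = f(x | theta)."""
    return comb(n, x) * theta ** x * (1 - theta) ** (n - x)

x, n = 7, 10   # hypothetical data: 7 successes in 10 trials
ratio = likelihood(0.7, x, n) / likelihood(0.5, x, n)
# The comb(n, x) constant cancels in the ratio, so the comparison does not
# depend on how the sample space is counted; the difference L1 - L2 would.
```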



Question

[...] is an often used and effective modeling approach in the field of computer experiments, where systematic deviations between mechanistic models and data can be common (e.g., Kennedy and O’Hagan, 2001; Bayarri et al., 2007; Higdon et al., 2008; Brynjarsdóttir and O’Hagan, 2014).

Answer

Discrepancy modeling


Discrepancy modeling is an often used and effective modeling approach in the field of computer experiments, where systematic deviations between mechanistic models and data can be common (e.g., Kennedy and O’Hagan, 2001; Bayarri et al., 2007; Higdon et al., 2008; Brynjarsdóttir and O’Hagan, 2014).

#reading

a sufficient statistic S is a vector


Note that, in general, a sufficient statistic S is a vector and that if S is sufficient then so is any one-to-one function of S.

#reading

if S is sufficient then so is any one-to-one function of S


Note that, in general, a sufficient statistic S is a vector and that if S is sufficient then so is any one-to-one function of S.

#reading

Any function of a random variable X is termed a statistic.


Definition 2 (Statistic; estimator) Any function of a random variable X is termed a statistic. If T is a statistic then T = t(X) is a random variable and t = t(x) the corresponding value of the random variable when X = x. In general, T is a vector. A statistic designed to estimate θ is termed an estimator.

#reading

If T is a statistic then T = t(X) is a random variable and t = t(x) the corresponding value of the random variable when X = x.


Definition 2 (Statistic; estimator) Any function of a random variable X is termed a statistic. If T is a statistic then T = t(X) is a random variable and t = t(x) the corresponding value of the random variable when X = x. In general, T is a vector. A statistic designed to estimate θ is termed an estimator.

#reading

A statistic designed to estimate θ is termed an estimator.


Any function of a random variable X is termed a statistic. If T is a statistic then T = t(X) is a random variable and t = t(x) the corresponding value of the random variable when X = x. In general, T is a vector. A statistic designed to estimate θ is termed an estimator.

#reading

Typically, estimators can be divided into two types: point estimators and set estimators.


Typically, estimators can be divided into two types: point estimators and set estimators. A point estimator maps from the sample space X to a point in the parameter space Θ. A set estimator maps from X to a set in Θ.

#reading

Point estimators map from the sample space X to a point in the parameter space Θ.


Typically, estimators can be divided into two types: point estimators and set estimators. A point estimator maps from the sample space X to a point in the parameter space Θ. A set estimator maps from X to a set in Θ.

#reading

Set estimators map from the sample space X to a set in Θ.


Typically, estimators can be divided into two types: point estimators and set estimators. A point estimator maps from the sample space X to a point in the parameter space Θ. A set estimator maps from X to a set in Θ.

Tags

#reading

Question

The likelihood for θ given observations x is L_{X}(θ; x) = [...]

Answer

f_{X}(x |θ)


Definition 3 (Likelihood) The likelihood for θ given observations x is LX(θ; x) = fX(x |θ), θ ∈ Θ regarded as a function of θ for fixed x.

#reading

To compare two parameters based on likelihood, we should consider the ratio of the likelihoods.


Cox and Hinkley (1974; p12) observe that, if we are interested in comparing two possible values of θ, θ1 and θ2 say, using the likelihood, then we should consider the ratio of the likelihoods rather than, for example, the difference

Tags

#reading

Question

To compare two parameters based on likelihood, we should consider the [...] of the likelihoods.

Answer

ratio


To compare two parameters based on likelihood, we should consider the ratio of the likelihoods.

Tags

#reading

Question

[...] observe that, if we are interested in comparing two possible values of θ, θ_{1} and θ_{2} say, using the likelihood, then we should consider the ratio of the likelihoods rather than, for example, the difference

Answer

Cox and Hinkley (1974; p12)


Cox and Hinkley (1974; p12) observe that, if we are interested in comparing two possible values of θ, θ1 and θ2 say, using the likelihood, then we should consider the ratio of the likelihoods rather than, for example, the difference.

Tags

#reading

Question

Answer


The weak likelihood principle: If X = x and X = y are two observations for the experiment EX = {X, Θ, fX(x |θ)} such that LX(θ; y) = c(x, y)LX(θ; x) for all θ ∈ Θ then the inference about θ should be the same irrespective of whether X = x or X = y was observed.

Tags

#reading

Question

[...]: If *X = x* and *X = y* are two observations for the experiment *E*_{X} = {X, Θ, f_{X}(x |θ)} such that *L*_{X}(θ; y) = c(x, y)L_{X}(θ; x) for all *θ ∈ Θ* then the inference about *θ* should be the same irrespective of whether *X = x* or *X = y* was observed.

Answer

The weak likelihood principle


The weak likelihood principle: If X = x and X = y are two observations for the experiment EX = {X, Θ, fX(x |θ)} such that LX(θ; y) = c(x, y)LX(θ; x) for all θ ∈ Θ then the inference about θ should be the same irrespective of whether X = x or X = y was observed.

Tags

#reading

Question

The weak likelihood principle: If *X = x* and *X = y* are two observations for the experiment *E*_{X} = {X, Θ, f_{X}(x |θ)} such that *L*_{X}(θ; y) = c(x, y)L_{X}(θ; x) for all *θ ∈ Θ* then *[...]*

Answer

the inference about *θ* should be the same irrespective of whether *X = x* or *X = y* was observed.


The weak likelihood principle: If X = x and X = y are two observations for the experiment EX = {X, Θ, fX(x |θ)} such that LX(θ; y) = c(x, y)LX(θ; x) for all θ ∈ Θ then the inference about θ should be the same irrespective of whether X = x or X = y was observed.


Tags

#reading

Question

Answer

The strong likelihood principle


The strong likelihood principle: Let EX and EY be two experiments which have the same parameter θ. If X = x and Y = y are two observations such that LY (θ; y) = c(x, y)LX(θ; x) for all θ ∈ Θ then the inference about θ should be the same irrespective of whether X = x or Y = y was observed.


Tags

#reading

Question

Answer

have the same parameter *θ*


The strong likelihood principle: Let EX and EY be two experiments which have the same parameter θ. If X = x and Y = y are two observations such that LY (θ; y) = c(x, y)LX(θ; x) for all θ ∈ Θ then the inference about θ should be the same irrespective of whether X = x or Y = y was observed.


Tags

#reading

Question

Answer


The strong likelihood principle: Let EX and EY be two experiments which have the same parameter θ. If X = x and Y = y are two observations such that LY (θ; y) = c(x, y)LX(θ; x) for all θ ∈ Θ then the inference about θ should be the same irrespective of whether X = x or Y = y was observed.


Tags

#reading

Question

Answer

the inference about *θ* should be the same irrespective of whether *X = x* or *Y = y* was observed.


The strong likelihood principle: Let EX and EY be two experiments which have the same parameter θ. If X = x and Y = y are two observations such that LY (θ; y) = c(x, y)LX(θ; x) for all θ ∈ Θ then the inference about θ should be the same irrespective of whether X = x or Y = y was observed.


Tags

#reading

Question

Answer

Sufficient statistic


Sufficient statistic: A statistic S = s(X) is sufficient for θ if the conditional distribution of X, given the value of s(X) (and θ), fX|S(x | s, θ), does not depend upon θ.


Tags

#reading

Question

Answer

the conditional distribution of *X*, given the value of *s(X)* (and *θ*), *f*_{X|S}(x | s, θ), does not depend upon *θ*.


Sufficient statistic: A statistic S = s(X) is sufficient for θ if the conditional distribution of X, given the value of s(X) (and θ), fX|S(x | s, θ), does not depend upon θ.
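A small enumeration makes the definition concrete. The Bernoulli example below is a standard illustration (my choice, not from the text): for X = (X1, ..., Xn) i.i.d. Bernoulli(θ), S = ΣXi is sufficient, and conditioning on S = s gives every sequence with s ones the same probability 1/C(n, s), whatever θ is.

```python
from itertools import product
from math import comb

def conditional_given_sum(theta, n=3):
    """f_{X|S}(x | s, theta) for X iid Bernoulli(theta), with S = sum(X)."""
    cond = {}
    for x in product([0, 1], repeat=n):
        s = sum(x)
        f_x = theta ** s * (1 - theta) ** (n - s)               # f(x | theta)
        f_s = comb(n, s) * theta ** s * (1 - theta) ** (n - s)  # f(s | theta)
        cond[x] = f_x / f_s                                     # equals 1 / comb(n, s)
    return cond

c1, c2 = conditional_given_sum(0.2), conditional_given_sum(0.9)
# The conditional distribution is the same for both theta values:
theta_free = all(abs(c1[x] - c2[x]) < 1e-12 for x in c1)
```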


#reading

It should be clear from the definition of a sufficient statistic that the sufficiency of S for θ is dependent upon the choice of the family of distributions in the model.


#reading

Sufficiency for a parameter θ can be viewed as the idea that S captures all of the information about θ contained in X. Having observed S, nothing further can be learnt about θ by observing X as f_{X|S}(x | s, θ) has no dependence on θ.


#reading

Following Section 2.2(iii) of Cox and Hinkley (1974), we may interpret sufficiency as follows.

- Consider two individuals who both assert the model E = {X, Θ, f_{X}(x |θ)}.
- The first individual observes x directly.
- The second individual also observes x, but in a two-stage process:
  - They first observe a value s(x) of a sufficient statistic S with distribution f_{S}(s |θ).
  - They then observe the value x of the random variable X with distribution f_{X|S}(x |s), which does not depend upon θ.

It may well then be reasonable to argue that, as the final distribution for X is identical for the two individuals, the conclusions drawn from the observation of a given x should be identical for the two individuals. That is, they should make the same inference about θ.

For the second individual, when sampling from f_{X|S}(x |s) they are sampling from a fixed distribution and so, assuming the correctness of the model, only the first stage is informative: all of the knowledge about θ is contained in s(x).

If one takes these two statements together, then the inference to be made about θ depends only on the value s(x) and not the individual values x_{i} contained in x.


#reading

There are two broad approaches to statistical inference, generally termed the classical (or frequentist) approach and the Bayesian approach.


#reading

In classical statistics, the parameter is viewed as a fixed unknown constant and inferences are made utilising the distribution f_{X}(x |θ) even after the data x has been observed. Conversely, in a Bayesian approach parameters are treated as random and so may be equipped with a probability distribution.


#reading

In a classical approach to statistical inference, no further probabilistic assumptions are made once the parametric model E = {X, Θ, f_{X}(x |θ)} is specified. In particular, θ is treated as an unknown constant and interest centres on constructing good methods of inference.


#reading

Intuitively, the MLE is a reasonable choice for an estimator: it’s the value of θ which makes the observed sample most likely


#reading

The MLE satisfies the invariance property [Theorem 7.2.10, Casella and Berger (2002)]: if θ̂ is the MLE of θ then, for any function g(θ), the MLE of g(θ) is g(θ̂).
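A grid-search sketch with a hypothetical Bernoulli sample (my own illustration) shows both points: numerically maximising the log-likelihood recovers θ̂ equal to the sample mean, and by invariance the MLE of the odds g(θ) = θ/(1 − θ) is simply g(θ̂), with no separate maximisation over the odds parameterisation.

```python
import math

data = [1, 0, 1, 1, 0, 1, 1, 1]   # hypothetical Bernoulli sample, mean 0.75

def loglik(theta):
    """Bernoulli log-likelihood for the sample above."""
    s, n = sum(data), len(data)
    return s * math.log(theta) + (n - s) * math.log(1 - theta)

# Numerical MLE by grid search over (0, 1)
grid = [i / 10000 for i in range(1, 10000)]
theta_hat = max(grid, key=loglik)          # equals the sample mean, 0.75

# Invariance: the MLE of the odds g(theta) = theta / (1 - theta) is g(theta_hat)
odds_hat = theta_hat / (1 - theta_hat)
```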


#reading

Efron and Hastie (2016) consider that there are three ages of statistical inference:

- the pre-computer age (essentially the period from 1763 and the publication of Bayes’ rule up until the 1950s),
- the early-computer age (from the 1950s to the 1990s),
- and the current age (a period of computer-dependence with enormously ambitious algorithms and model complexity).

With these developments in mind, it is clear that there exists a hierarchy of statistical models.

- Models where f_{X}(x |θ) has a known analytic form.
- Models where f_{X}(x |θ) can be evaluated.
- Models where we can simulate X from f_{X}(x |θ).

Between the first case and the second case exist models where f_{X}(x |θ) can be evaluated up to an unknown constant, which may or may not depend upon θ. In the first case, we might be able to derive an analytic expression for θ̂(x) or to prove that f_{X}(x |θ) has a unique maximum, so that any numerical maximisation will converge to θ̂(x).


#reading

The choice of algorithm is critical: the MLE is a good method of inference only if:

- you can prove that it has good properties for your choice of f_{X}(x |θ), and
- you can prove that the algorithm you use to find the MLE of f_{X}(x |θ) does indeed do this.

The second point arises once the choice of estimator has been made.


#reading

An estimator *T = t(X)* is said to be unbiased if *bias(T | θ) = E(T | θ) − θ* is zero for all θ ∈ Θ. This is a superficially attractive criterion but it can lead to unexpected results (which are not sensible estimators) even in simple cases.
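A classic instance of such an unexpected result (a standard textbook example, not from this text): for X ~ Poisson(λ), the only unbiased estimator of e^{−2λ} is T = (−1)^X, which only ever takes the values +1 and −1 — correct on average, absurd as an estimate. The sketch checks the unbiasedness by simulation.

```python
import math
import random

# For X ~ Poisson(lam), T = (-1)**X is the unique unbiased estimator of
# exp(-2*lam): E[(-1)**X] = exp(-2*lam). Yet each individual estimate is +-1.
rng = random.Random(0)

def poisson(lam, rng):
    """Draw from Poisson(lam) by Knuth's product-of-uniforms method."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

lam = 1.0
samples = [(-1) ** poisson(lam, rng) for _ in range(200000)]
mean_T = sum(samples) / len(samples)
target = math.exp(-2 * lam)   # the quantity being estimated without bias
```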


Tags

#reading

Question

An estimator *T = t(X)* is said to be unbiased if [...]. This is a superficially attractive criterion but it can lead to unexpected results (which are not sensible estimators) even in simple cases.

Answer


An estimator T = t(X) is said to be unbiased if bias(T | θ) = E(T | θ) − θ is zero for all θ ∈ Θ. This is a superficially attractive criterion but it can lead to unexpected results (which are not sensible estimators) even in simple cases.


#reading


An estimator T = t(X) is said to be unbiased if bias(T | θ) = E(T | θ) − θ is zero for all θ ∈ Θ. This is a superficially attractive criterion but it can lead to unexpected results (which are not sensible estimators) even in simple cases.


Tags

#reading

Question

Answer

E(T | θ) − θ


bias(T | θ) = E(T | θ) − θ


#reading

A drawback with the bias is that it is not, in general, transformation invariant. For example, if T is an unbiased estimator of θ then T^{−1} is not, in general, an unbiased estimator of θ^{−1} as E(T^{−1} | θ) ≠ 1/E(T | θ) = θ^{−1} .
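A quick simulation shows the gap (hypothetical numbers of my choosing: exponential data with mean θ = 2, samples of size n = 5). T = X̄ is unbiased for θ, but E(T^{−1}) = n/((n − 1)θ) = 0.625 here, not θ^{−1} = 0.5.

```python
import random

rng = random.Random(0)
theta, n, reps = 2.0, 5, 100000   # exponential mean theta, sample size n

inv_t = []
for _ in range(reps):
    sample = [rng.expovariate(1 / theta) for _ in range(n)]  # each has mean theta
    t = sum(sample) / n                                      # unbiased for theta
    inv_t.append(1 / t)

e_inv_t = sum(inv_t) / reps
# 1/E(T) = 1/theta = 0.5, but E(1/T) = n / ((n - 1) * theta) = 0.625 here,
# so T**-1 is a biased estimator of theta**-1.
```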


#reading

For an estimator T, a better criterion than being unbiased is that T has small mean square error (MSE)


#reading

MSE(T | θ) = Var(T | θ) + bias(T | θ)^{2} .
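The decomposition can be checked mechanically; with Monte Carlo draws it even holds exactly as an algebraic identity when the same draws are used on both sides. The sketch below uses a deliberately biased shrinkage estimator 0.8·X̄ of a normal mean (hypothetical choices of mine).

```python
import random

rng = random.Random(0)
theta, n, reps = 1.0, 10, 50000

ests = []
for _ in range(reps):
    x = [rng.gauss(theta, 1.0) for _ in range(n)]
    ests.append(0.8 * sum(x) / n)   # deliberately biased shrinkage estimator

mean_t = sum(ests) / reps
var_t = sum((t - mean_t) ** 2 for t in ests) / reps
bias = mean_t - theta
mse_direct = sum((t - theta) ** 2 for t in ests) / reps
mse_decomp = var_t + bias ** 2      # MSE(T | theta) = Var(T | theta) + bias**2
```

Theory gives Var = 0.64/n = 0.064 and bias² = 0.04, so MSE ≈ 0.104; both computations agree.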


Tags

#reading

Question

[...] = Var(T | θ) + bias(T | θ)^{2} .

Answer

MSE(T | θ)


MSE(T | θ) = Var(T | θ) + bias(T | θ)^{2} .


Tags

#reading

Question

MSE(T | θ) = [...] + bias(T | θ)^{2} .

Answer

Var(T | θ)


MSE(T | θ) = Var(T | θ) + bias(T | θ)^{2} .


Tags

#reading

Question

MSE(T | θ) = Var(T | θ) + [...] .

Answer

bias(T | θ)^{2}


MSE(T | θ) = Var(T | θ) + bias(T | θ)^{2} .


#reading

It is the properties of the distribution of the estimator T, known as the sampling distribution, across the range of possible values of θ that are used to determine whether or not T is a good inference rule.

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

* *

#reading

The assessment of whether T is a good estimator is made not for the observed data x but based on the distributional properties of X

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

* *

#reading

A key principle of the classical approach is that: 1. every algorithm is certified by its sampling distribution, and 2. the choice of algorithm depends on this certification.

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

* *

#reading

If we accept, as our working hypothesis, that one of the elements in the family of distributions is true (i.e., that there is a θ∗ ∈ Θ which is the true value of θ), then the corresponding predictive distribution f_{Y|X}(y | x, θ∗) is the true predictive distribution for Y. The classical solution is to replace θ∗ by plugging in an estimate based on x.

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

* *

#reading

In a Bayesian approach to statistical inference, we consider that, in addition to the parametric model E = {X, Θ, f_X(x | θ)}, the uncertainty about the parameter θ prior to observing X can be represented by a prior distribution π on θ.

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

* *

#reading

O’Hagan and Forster (2004; p5) note, “the posterior distribution encapsulates all that is known about θ following the observation of the data x, and can be thought of as comprising an all-embracing inference statement about θ.”

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

* *

#reading

In contrast to the plug-in classical approach to prediction, the Bayesian approach can be viewed as integrate-out. If E_B = {X × Y, Θ, f_{X,Y}(x, y | θ), π(θ)} is our Bayesian model for (X, Y) and we are interested in prediction for Y given X = x, then we can integrate out θ to obtain the parameter-free conditional distribution f_{Y|X}(y | x):

f_{Y|X}(y | x) = ∫_Θ f_{Y|X}(y | x, θ) π(θ | x) dθ.

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |
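A minimal numerical sketch of this integrate-out step, assuming a toy Beta(2, 2) prior and binomial coin-toss data (an invented example, not one from the text): the predictive probability of a head on the next toss is obtained by averaging f_{Y|X}(y | x, θ) over the posterior π(θ | x).

```python
# Toy integrate-out example (assumed for illustration): Beta(2, 2) prior on a
# coin's heads probability θ, data x = 7 heads in 10 tosses.  The predictive
# P(head next) = ∫ θ π(θ | x) dθ is computed on a grid.
def posterior_unnorm(theta, heads=7, tosses=10, a=2, b=2):
    # likelihood f_X(x | θ) times prior π(θ), up to a normalising constant
    return theta ** (heads + a - 1) * (1 - theta) ** (tosses - heads + b - 1)

grid = [(i + 0.5) / 10_000 for i in range(10_000)]   # midpoint rule on (0, 1)
weights = [posterior_unnorm(t) for t in grid]
norm = sum(weights)
pred_head = sum(t * w for t, w in zip(grid, weights)) / norm

print(round(pred_head, 3))  # matches the Beta(9, 5) posterior mean 9/14 ≈ 0.643
```

Here f_{Y|X}(y | x, θ) = θ for y = head, so integrating out θ reduces to the posterior mean; in general the same grid average applies to any predictive density.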

* *

#reading

Whilst the posterior distribution expresses all of our knowledge about the parameter θ given the data x, in order to express this knowledge in clear and easily understood terms we need to derive appropriate summaries of the posterior distribution. Typical summaries include point estimates, interval estimates, and probabilities of specified hypotheses.

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

* *

#reading

E(θ |X), the posterior expectation, minimises the posterior expected square error and the minimum value of this error is Var(θ |X), the posterior variance.

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |
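This minimising property is easy to check numerically on an arbitrary discrete posterior; the support and probabilities below are made up purely for illustration.

```python
# Check that the posterior mean minimises posterior expected square error,
# using an invented discrete posterior as a toy example.
support = [0.0, 1.0, 2.0]
probs = [0.2, 0.5, 0.3]

def exp_sq_error(d):
    # posterior expected square error of reporting the point estimate d
    return sum(p * (theta - d) ** 2 for theta, p in zip(support, probs))

post_mean = sum(t * p for t, p in zip(support, probs))
candidates = [i / 100 for i in range(0, 201)]   # grid search over [0, 2]
best = min(candidates, key=exp_sq_error)

print(best, post_mean)  # both ≈ 1.1
```

The minimum value of exp_sq_error is the posterior variance, as stated above.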

* *

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

the 4-wk time horizon, beyond which the average of historical incidences (i.e., a null model) becomes the most reliable option for prediction. From a public health standpoint, a longer horizon, in the order of 2 mo or more, would be particularly useful to ramp up interventions and adjust hospital surge capacity. Ideally, even longer timescales should be considered so that the prediction of epidemic intensity (epidemic size) and severity (total number of hospitalizations and deaths) aligns with the vaccine manufacturing process.

* *

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

prediction. From a public health standpoint, a longer horizon, in the order of 2 mo or more, would be particularly useful to ramp up interventions and adjust hospital surge capacity. Ideally, even longer timescales should be considered so that the prediction of epidemic intensity (epidemic size) and severity (total number of hospitalizations and deaths) aligns with the vaccine manufacturing process.

* *

Tags

#2018_Adalja_etal_pandemic_potential_pathogens #reading

Question

microbiologically specific diagnoses of infectious disease syndromes in strategic or sentinel locations around the world should become more routine, especially now that [...]

Answer

diagnostics are becoming more powerful and available

status | not learned | measured difficulty | 37% [default] | last interval [days] | |||
---|---|---|---|---|---|---|---|

repetition number in this series | 0 | memorised on | scheduled repetition | ||||

scheduled repetition interval | last repetition or drill |

microbiologically specific diagnoses of infectious disease syndromes in strategic or sentinel locations around the world should become more routine, especially now that diagnostics are becoming more powerful and available

* *

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

…et al., 2019) makes contributions and advances in the following ways. We introduce and demonstrate the importance of discrepancy modeling to the growing and consequential field of flu forecasting. Discrepancy modeling is done hierarchically, allowing information to be shared across available flu seasons. We demonstrate the superiority of our approach relative to all models that competed in the CDC's 2015–2016 and 2016–2017 flu forecasting challenges, providing yet another instance where

* *

SET assigns the statement literally; LET assigns the result after executing the statement:

Statement | Value of vVariable
---|---
SET vVariable = 1 + 2; | 1 + 2
LET vVariable = 1 + 2; | 3

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |
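An analogy in Python (illustrative only, not QlikView script): SET behaves like storing the raw text of the expression, while LET evaluates it before storing the result.

```python
# Illustrative analogy: QlikView SET keeps the text of the expression,
# while LET evaluates it first and stores the result.
expression = "1 + 2"

set_variable = expression        # SET vVariable = 1 + 2;  -> '1 + 2'
let_variable = eval(expression)  # LET vVariable = 1 + 2;  -> 3

print(set_variable)  # 1 + 2
print(let_variable)  # 3
```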

* *

DUAL data type Besides the usual data types, QlikView has a data type that can be interpreted as both a number and a string—the DUAL data type. This data type is often used for months, where a month field may return both an abbreviation (Jun) and a number (6). Dual values are created using the Dual() function. For example: Dual('June', 6)

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |
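A hypothetical Python sketch of the same idea, one value carrying both a textual and a numeric face (the class below is an analogy, not QlikView's implementation):

```python
# Hypothetical sketch of QlikView's DUAL idea: a single value with both a
# display string and a numeric representation.
class Dual:
    def __init__(self, text, number):
        self.text = text      # used when the value is shown as a string
        self.number = number  # used when the value is sorted or computed

    def __str__(self):
        return self.text

    def __int__(self):
        return self.number

june = Dual('June', 6)  # analogous to Dual('June', 6) in QlikView script
print(str(june))  # June
print(int(june))  # 6
```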

* *

It is important to understand that, underneath, the DateTime data type is represented by a floating point number. For example, 12 noon on May 22nd 2012 is stored as 41,051.5. The whole number 41,051 represents the date; it is the number of days that have passed since December 31st, 1899. The fractional part 0.5 represents the time. As a day (24 hours) is 1, an hour is 1/24 and 12 hours is 12/24, which is equal to 1/2 or 0.5. Knowing this, we can use many of the numeric functions that we saw earlier to perform date and time calculations. For example, we can use the Floor()function to remove the time information from a date.

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |
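The arithmetic can be illustrated in Python; the serial value is the one quoted above, and QlikView's `Floor()` corresponds to `math.floor` here:

```python
import math

# QlikView-style date serial: whole part counts days, fraction is time of day.
serial = 41051.5                 # 12 noon on May 22nd, 2012 (per the text)

date_part = math.floor(serial)   # 41051 -> drops the time, like Floor()
time_part = serial - date_part   # 0.5 of a day
hours = time_part * 24           # 12.0 -> 12 noon

print(date_part, hours)  # 41051 12.0
```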

* *


Hiding fields Key fields can cause confusion in the QlikView frontend. As these fields are used in multiple tables, they can return unexpected results when used in an aggregation function. It is therefore advisable to hide these fields from the frontend view. There are two variables that can be used to hide fields: HidePrefix and HideSuffix. The first variable hides all field names that start with a specific text string and the second one hides all field names that end with a specific text string. To hide our key fields, we can add the following statement at the start of our script: SET HidePrefix='%';

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |
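In spirit, HidePrefix works like filtering field names by prefix; a short Python sketch of the effect (the field names are assumptions for illustration):

```python
# Illustrative analogy of SET HidePrefix='%': fields whose names start with
# the prefix are hidden from the frontend field lists.
hide_prefix = '%'
fields = ['%CustomerKey', '%ProductKey', 'Sales', 'OrderDate']

visible = [f for f in fields if not f.startswith(hide_prefix)]
print(visible)  # ['Sales', 'OrderDate']
```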

* *

A subroutine is a reusable block of script that can be called from other places in the QlikView script by using the CALL statement. This block is formed using the SUB and END SUB control statements. Subroutines can contain parameters so that processing can be done in a flexible manner. As the QlikView script is processed in sequential order, the subroutine has to be defined before it can be called. Therefore, it is advisable to create subroutines as early as possible in the script. When executing the script, everything between the SUB and END SUB control statements is ignored by QlikView. The subroutine is only run when it is called via the CALL statement.

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |
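The define-before-call pattern maps onto an ordinary function in most languages; a Python analogy, where the StoreTable task and its parameters are hypothetical:

```python
# Analogy for QlikView's SUB ... END SUB / CALL pattern: the reusable block is
# defined before it is called, and parameters make it flexible.
def store_table(table_name, path):
    """Build the STORE statement a hypothetical QlikView subroutine might run."""
    return f"STORE {table_name} INTO {path}/{table_name}.qvd;"

# equivalent of: CALL StoreTable('Sales', 'Data');
statement = store_table('Sales', 'Data')
print(statement)  # STORE Sales INTO Data/Sales.qvd;
```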

* *

Question

For the same basic reproductive ratio and average infectious period, larger values of the Gamma distribution parameter *n* lead to a [...] in prevalence and an epidemic of shorter duration.

Answer

steeper increase

status | not learned | measured difficulty | 37% [default] | last interval [days] | |||
---|---|---|---|---|---|---|---|

repetition number in this series | 0 | memorised on | scheduled repetition | ||||

scheduled repetition interval | last repetition or drill |

For the same basic reproductive ratio and average infectious period larger values of the Gamma distribution parameter n lead to a steeper increase in prevalence and an epidemic of shorter duration.

* *

Question

For the same basic reproductive ratio and average infectious period, larger values of the Gamma distribution parameter *n* lead to a steeper increase in prevalence and an epidemic of [...] duration.

Answer

shorter

status | not learned | measured difficulty | 37% [default] | last interval [days] | |||
---|---|---|---|---|---|---|---|

repetition number in this series | 0 | memorised on | scheduled repetition | ||||

scheduled repetition interval | last repetition or drill |

For the same basic reproductive ratio and average infectious period larger values of the Gamma distribution parameter n lead to a steeper increase in prevalence and an epidemic of shorter duration.

* *

Question

Reich et al. (2019)'s results should not be used to extrapolate hypothetical accuracy in pandemic settings, as [...]

Answer

these models were optimized specifically to forecast seasonal influenza

status | not learned | measured difficulty | 37% [default] | last interval [days] | |||
---|---|---|---|---|---|---|---|

repetition number in this series | 0 | memorised on | scheduled repetition | ||||

scheduled repetition interval | last repetition or drill |

Reich et al. (2019)'s results should not be used to extrapolate hypothetical accuracy in pandemic settings, as these models were optimized specifically to forecast seasonal influenza

* *

Question

[...] and mean absolute percent error are the most common metrics for forecasts of incidence (i.e., daily, weekly, or monthly incidence; peak incidence; or cumulative incidence)

Answer

mean (or median) absolute error

status | not learned | measured difficulty | 37% [default] | last interval [days] | |||
---|---|---|---|---|---|---|---|

repetition number in this series | 0 | memorised on | scheduled repetition | ||||

scheduled repetition interval | last repetition or drill |

mean (or median) absolute error and mean absolute percent error are the most common metrics for forecasts of incidence (i.e., daily, weekly, or monthly incidence; peak incidence; or cumulative incidence)

* *

Question

mean (or median) absolute error and [...] are the most common metrics for forecasts of incidence (i.e., daily, weekly, or monthly incidence; peak incidence; or cumulative incidence)

Answer

mean absolute percent error

status | not learned | measured difficulty | 37% [default] | last interval [days] | |||
---|---|---|---|---|---|---|---|

repetition number in this series | 0 | memorised on | scheduled repetition | ||||

scheduled repetition interval | last repetition or drill |

mean (or median) absolute error and mean absolute percent error are the most common metrics for forecasts of incidence (i.e., daily, weekly, or monthly incidence; peak incidence; or cumulative incidence)

* *

Question

More realistic distributions for the length of the infectious period can be obtained by choosing p(t) to be a [...] probability density function [22–27]

Answer

Gamma

status | not learned | measured difficulty | 37% [default] | last interval [days] | |||
---|---|---|---|---|---|---|---|

repetition number in this series | 0 | memorised on | scheduled repetition | ||||

scheduled repetition interval | last repetition or drill |

More realistic distributions for the length of the infectious period can be obtained by choosing p(t) to be a gamma probability density function [22–27]

* *


Three important assumptions underlie the branching process approximation:

- Each infectious individual's behavior is independent of other infectious individuals.
- Each infectious individual has the same probability of recovery and the same probability of transmitting an infection.
- The susceptible population is sufficiently large.

Assumption (1) is reasonable if a small number of infectious individuals is introduced into a large homogeneously-mixed population (assumption (3)). Assumption (2) is also reasonable in a homogeneously-mixed population with constant transmission and recovery rates, b and g.

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

The branching process approximation is a CTMC, but near the disease-free equilibrium, the rates are linear (Table 2). Three important assumptions underlie the branching process approximation: Each infectious individual's behavior is independent of other infectious individuals. Each infectious individual has the same probability of recovery and the same probability of transmitting an infection. The susceptible population is sufficiently large. Assumption (1) is reasonable if a small number of infectious individuals is introduced into a large homogeneously-mixed population (assumption (3)). Assumption (2) is also reasonable in a homogeneously-mixed population with constant transmission and recovery rates, b and g.
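A small simulation of the embedded jump chain of this linear birth-death approximation; the rates b = 2, g = 1 and the outbreak threshold are assumptions for illustration. For b > g, branching process theory gives extinction probability (g/b)^{i0}.

```python
import random

# Toy branching-process (linear birth-death) simulation near the disease-free
# state: each infective independently transmits at rate b and recovers at rate g.
random.seed(2)
b, g, i0 = 2.0, 1.0, 1
reps, cap = 20000, 100  # treat reaching `cap` infectives as a major outbreak

extinct = 0
for _ in range(reps):
    i = i0
    while 0 < i < cap:
        # in the embedded jump chain, the next event is a transmission
        # with probability b / (b + g), else a recovery
        if random.random() < b / (b + g):
            i += 1
        else:
            i -= 1
    extinct += (i == 0)

ratio = extinct / reps
print(ratio)  # close to the theoretical (g / b)**i0 = 0.5
```

The `cap` cutoff stands in for the "sufficiently large susceptible population" assumption: once prevalence is large, the linear approximation (and near-certain outbreak) takes over.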

* *

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |

incidences). In contrast, other regions are easier to predict due to greater stability in observed historical patterns and substantial improvement of predictive models over historical averages. Moving forward, it will be important to understand whether regional differences in predictive skills are a reporting artifact or whether they reflect heterogeneities in influenza transmission dynamics. Demographic and environmental differences among regions, connectivity, and spatial extent could all affect predictive skills. This question could have practical implications because regions displaying consistently high predictive power could be used as sentinels for influenza surveillance.

* *

QlikView Components Instead of creating your own library of scripts, you may also want to consider QlikView Components (Qvc). Qvc is a free, open source script library. Its mission is to implement scripting best practices, improve the speed and quality of script development, and create a common ground between script developers. Qvc contains subroutines and functions to automate tasks of intermediate complexity, such as creating calendars, incremental loads, and the creation of link tables to support multiple fact tables. Qvc can be downloaded from http://code.google.com/p/qlikview-components/

status | not read | reprioritisations | ||
---|---|---|---|---|

last reprioritisation on | reading queue position [%] | |||

started reading on | finished reading on |
