Edited, memorised or added to reading queue

on 09-Dec-2019 (Mon)


Among the 27 population-based forecasting studies, 16 used weekly predictions of weekly incidence 1 or more weeks into the future in the validation (Table 3). Nine studies predicted the timing of the epidemic peak or incidence at the peak; all performed validation using at least some forecasts made at least 4 weeks before the actual peak [10–13,16–18,29,31]. The facility-based forecasting studies used 1-step-ahead [37–39] or n-step-ahead [40] predictions of visit counts over step sizes of 1 day [40] to 1 month [39]. The regional or global pandemic spread forecasting studies used early data from the 2009 influenza A(H1N1)pdm09 pandemic to predict national-level outcomes across countries, including pandemic arrival, peak incidence, and time of peak.
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

pdf

cannot see any pdfs




The branching process approximation is itself a CTMC, but one whose rates are linear near the disease-free equilibrium (Table 2).

Three important assumptions underlie the branching process approximation:

  1. Each infectious individual's behavior is independent of that of other infectious individuals. This is reasonable if a small number of infectious individuals is introduced into a large, homogeneously mixed population (assumption 3).
  2. Each infectious individual has the same probability of recovery and the same probability of transmitting an infection. This is reasonable in a homogeneously mixed population with constant transmission and recovery rates, b and g.
  3. The susceptible population is sufficiently large.
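As an illustration (not from the source), the linear-rate approximation can be simulated directly: near the disease-free equilibrium each infectious individual independently transmits at rate b and recovers at rate g, so the embedded jump chain is a biased random walk, and the take-off probability from a single introduction is 1 − g/b when b > g. A minimal sketch with illustrative rates:

```python
import random

def branching_outbreak(i0=1, b=0.3, g=0.1, max_infected=500, rng=random):
    """Simulate the linear birth-death branching process that approximates
    an SIR epidemic near the disease-free equilibrium: each infectious
    individual independently transmits at rate b and recovers at rate g.
    Only the embedded jump chain is needed to decide extinction vs take-off.
    Returns True if the outbreak takes off (reaches max_infected)."""
    i = i0
    while 0 < i < max_infected:
        # With linear rates, the next event is a transmission with
        # probability b / (b + g), independent of i.
        if rng.random() < b / (b + g):
            i += 1   # transmission event
        else:
            i -= 1   # recovery event
    return i >= max_infected

rng = random.Random(1)
runs = [branching_outbreak(rng=rng) for _ in range(2000)]
# Branching-process theory: P(take-off) = 1 - g/b = 2/3 for these rates.
print(sum(runs) / len(runs))
```

The take-off fraction should be close to 1 − g/b ≈ 0.67, matching the classical extinction probability (g/b)^i0 of the branching process.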

Technologies that translate raw, unprocessed data into structured formats would be particularly useful. For instance, software could extract data from line lists of cases or clinical notes in electronic health records, or convert data stored in non-standard formats into machine-readable data. Digitizing handwritten text reliably, quickly and securely from clinical or epidemiological records will be a persistent need for the foreseeable future.

Over the past several years, academic research on infectious disease forecasting has grown and models have successfully generated predictions for pathogens such as influenza[19–2], dengue[13], Zika[22], and Ebola[2]. But scaling academic research to support public health decision-makers in real time has received little attention and relatively scarce resources.

In this paper, an epidemic forecast generated by a model- or data-driven approach is quantified based on epidemiologically relevant features, which we refer to as Epi-features. Further, the accuracy of a model's estimate of a particular Epi-feature is quantified by evaluating its error with respect to the Epi-features extracted from the ground truth. This is enabled by functions that capture their dissimilarity, which we refer to as error measures.

In the case of count data, the predictive distribution is discrete, which means the PIT is no longer uniform under the hypothesis of an ideal forecast.
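A small sketch (not from the source) makes the point concrete: for a discrete predictive distribution F, the plain PIT F(y) can take only finitely many values, while the standard fix, the randomized PIT drawn uniformly on (F(y−1), F(y)], is uniform again under an ideal forecast. A Poisson distribution is used here purely as an illustrative count distribution:

```python
import math, random

def pois_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def pois_cdf(k, lam):
    # CDF of a Poisson(lam); returns 0 for k < 0 by convention
    return sum(pois_pmf(j, lam) for j in range(k + 1)) if k >= 0 else 0.0

def pois_sample(lam, rng):
    # inversion sampling from the Poisson pmf
    u, k, c = rng.random(), 0, pois_pmf(0, lam)
    while u > c:
        k += 1
        c += pois_pmf(k, lam)
    return k

rng = random.Random(0)
lam = 3.0
ys = [pois_sample(lam, rng) for _ in range(20000)]

# Plain PIT F(y): piles up on a handful of discrete values, so it cannot
# be Uniform(0,1) even though the forecast distribution is exactly right.
plain = [pois_cdf(y, lam) for y in ys]
# Randomized PIT: u ~ Uniform(F(y-1), F(y)) restores uniformity for an
# ideal forecast (a standard remedy for count data).
rand_pit = [pois_cdf(y - 1, lam) + rng.random() * pois_pmf(y, lam) for y in ys]

print(len(set(round(p, 6) for p in plain)))   # only a handful of distinct values
print(sum(rand_pit) / len(rand_pit))          # close to 0.5, as for a uniform
```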

#read
Few research funding agencies provide substantial and sustained support for this type of translational work, despite a strong track record of research productivity emerging from the CDC FluSight challenge and other governmental forecasting challenges[28]. Nor have donor foundations shown leadership in this crucial area of epidemic response. If not provided with sufficient resources, public health will remain decades behind most other sectors in its use of advanced analytics.

#read
Forecasting results must be communicated effectively to ensure they produce actionable insights. Visualizations play a key role. Academic groups have built data visualization tools to communicate forecasts[29], but these largely rely on customized code. Analysts who develop forecast models typically have limited time to spend on visualization and lack advanced design skills.

decision makers incorporated information from infectious disease models, with early forecasts indicating that incidence would continue to grow rapidly unless aggressive interventions were implemented. For example, a forecast generated by the CDC predicted up to 1.4 million Ebola cases with no additional interventions or changes in community behavior[5]. These forecasts likely contributed to the acceleration of the international response and provided guidance for how resources might be effectively deployed.

the integration of those analyses into the decision-making cycle for the Ebola 2014–2016 epidemic was not seamless, a pattern repeated across many recent outbreaks, including Zika[6].

Modeling and outbreak data analysis efforts typically occur in silos with limited communication of methods and data between model developers and end users. Modeling “cross talk” across stakeholders within and between countries is also typically limited, often occurring within a landscape of legal and ethical uncertainty. Specifically, the ethics of performing research using surveillance and health data[7], limited knowledge of what types of questions models can help inform, data sharing restrictions[8], and the incentive in academia to quickly publish modeling results in peer-reviewed journals contribute to a complex collaborative environment with different and sometimes conflicting stakeholder goals and priorities.

This new track of outbreak science describes the functional use of models, clinical knowledge, laboratory results, data science, statistics, and other advanced analytical methods to specifically support public health decision making between and during outbreak threats. Outbreak scientists work with decision makers to turn outbreak data into actionable information for decisions about how to anticipate the course of an outbreak, allocate scarce resources, and prioritize and implement public health interventions.

Here, we make three specific recommendations to get the most out of modeling efforts during outbreaks and epidemics. Together these recommendations constitute the foundation of an integrative field that is “outbreak science”:

we therefore recommend that epidemic modeling capability be enhanced and honed not during rapidly-evolving public health emergencies, but rather between major epidemics.

Currently, limited information is available on how those on the frontlines of public health perceive and use models in epidemic decision-making. Understanding the functional relationship between the public health end-users and model developers is critical to improving capacity during outbreaks.

Developing and implementing epidemic models under the “fog of war” is an enterprise far removed from the controlled conventional academic setting of epidemiology and biostatistics.

#reading
The goal of this paper is to demonstrate how to apply the Epi-features and error measures to the output of a forecasting algorithm to evaluate its performance and compare it with other methods.

#reading

Definitions of different Epidemiologically Relevant features (Epi-features)

Peak value: Maximum number of new infected cases in a given week in the epidemic time-series

Peak time: The week when peak value is attained

Total attack rate: Fraction of individuals ever infected in the whole population

Age-specific attack rate: Fraction of individuals ever infected belonging to a specific age window

First-take-off-(value): Sharp increase in the number of new infected case counts over a few consecutive weeks

First-take-off-(time): The start time of sudden increase in the number of new infected case counts

Intensity duration: The number of weeks (usually consecutive) where the number of new infected case counts is more than a specific threshold

Speed of epidemic: Rate at which the case counts approach the peak value

Start-time of disease season: Time at which the fraction of infected individuals exceeds a specific threshold
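Several of these Epi-features, and the error measures applied to them, can be computed directly from a weekly incidence series. A minimal sketch with made-up numbers (the function and data are illustrative, not from the paper):

```python
def epi_features(weekly_cases, population, threshold):
    """Extract a few of the Epi-features defined above from a weekly
    incidence time series (list of new-case counts per week).
    `threshold` is the case count defining the intensity duration."""
    peak_value = max(weekly_cases)
    peak_time = weekly_cases.index(peak_value)           # week index of the peak
    total_attack_rate = sum(weekly_cases) / population   # fraction ever infected
    intensity_duration = sum(1 for c in weekly_cases if c > threshold)
    return {"peak_value": peak_value, "peak_time": peak_time,
            "total_attack_rate": total_attack_rate,
            "intensity_duration": intensity_duration}

# A simple error measure: absolute error of each Epi-feature of the
# forecast with respect to the Epi-features of the ground truth.
truth    = [2, 5, 12, 30, 55, 40, 20, 8, 3]
forecast = [3, 6, 15, 40, 45, 35, 18, 7, 2]
ft = epi_features(truth, 10_000, threshold=10)
ff = epi_features(forecast, 10_000, threshold=10)
errors = {k: abs(ff[k] - ft[k]) for k in ft}
print(errors)
```

Other error measures (squared error, relative error) drop in by replacing the `abs` in the dictionary comprehension.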


Flashcard 4667461733644

Question
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint
Answer
[default - edit me]

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

pdf

cannot see any pdfs







Flashcard 4667844988172

Question
[default - edit me]
Answer
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.


For permission to photocopy or use material electronically from this work,

Flashcard 4667874086156

Question
[...] is an essential property of a scoring rule that encourages honest and coherent predictions (Bröcker and Smith, 2007; Gneiting and Raftery, 2007).
Answer
Propriety


Parent (intermediate) annotation

Open it
Propriety is an essential property of a scoring rule that encourages honest and coherent predictions (Bröcker and Smith, 2007; Gneiting and Raftery, 2007).

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 4667875659020

Question
Propriety is an essential property of a scoring rule that encourages [...] and coherent predictions (Bröcker and Smith, 2007; Gneiting and Raftery, 2007).
Answer
honest


Flashcard 4667877231884

Question
Propriety is an essential property of a scoring rule that encourages honest and [...] predictions (Bröcker and Smith, 2007; Gneiting and Raftery, 2007).
Answer
coherent


Flashcard 4667881426188

Tags
#read
Question
The [...] part of a spike-and-slab prior governs the probability of a given variable being chosen for the model (i.e., having a non-zero coefficient).
Answer
spike



Original toplevel document

Sorry ARIMA, but I’m Going Bayesian | Stitch Fix Technology – Multithreaded
e, by using priors on the regressor coefficients, the model incorporates uncertainties of the coefficient estimates when producing the credible interval for the forecasts. As the name suggests, <span>spike and slab priors consist of two parts: the spike part and the slab part. The spike part governs the probability of a given variable being chosen for the model (i.e., having a non-zero coefficient). The slab part shrinks the non-zero coefficients toward prior expectations (often zero). To see how this works, let denote a vector of 1s and 0s where a value of 1 indicates that the variable is selected (non-zero coefficient). We can factorize the spike and slab prior as







Flashcard 4667882474764

Tags
#read
Question
The spike part of a spike-and-slab prior [...].
Answer
governs the probability of a given variable being chosen for the model (i.e., having a non-zero coefficient)


Flashcard 4667884571916

Tags
#read
Question
The [...] part of a spike-and-slab prior shrinks the non-zero coefficients toward prior expectations (often zero).

Answer
slab


Flashcard 4667885620492

Tags
#read
Question
The slab part of a spike-and-slab prior [...].

Answer
shrinks the non-zero coefficients toward prior expectations (often zero)


Flashcard 4667887717644

Question
For [...] to photocopy or use material electronically from this work,
Answer
permission


INVESTIGATIONS
⦁ Antinuclear antibodies (positive in >95% of cases):
  • Anticentromere autoantibodies (fewer than half of patients with limited disease)
  • Anti-topoisomerase (half of patients with diffuse disease)
  • Antinucleolar antibodies (fewer than half of patients with diffuse disease)
⦁ Electrocardiogram (ECG) – arrhythmias, conduction defects
⦁ Echocardiogram – to assess fibrosis, effusions
⦁ Thoracic high-resolution computed tomography – to assess pulmonary fibrosis
⦁ Pulmonary function tests
⦁ Biopsy and histological examination if diagnosis unclear

An SSM is a stochastic process that makes use of a latent-variable representation to describe dynamical phenomena (Schön and Lindsten, 2017). It has two components: a latent process, denoted {Xt}t≥1, representing the underlying dynamics; and an observed process, denoted {Yt}t≥1.

In this thesis, the state process is assumed Markovian over time, hence the subset of SSMs analysed can also be classified in the category of partially observed Markov processes (POMPs) (King, Nguyen, and Ionides, 2016) or hidden Markov models (HMMs) (Churchill, 2005). From now onwards, unless otherwise specified, SSM will refer to Markovian SSM.

For an SSM: the state process X is often called the dynamic parameter, θ is often referred to as the static parameter(s), and the observed process Y is called the measurements.
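A minimal illustration of this terminology on a toy linear-Gaussian SSM (the model and parameter values are assumptions for the sketch, not from the thesis): θ = (a, sx, sy) are the static parameters, X is the latent dynamic process, and Y the measurements.

```python
import random

def simulate_ssm(T, a=0.9, sx=0.5, sy=1.0, seed=0):
    """Simulate a toy Markovian linear-Gaussian SSM:
        X_t = a * X_{t-1} + N(0, sx^2)   (state equation, latent process)
        Y_t = X_t + N(0, sy^2)           (observation equation, measurements)
    theta = (a, sx, sy) plays the role of the static parameter."""
    rng = random.Random(seed)
    x, xs, ys = 0.0, [], []
    for _ in range(T):
        x = a * x + rng.gauss(0.0, sx)   # latent dynamics X_t | X_{t-1}
        y = x + rng.gauss(0.0, sy)       # measurement Y_t | X_t
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = simulate_ssm(100)
print(len(xs), len(ys))   # one latent state and one measurement per time step
```

Only Y is observed; all the inference problems below (filtering, smoothing, prediction, static-parameter inference) amount to recovering X or θ from Y.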


An SSM can also be represented by a graphical model, a probabilistic model in which a graph G = (V, E) represents the conditional independence structure (edges E) between random variables (r.v.s) (nodes V).


State inference problems can take any of the following forms (Lindsten, 2013):

  • the distribution of the state process at t conditionally on the data up until t, is called marginal filtering;
  • the distribution of the whole state process up until t conditionally on the data up until t, is called joint filtering and is often constructed sequentially;
  • the distribution of the state process for future intervals until time t + s, conditionally on the data up until t, is called prediction;
  • the distribution of the whole state process up until T conditionally on the full data, is called joint smoothing;
  • the distribution of the state process at t conditionally on the full data, is called marginal smoothing.

This thesis, and this chapter, mainly addresses filtering problems. Such filtering can be approached by exploiting Bayes’ theorem, conditional independence, and the Markovianity of the system.



Most of the results reported below can be found in common textbooks on MC and/or SMC methods (Brooks et al., 2011; Robert and Casella, 2013) as well as in key papers (Andrieu, Doucet, and Holenstein, 2010; Arulampalam et al., 2002; Gilks and Berzuini, 2001).

All distributions in this section are assumed conditional on θ, the static parameter, and for ease of presentation, this dependence is omitted, denoting p(xt|θ) by p(xt), p(xt | y1:t, θ) by p(xt | y1:t), etc.

SMC algorithms, such as the sequential importance sampler, the sequential importance re-sampler, and the BPF, provide a method to approximate the likelihood, i.e. the distribution of the data conditional on a parameter value θ.

Since the focus of this section is the inference of the static parameter, θ is reintroduced in the notation below.

Sequential importance sampling (Arulampalam et al., 2002) can be used to approximate the distribution of the hidden states conditionally on the data, by decomposing the problem into simpler, lower-dimensional, approximation steps.

#to-understand
To obtain samples from this target distribution, assume that, at time $t$, a weighted sample $\{x^{(n)}_{0:t-1}, w^{(n)}_{t-1}\}_{n=1}^{N}$ from the target distribution at $t-1$, $p(x_{0:t-1} \mid y_{1:t-1})$, is available. Letting $\delta_a(x)$ denote a Dirac point mass at $a$, the sample provides the following approximation to the target distribution at $t-1$:

$$\hat{p}(x_{0:t-1} \mid y_{1:t-1}) = \sum_{n=1}^{N} \delta_{x^{(n)}_{0:t-1}}(x)\, w^{(n)}_{t-1}$$

To propose values for the next approximation step, assume an importance distribution $q(x_{0:t} \mid y_{1:t})$ that is factorisable as follows:

$$q(x_{0:t} \mid y_{1:t}) = q_t(x_t \mid x_{t-1}, y_t)\, q(x_{0:t-1} \mid y_{1:t-1}) = q_t(x_t \mid x_{t-1}, y_t)\, q_{t-1}(x_{t-1} \mid x_{t-2}, y_{t-1})\, q(x_{0:t-2} \mid y_{1:t-2}) = \dots = q_0(x_0) \prod_{s=1}^{t} q_s(x_s \mid x_{s-1}, y_s)$$
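The excerpt cuts off before the consequence of this factorisation: the standard sequential importance sampling weight recursion, stated here for completeness with the same notation:

```latex
w^{(n)}_t \;\propto\; w^{(n)}_{t-1}\,
  \frac{p(y_t \mid x^{(n)}_t)\; p(x^{(n)}_t \mid x^{(n)}_{t-1})}
       {q_t(x^{(n)}_t \mid x^{(n)}_{t-1}, y_t)}
```

Each particle's weight is updated by the ratio of the target increment (observation density times state transition) to the importance increment, so only one low-dimensional proposal is needed per time step.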


With increasing dimensionality of the target distribution, however, the weights degenerate: a small number of particles are assigned relatively large weights and most of the particles have weight zero. To overcome weight degeneracy, resampling steps can be inserted in order to rejuvenate the sequential sample.
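Weight degeneracy is commonly monitored through the effective sample size (ESS), with resampling triggered when it drops. A minimal sketch (illustrative, not the thesis's code) of the ESS diagnostic and multinomial resampling:

```python
import random

def ess(weights):
    """Effective sample size: close to N for even weights,
    close to 1 for a degenerate weight vector."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

def resample(particles, weights, rng=random):
    """Multinomial resampling: draw N particles with probability
    proportional to their weights, then reset to uniform weights."""
    n = len(particles)
    new = rng.choices(particles, weights=weights, k=n)
    return new, [1.0 / n] * n

even = [1.0] * 100
degenerate = [1.0] + [1e-12] * 99
print(ess(even), ess(degenerate))   # roughly N vs roughly 1
```

In practice the resampling step is applied only when ESS falls below some fraction of N (e.g. N/2), since resampling itself adds Monte Carlo noise.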

The BPF is a sequential importance re-sampling algorithm where specific choices of the importance distribution (and hence of the weights) are made. This algorithm was first introduced by Stewart and McCarthy Jr (1992) and by Gordon, Salmond, and Smith (1993), where it first took its current name. The BPF targets the joint filtering distribution p(x0:t | y1:t).

The key idea of the BPF algorithm is to generate a set of N particles and apply three steps sequentially (over times t = 1, 2, . . . ):

  1. resample to obtain an equally-weighted sample from the target distribution at t − 1
  2. propagate this sample from the importance distribution, chosen to be the state equation of the SSM
  3. weight the proposed sample according to the target and importance density
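The three steps can be sketched end-to-end on a toy linear-Gaussian SSM (the model, parameter values, and stand-in data below are assumptions for illustration, not from the thesis):

```python
import math, random

def bootstrap_pf(ys, n, a=0.9, sx=0.5, sy=1.0, seed=0):
    """Bootstrap particle filter for a toy linear-Gaussian SSM
    (X_t = a X_{t-1} + N(0, sx^2), Y_t = X_t + N(0, sy^2)), following
    the three steps above. Returns the filtered means and the
    log-likelihood estimate."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, sx) for _ in range(n)]
    loglik, means = 0.0, []
    for y in ys:
        # 2. propagate through the state equation (the BPF's importance dist.)
        xs = [a * x + rng.gauss(0.0, sx) for x in xs]
        # 3. weight by the observation density p(y_t | x_t)
        ws = [math.exp(-0.5 * ((y - x) / sy) ** 2) for x in xs]
        tot = sum(ws)
        # running SMC estimate of the log-likelihood log p(y_1:t)
        loglik += math.log(tot / n) - 0.5 * math.log(2 * math.pi * sy * sy)
        means.append(sum(w * x for w, x in zip(ws, xs)) / tot)
        # 1. resample an equally-weighted sample for the next step
        xs = rng.choices(xs, weights=ws, k=n)
    return means, loglik

rng = random.Random(1)
ys = [rng.gauss(0.0, 1.0) for _ in range(50)]   # stand-in observations
means, ll = bootstrap_pf(ys, n=500)
print(len(means), ll)
```

Because the state equation is the importance distribution, the weight in step 3 reduces to the observation density alone, which is what makes the BPF so simple to implement.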




the BPF is simply a sequential importance re-sampler having the state equation as importance distribution

Algorithm 2: Sequential Importance Re-sampling

Algorithm 3: Bootstrap Particle Filter

There are many flavours that can be added to enhance the basic BPF, such as the use of auxiliary variables (Pitt and Shephard, 1999), which improves the matching between the importance and target distributions.

Methods belonging to the non-Bayesian literature aim at obtaining the maximum-likelihood estimate and provide point and interval estimates. These methods include multiple iterated filtering (Ionides, Bretó, and King, 2006), as well as grid-search methods.

Pseudo-marginal algorithms aim at exploring only the posterior distribution of the parameter, marginally of the distribution of the states, and are based on the classical Metropolis–Hastings (MH) algorithm (Hastings, 1970; Metropolis et al., 1953). However, unlike the original MH algorithm, here the unnormalised posterior distribution in the acceptance ratio is approximated by the product of the prior and an SMC approximation of the likelihood.
Two algorithms are employed throughout this thesis: grouped independence Metropolis Hastings (GIMH) (Beaumont, 2003) and Monte Carlo within Metropolis (MCWM) (Andrieu and Roberts, 2009).
In GIMH, at iteration i, when a new value θ' is proposed, an SMC algorithm is run to estimate the likelihood p(y | θ'), which is plugged into the numerator of the acceptance ratio together with the prior and proposal density. The denominator is composed of the previously retained likelihood estimate for the current parameter, p(y | θi), and the respective prior and proposal density. Upon acceptance, the proposed parameter θ' and its estimated likelihood p(y | θ') are retained; upon rejection, the old parameter θi and its likelihood p(y | θi) are retained.
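The accept/reject logic above can be sketched as follows (a minimal illustration, assuming a symmetric random-walk proposal so that the proposal densities cancel; `smc_loglik` is a hypothetical stand-in for one run of an SMC likelihood estimator such as the BPF):

```python
import numpy as np

def gimh(logprior, smc_loglik, theta0, n_iter, step, rng=None):
    """GIMH sketch: the likelihood estimate of the current parameter is
    stored and reused in the denominator of the acceptance ratio; only the
    proposed parameter triggers a fresh SMC run."""
    rng = rng or np.random.default_rng(0)
    theta = float(theta0)
    ll = smc_loglik(theta, rng)        # estimate once for the initial value
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()   # symmetric proposal
        ll_prop = smc_loglik(prop, rng)               # fresh SMC run for theta'
        log_alpha = (ll_prop + logprior(prop)) - (ll + logprior(theta))
        if np.log(rng.uniform()) < log_alpha:
            theta, ll = prop, ll_prop   # retain theta' and its estimate
        # on rejection, theta and its stored estimate ll are kept as-is
        chain.append(theta)
    return np.array(chain)
```

With an exact (noise-free) log-likelihood this reduces to plain random-walk Metropolis-Hastings, which is a quick way to sanity-check the sampler.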
grouped independence Metropolis Hastings (GIMH)
Monte Carlo within Metropolis (MCWM)
Flashcard 4669603712268

Question
[...] (GIMH)
Answer
grouped independence Metropolis Hastings

Flashcard 4669605285132

Question
[...] (MCWM)
Answer
Monte Carlo within Metropolis

Instead of storing the likelihood estimate and reusing it, the MCWM algorithm re-approximates the likelihood of the current parameter θi every time the acceptance ratio is computed.
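A sketch of the corresponding loop (same hedged conventions as above: symmetric random-walk proposal, `smc_loglik` a hypothetical stand-in for one SMC run); note the two SMC calls per iteration and that no estimate is carried between iterations:

```python
import numpy as np

def mcwm(logprior, smc_loglik, theta0, n_iter, step, rng=None):
    """MCWM sketch: unlike GIMH, the likelihood of the current parameter is
    re-estimated by a fresh SMC run at every iteration."""
    rng = rng or np.random.default_rng(0)
    theta = float(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        ll_cur = smc_loglik(theta, rng)   # fresh estimate for current value
        ll_prop = smc_loglik(prop, rng)   # fresh estimate for proposal
        log_alpha = (ll_prop + logprior(prop)) - (ll_cur + logprior(theta))
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
        chain.append(theta)
    return np.array(chain)
```

Re-estimating the current value's likelihood avoids the chain getting "stuck" on a fortuitously overestimated likelihood, at the cost of doubling the SMC work per iteration.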
GIMH was proved to be an exact algorithm by Andrieu and Roberts (2009): it targets the exact posterior distribution.
Despite MCWM being biased for small approximation sizes N, this bias decreases and becomes irrelevant as N increases (McKinley et al., 2014).
If the likelihood approximation is precise enough, GIMH and MCWM perform equally well.
the works purposefully represent themselves as first-person accounts of an event of significance from which an audience is supposed to learn some important information, whether the "truth" of historical events, a religious moral, or simply some lesson which was thought useful to those hearing the tales
Truths Wrapped in Fiction: Mesopotamian Naru Literature - Ancient History Encyclopedia
The word naru is used as a name for various objects, originally boundary stones, memorial stones and monuments. Two sorts of inscribed objects received the designation naru at the dawn of the second millennium: tablets accompanying presents and tablets used for building inscriptions
At the end of the third millennium the naru chiefly played a part in religious transactions; at the beginning of the second millennium it was to become not only actually but also symbolically the bearer of memory. (90)
He presents himself as having been born the illegitimate son of a priestess, set adrift on the Euphrates River soon after his birth, rescued by a gardener and then, through the help of the goddess Inanna, rising to become king of Akkad