Edited, memorised or added to reading list

on 03-Dec-2019 (Tue)


#read
For example, time series analysis is frequently used to do demand forecasting for corporate planning, which requires an understanding of seasonality and trend, as well as quantifying the impact of known business drivers. But herein lies the problem: you rarely have sufficient historical data to estimate these components with good precision. And, to make matters worse, validation is more difficult for time series models than it is for classifiers, and your audience may not be comfortable with the embedded uncertainty.

Sorry ARIMA, but I’m Going Bayesian | Stitch Fix Technology – Multithreaded




#read
A different approach would be to use a Bayesian structural time series model with unobserved components. This technique is more transparent than ARIMA models and deals with uncertainty in a more elegant manner. It is more transparent because its representation does not rely on differencing, lags and moving averages. You can visually inspect the underlying components of the model. It handles uncertainty in a better way because you can quantify the posterior uncertainty of the individual components, control the variance of the components, and impose prior beliefs on the model. Last, but not least, any ARIMA model can be recast as a structural model.

Sorry ARIMA, but I’m Going Bayesian | Stitch Fix Technology – Multithreaded




#read
Note that the regressor coefficients, seasonality and trend are estimated simultaneously, which helps avoid strange coefficient estimates due to spurious relationships (similar in spirit to Granger causality).

Sorry ARIMA, but I’m Going Bayesian | Stitch Fix Technology – Multithreaded




General BSTS model: $y_t = \mu_t + x_t^T\beta + S_t + \epsilon_t$. Here $x_t$ denotes a set of regressors, $S_t$ represents seasonality, and $\mu_t$ is the local level term. The local level term defines how the latent state evolves over time and is often referred to as the unobserved trend.
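The generative structure just described can be sketched as a tiny simulation. This is a minimal sketch in Python rather than the post's R; the function name, parameter values, and quarterly seasonal pattern are all invented for illustration.

```python
import random

def simulate_bsts_path(n=100, beta=1.5, season=(0.0, 2.0, -1.0, -1.0),
                       sigma_obs=0.5, sigma_level=0.2, seed=7):
    """Draw one path from y_t = mu_t + S_t + beta * x_t + eps_t,
    where the local level mu_t evolves as a random walk (the
    'unobserved trend')."""
    rng = random.Random(seed)
    mu = 0.0                               # local level at time 0
    ys, xs = [], []
    for t in range(n):
        x = rng.gauss(0.0, 1.0)            # a single regressor
        s = season[t % len(season)]        # fixed quarterly pattern
        ys.append(mu + s + beta * x + rng.gauss(0.0, sigma_obs))
        xs.append(x)
        mu += rng.gauss(0.0, sigma_level)  # transition: random walk step
    return ys, xs

ys, xs = simulate_bsts_path()
```

Because each component is simulated separately, one can plot `mu`, `season`, and `beta * x` individually, which is exactly the "visually inspect the underlying components" transparency the excerpt describes.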

Sorry ARIMA, but I’m Going Bayesian | Stitch Fix Technology – Multithreaded




#read
Another advantage of Bayesian structural models is the ability to use spike-and-slab priors. This provides a powerful way of reducing a large set of correlated variables into a parsimonious model, while also imposing prior beliefs on the model. Furthermore, by using priors on the regressor coefficients, the model incorporates uncertainties of the coefficient estimates when producing the credible interval for the forecasts.

Sorry ARIMA, but I’m Going Bayesian | Stitch Fix Technology – Multithreaded




#read
Spike-and-slab priors consist of two parts: the spike part and the slab part. The spike part governs the probability of a given variable being chosen for the model (i.e., having a non-zero coefficient). The slab part shrinks the non-zero coefficients toward prior expectations (often zero).
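As a toy illustration of the two parts, one draw from such a prior might look like the sketch below (Python rather than R; the inclusion probability and slab standard deviation are made-up values):

```python
import random

def spike_and_slab_draw(p, inclusion_prob=0.2, slab_sd=1.0, seed=42):
    """One draw from a spike-and-slab prior over p coefficients.
    Spike: gamma_j ~ Bernoulli(inclusion_prob) decides whether
    variable j enters the model.  Slab: included coefficients get a
    Normal(0, slab_sd) value; excluded ones are exactly zero."""
    rng = random.Random(seed)
    gamma = [1 if rng.random() < inclusion_prob else 0 for _ in range(p)]
    beta = [rng.gauss(0.0, slab_sd) if g else 0.0 for g in gamma]
    return gamma, beta

gamma, beta = spike_and_slab_draw(10)
```

Averaging `gamma` over many posterior draws gives each variable's inclusion probability, which is how the prior yields a parsimonious model from a large set of candidate regressors.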

Sorry ARIMA, but I’m Going Bayesian | Stitch Fix Technology – Multithreaded




#read

Bayesian structural time series models possess three key features for modeling time series data:

  • Ability to incorporate uncertainty into our forecasts so we quantify future risk
  • Transparency, so we can truly understand how the model works
  • Ability to incorporate outside information for known business drivers when we cannot extract the relationships from the data at hand

Sorry ARIMA, but I’m Going Bayesian | Stitch Fix Technology – Multithreaded




Bayesian structural time series models are a widely useful class of time series models, known in various literatures as "structural time series," "state space models," "Kalman filter models," and "dynamic linear models," among others.

Fitting Bayesian structural time series with the bsts R package




bsts can also be configured for specific tasks by an analyst who knows whether the goal is short term or long term forecasting, whether or not the data are likely to contain one or more seasonal effects, and whether the goal is actually to fit an explanatory model, and not primarily to do forecasting at all.

Fitting Bayesian structural time series with the bsts R package




A structural time series model is defined by two equations. The observation equation relates the observed data $y_t$ to a vector of latent variables $\alpha_t$ known as the "state": $$ y_t = Z_t^T\alpha_t + \epsilon_t. $$ The transition equation describes how the latent state evolves through time: $$ \alpha_{t+1} = T_t \alpha_t + R_t \eta_t. $$ The error terms $\epsilon_t$ and $\eta_t$ are Gaussian and independent of everything else.

Fitting Bayesian structural time series with the bsts R package




The local linear trend is a better model than the local level model if you think the time series is trending in a particular direction and you want future forecasts to reflect a continued increase (or decrease) seen in recent observations. Whereas the local level model bases forecasts around the average value of recent observations, the local linear trend model adds in recent upward or downward slopes as well. As with most statistical models, the extra flexibility comes at the price of extra volatility.
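The contrast between the two models shows up most clearly in their point forecasts. A minimal sketch (function names and numbers are invented for illustration, and real forecasts would also carry widening uncertainty bands):

```python
def local_level_forecast(level, h):
    """Local level model: the h-step-ahead point forecast is flat
    at the last estimated level."""
    return [level] * h

def local_linear_trend_forecast(level, slope, h):
    """Local linear trend model: forecasts extend the last estimated
    slope, so recent upward or downward movement carries forward."""
    return [level + slope * k for k in range(1, h + 1)]

print(local_level_forecast(10.0, 3))              # [10.0, 10.0, 10.0]
print(local_linear_trend_forecast(10.0, 0.5, 3))  # [10.5, 11.0, 11.5]
```

The extra slope state is the "extra flexibility" the excerpt mentions; because the slope itself follows a random walk, its estimation error compounds over the horizon, which is where the extra forecast volatility comes from.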

Fitting Bayesian structural time series with the bsts R package




The uncertainty surrounding assumptions about the distributions of the latent and infectious periods may be greater than any uncertainty that arises from noise in the data.




Flashcard 4653148933388

Question
For infectious disease modelling, the uncertainty surrounding [...] may be greater than any uncertainty that arises from noise in the data
Answer
assumptions about the distributions of the latent and infectious periods








Flashcard 4653149981964

Question
the uncertainty surrounding assumptions about the distributions of the latent and infectious periods may be greater than any uncertainty that arises from [...]
Answer
noise in the data








We consider the evaluation of probabilistic forecasts, or predictive distributions, for count data.




Our focus is on the low count situation in which continuum approximations fail; however, our results apply to high counts and rates as well




Gneiting, Balabdaoui, and Raftery (2007) contend that the goal of probabilistic forecasting is to maximize the sharpness of the predictive distributions subject to calibration.




Calibration refers to the statistical consistency between the probabilistic forecasts and the observations, and is a joint property of the predictive distributions and the observations.




Sharpness refers to the concentration of the predictive distributions, and is a property of the forecasts only.




For count data, a probabilistic forecast is a predictive probability distribution, P, on the set of the nonnegative integers.




Simplicity, generality, and interpretability are attractive features of the tools discussed by Czado et al. (2009); they apply in Bayesian or classical and parametric or nonparametric settings, and do not require models to be nested, nor be related in any way.




When different models are based on different data, a comparison based on the AIC or related model fit criteria is not feasible.




In Section 2, we introduce tools for calibration and sharpness checks, among them a nonrandomized version of the probability integral transform (PIT) that is tailored to count data, and the marginal calibration diagram. Section 3 discusses the use of scoring rules as omnibus performance measures. We stress the importance of propriety (Gneiting and Raftery, 2007), relate to classical measures of predictive performance, and identify the predictive deviance as a variant of the proper logarithmic score. Section 4 turns to a crossvalidation study, in which we apply these tools to critique count regression models for pharmaceutical and biomedical patents. The epidemiological case study in Section 5 evaluates the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. The article closes with a discussion in Section 6.




Dawid (1984) proposed the use of the PIT for calibration checks. This is simply the value that the predictive CDF attains at the observation. If the observation is drawn from the predictive distribution—an ideal and desirable situation—and the predictive distribution is continuous, the PIT has a standard uniform distribution




The PIT histogram is typically used informally as a diagnostic tool; formal tests can also be employed though they require care in interpretation (Hamill, 2001; Jolliffe, 2007).




Deviations from uniformity in the PIT hint at reasons for forecast failures and model deficiencies. U-shaped histograms indicate underdispersed predictive distributions, hump or inverse-U shaped histograms point at overdispersion, and skewed histograms occur when central tendencies are biased.




In the case of count data, the predictive distribution is discrete, which means the PIT is no longer uniform under the hypothesis of an ideal forecast.








Constructing the nonrandomized PIT histogram: pick the number of bins, $J$; compute $f_j = \bar{F}(j/J) - \bar{F}((j-1)/J)$, where $\bar{F}(u) = \frac{1}{n}\sum_{i=1}^{n} F^{(i)}(u)$ is the mean PIT over the forecast cases, for equally spaced bins $j = 1, \ldots, J$; plot a histogram with height $f_j$ for bin $j$; and check for uniformity. Under the hypothesis of calibration, that is, if $x^{(i)} \sim P^{(i)}$ for all forecast cases $i = 1, \ldots, n$, it is straightforward to verify that $F(u)$ has expectation $u$, so that we expect uniformity. Principled guidelines for the selection of the number of bins remain to be developed; however, $J = 10$ or $J = 20$ are typical choices that lead to visually informative displays.




We now consider what Gneiting et al. (2007) refer to as marginal calibration. The idea is straightforward: If each observed count is a random draw from the respective probabilistic forecast, and if we aggregate over the individual predictive distributions, P(i) , we expect the resulting composite distribution and the histogram of the observed counts to be statistically compatible.




Sharpness refers to the concentration of the predictive distributions. In the context of prediction intervals, this can be rephrased simply: the shorter the intervals, the sharper, and the sharper the better, subject to calibration.




Prediction intervals for continuous predictive distributions are uniquely defined, and Gneiting et al. (2007) suggest tabulating their average width, or plotting sharpness diagrams, which can be used as a diagnostic tool.




Sharpness continues to be critical for count data; however, we have found these tools to be less useful for discrete predictive distributions, owing to the ambiguities in specifying prediction intervals. Our preferred way of addressing sharpness is indirect, via proper scoring rules; see below.




U-shaped PIT histograms indicate underdispersed predictive distributions




Hump or inverse-U shaped PIT histograms indicate overdispersed predictive distributions




Skewed PIT histograms indicate biased central tendencies of predictive distributions




Deviations from uniformity in the PIT hint at reasons for forecast failures and model deficiencies.




Flashcard 4654022921484

Question
U-shaped PIT histograms indicate [...] predictive distributions
Answer
underdispersed








Flashcard 4654134332684

Question
[...] PIT histograms indicate underdispersed predictive distributions
Answer
U-shaped








The 68000 is no more difficult to program than any of the 8-bit chips. It just has more depth and more capability than those chips. Sure, to get the full use of the 68000 you must understand advanced concepts such as frame pointers, supervisor mode, and memory management, but you don't have to use them.




The 68000 can be used for simple programs just as an 8-bit chip can. But if you have a complicated data structure or routine to program, the 68000 will make your life easier because it provides you with more tools for implementing such things. An 8-bit chip makes you do all the work with long sequences of simple instructions: the 68000 lets you use just a few, more powerful instructions. Many functions that took planning and programming on an 8-bit chip are reduced to a single, automatic operation on the 68000. This process appears in many technologies and is particularly strong in microelectronics. While the chips become more powerful, using them doesn't get more difficult (which is nice because we aren't getting smarter). The elemental functions of the chips just keep advancing. Multiplying 16-bit numbers or setting up a portion of the stack for a subroutine was a major programming task on a 4-bit microprocessor and required a




Flashcard 4654370524428

Question
[...] PIT histograms indicate overdispersed predictive distributions
Answer
Hump or inverse-U shaped








Flashcard 4654371573004

Question
Hump or inverse-U shaped PIT histograms indicate [...] predictive distributions
Answer
overdispersed








Flashcard 4654373670156

Question
[...] PIT histograms indicate biased central tendencies of predictive distributions
Answer
Skewed








Flashcard 4654375243020

Question
Skewed PIT histograms indicate [...] predictive distributions
Answer
biased central tendencies of








Scoring rules provide summary measures in the evaluation of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and the observation.




Suppose, then, that the forecaster's best judgment is the predictive distribution Q. The forecaster has no incentive to predict any P ≠ Q, and is encouraged to quote her true belief, P = Q, if s(Q, Q) ≤ s(P, Q), (4) with equality if and only if P = Q. A scoring rule with this property is said to be strictly proper. If s(Q, Q) ≤ s(P, Q) for all P and Q, the scoring rule is said to be proper.




Propriety is an essential property of a scoring rule that encourages honest and coherent predictions (Bröcker and Smith, 2007; Gneiting and Raftery, 2007).




Strict propriety ensures that both calibration and sharpness are being addressed by the prediction (Winkler, 1996).




In fact, the 68000 is more than a 16-bit microprocessor. Many of its features handle 32 bits at a time. The 68000 family, which is also described in this book, is a set of chips that includes the 68008 (found in inexpensive home computers) and the 68020 (a full 32-bit chip that is found in super-minicomputers). Learn about the 68000 and you will know most of the details of these chips, too.




The logarithmic score, logs, is defined as logs(P, x) = −log px. This is the only proper scoring rule that depends on the predictive distribution P only through the probability mass px at the observed count (Good, 1952).




The Brier score, qs, is defined as qs(P, x) = −2px + ||p||²




The spherical score is defined as sphs(P, x) = −px / ||p||
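Assuming the predictive distribution is given as a finite pmf vector p over counts 0, 1, 2, … (the function names and toy numbers below are mine), the three score definitions above can be sketched as:

```python
import numpy as np

def log_score(p, x):
    # logarithmic score: -log p_x, the probability mass at the observed count
    return -np.log(p[x])

def brier_score(p, x):
    # Brier / quadratic score: -2 p_x + ||p||^2
    return -2 * p[x] + np.sum(p ** 2)

def spherical_score(p, x):
    # spherical score: -p_x / ||p||
    return -p[x] / np.sqrt(np.sum(p ** 2))

p = np.array([0.1, 0.3, 0.4, 0.15, 0.05])  # toy predictive pmf over counts 0..4
x = 2                                       # observed count
```

Because all three are proper, the expected score over draws x ~ q is minimized by forecasting p = q, which can be checked numerically by averaging any of these scores over simulated counts.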




[unknown IMAGE 4654394379532] #has-images
Definition of the ranked probability score [Epstein 1969]




The ranked probability score generalizes the absolute error, to which it reduces if P is a point forecast.
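Assuming P is given as a pmf vector over counts 0, 1, 2, … (the helper name is mine), the ranked probability score sums one squared CDF error per threshold, and for a point forecast it collapses to the absolute error:

```python
import numpy as np

def rps(p, x):
    # ranked probability score: sum over thresholds k of (P(X <= k) - 1{x <= k})^2
    cdf = np.cumsum(p)
    ks = np.arange(len(p))
    return float(np.sum((cdf - (x <= ks)) ** 2))

# a point forecast at 4, scored against an observed count of 1,
# reduces the RPS to the absolute error |1 - 4| = 3
point = np.zeros(6)
point[4] = 1.0
print(rps(point, 1))
```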




The scores introduced in this section are strictly proper, except that the ranked prob- ability score requires Q to have finite first moment for strict inequality in equation (4) to hold




there is a distinct difference between the ranked probability score and the other scores discussed in this section, in that the former blows up score differentials between competing forecasters in cases in which predicted and/or observed counts are unusually high (Candille and Talagrand, 2005)




Viewed as a scoring rule for probabilistic forecasts, the mean squared error of the predictive distribution's mean is proper, but not strictly proper (Gneiting and Raftery, 2007).




It has frequently been argued that the squared Pearson residual or normalized squared error score ought to be approximately one when averaged over the predictions (Carroll and Cressie, 1997; Liesenfeld et al., 2006).
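A simulated sanity check of this claim (the setup is my own): for counts that are truly Poisson(λ), scored against that same Poisson predictive (mean and variance both λ), the squared Pearson residuals (x − λ)²/λ average to roughly one:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 5.0
x = rng.poisson(lam, size=100_000)
# squared Pearson residual under the predictive mean/variance (both lam for Poisson)
pearson2 = (x - lam) ** 2 / lam
mean_p2 = pearson2.mean()  # close to 1 for a correctly specified predictive distribution
```

An over- or under-dispersed predictive pushes this average below or above one, which is what makes it a useful rough calibration check.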




There are some advanced details of the 68000 that this book doesn't attempt to cover, including such things as the exact timing of instructions, the interfacing of peripherals, and the algorithms for assembly language subroutines. Those subjects are better left to the original literature from the chip- maker or a book dedicated to this subject.




Within the appeal against the final judgment, a partial judgment may also be appealed (795 3). If an appeal (by leave) against a decision on a stay of proceedings is not filed immediately after it is given, the decision cannot be appealed later.




Flashcard 4659042454796

Question
In AWS, Application Load Balancers support intelligent routing (based on URL paths, http headers, etc), and based on these matching conditions send the requests to specific [...] [...] s of EC2 instances (or lambdas, non-EC2 servers by IP, etc).
Answer
Target Groups







Flashcard 4659044551948

Question
In AWS, when you first create an Elastic Load Balancer (either Application Load Balancer or Classic Load Balancer), you add a [...] [...] that will run against all the attached instances (or instances within target groups in case of ALBs) to see if each instance is OutOfService or InService.
Answer

Health Check

^^ the Health check will take attributes like a path (for e.g. "/index.html"), how often to check that endpoint/path, timeout settings etc








Flashcard 4659046649100

Question
In AWS, when you create an Elastic Load Balancer, one or more load balancer [...] s, each with a specific IP, are created in one or more subnets that you specify at ELB creation time.
Answer

Nodes

^^ If the ELB is internet-facing, the ELB's DNS will resolve to the external IPs of the nodes, so they can be reached from the internet; if the ELB is internal, the ELB's DNS will resolve to the internal IPs of the nodes, so they are only reachable from inside the VPC








Flashcard 4659048746252

Question
In AWS, when you create an Elastic Load Balancer, one or more load balancer nodes, each with a specific IP, are created in one or more subnets that you specify at ELB creation time. If the ELB is internet-facing, the ELB DNS will resolve to the [...] IPs of the nodes, so they can be reached from the internet, and if the ELB is internal, the ELB DNS will resolve to the [...] IPs of the nodes, so they are only reachable from inside the VPC <-- Two different occlusions
Answer
Public / Private







Flashcard 4659050843404

Question
In AWS, both internet-facing and internal load balancers route requests to your targets (e.g. EC2 instances) using [...] IP addresses, so your targets do not need [...] IP addresses to receive requests from an internal or an internet-facing load balancer. <-- Two different occlusions
Answer
Private / Public







Flashcard 4659052940556

Question
In AWS, for ELBs (Elastic Load Balancers), you would have an [...]-[...] load balancer for your web servers, while having an [...] load balancer for your database servers.
Answer
Internet Facing (or External) / Internal







Flashcard 4659055037708

Question
In AWS, if you get a 504 error it would indicate a problem with your [...] instead of the ELB (Elastic Load Balancer). <-- Bonus: think of what a 504 error is
Answer

[EC2 instance/application]

^^ 504 is gateway timeout








Flashcard 4659057134860

Question
In AWS, and networking/HTTP in general, what is the X-Forwarded-For header used for? Think of an example where you have a user request that goes from the user to your ELB to your EC2 instance.
Answer
The X-Forwarded-For header carries the IP of the original sender, so when the EC2 instance gets a request from the ELB, it knows the original user's/client's IP, not just the internal IP of the ELB forwarding the request.

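A minimal sketch of how a backend behind the ELB might read the header (the helper name `client_ip` is hypothetical): the header value is a comma-separated chain, and the left-most entry is the original client:

```python
def client_ip(headers, peer_ip):
    # X-Forwarded-For is "client, proxy1, proxy2, ..."; the left-most
    # entry is the original client. Trust it only when the direct peer
    # (peer_ip) is your own load balancer, since clients can forge it.
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    return peer_ip

print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.5"}, "10.0.0.5"))
```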






Flashcard 4659059232012

Question
In AWS, for ELBs, what are Sticky sessions (that you can disable/enable at ELB level) and why are they useful?
Answer
When enabled, sticky sessions ensure all traffic from a user during a session goes to the same EC2 instance (or Target Group, for ALBs). This is useful if user data is being saved to a file on the EC2 instances, for example, so for the whole session traffic from that user must go to the same instance.







Flashcard 4659061329164

Question
In AWS, for ELBs, [...] [...] load balancing allows ELB nodes to send traffic to resources in other zones (not just their own zone), better balancing the traffic
Answer

Cross Zone Load Balancing

^^ So if you have 2 instances in Zone/Subnet A and 4 in Zone/Subnet B, the ELB DNS alternates requests between the Zone A and Zone B ELB nodes, but each node can route cross-zone, so the Zone A EC2 instances (just 2 of them) do not get overwhelmed.

