
In simple exponential smoothing, which satisfies the formula \(\hat{x}_{n+1} = \alpha x_n + (1-\alpha)\hat{x}_n\), the lower the value of α, the smoother the forecasts will be, because they are not much affected by recent values.
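A minimal sketch of this recursion, using made-up data, shows how a small α damps the response to recent observations (the helper name and data are illustrative, not from the source):

```python
def ses_forecasts(x, alpha):
    """1-step-ahead forecasts via xhat_{t+1} = alpha*x_t + (1-alpha)*xhat_t.

    Initialises xhat_1 = x[0]; returns forecasts of X_1..X_n.
    """
    xhat = [x[0]]
    for xt in x[:-1]:
        xhat.append(alpha * xt + (1 - alpha) * xhat[-1])
    return xhat

data = [10, 12, 9, 14, 11, 13]
smooth = ses_forecasts(data, alpha=0.2)    # changes slowly
reactive = ses_forecasts(data, alpha=0.9)  # tracks recent values closely
```

With α = 0.2 the forecast after observing 12 moves only to 10.4, while with α = 0.9 it jumps to 11.8 — the low-α series is visibly smoother.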


The 1-step ahead forecast error at time t, denoted \(e_t\), is the difference between the observed value and the 1-step ahead forecast of \(X_t\): \(e_t = x_t - \hat{x}_t\). The sum of squared errors, or SSE, is given by \(\mathrm{SSE} = \sum_{t=1}^{n} e_t^2 = \sum_{t=1}^{n}(x_t - \hat{x}_t)^2\). Given observed values \(x_1, x_2, \ldots, x_n\), the optimal value of the smoothing parameter α for simple exponential smoothing is the value that minimises the sum of squared errors.
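One simple way to find that optimal α is a grid search over [0, 1], computing the SSE of the 1-step ahead forecasts for each candidate. This is a sketch with hypothetical data, not a prescribed method from the source (a proper optimiser could be used instead):

```python
def sse(x, alpha):
    """Sum of squared 1-step-ahead forecast errors, with xhat_1 = x[0]."""
    xhat = x[0]
    total = 0.0
    for t in range(1, len(x)):
        xhat = alpha * x[t - 1] + (1 - alpha) * xhat  # forecast of x[t]
        total += (x[t] - xhat) ** 2
    return total

data = [10, 12, 9, 14, 11, 13, 12]
# Grid search over alpha in {0.00, 0.01, ..., 1.00}.
best_alpha = min((a / 100 for a in range(101)), key=lambda a: sse(data, a))
```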


Suppose that the time series \(X_t\) can be described by an additive non-seasonal model with a linear trend component, that is, \(X_t = m + bt + W_t\), where b is the slope of the trend component \(m_t = m + bt\). Note that \(X_{t+1} = m + b(t+1) + W_{t+1} = (m + bt) + b + W_{t+1} = m_t + b + W_{t+1}\).
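The identity above can be checked numerically; this sketch uses arbitrary illustrative values for m, b, and the noise term:

```python
# Check X_{t+1} = m_t + b + W_{t+1} for the model X_t = m + b*t + W_t.
m, b = 5.0, 0.7  # illustrative intercept and trend slope

def level(t):
    """Trend component m_t = m + b*t."""
    return m + b * t

def x(t, w):
    """Observation X_t = m + b*t + W_t with noise w."""
    return m + b * t + w

t, w_next = 4, 0.3
lhs = x(t + 1, w_next)          # X_{t+1} directly from the model
rhs = level(t) + b + w_next     # m_t + b + W_{t+1}
```

The two sides agree exactly, which is the point of the derivation: moving one step ahead adds the slope b to the current level.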


Simple exponential smoothing: the term exponential refers to the fact that the weights \(\alpha(1-\alpha)^i\) lie on an exponential curve.
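These weights come from repeatedly expanding the smoothing recursion: each older observation \(x_{n-i}\) receives weight \(\alpha(1-\alpha)^i\), decaying geometrically, with a remainder weight \((1-\alpha)^n\) on the initial forecast. A quick sketch (α chosen arbitrarily for illustration):

```python
alpha = 0.3
n = 10
# Weight on x_{n-i} after expanding the SES recursion i+1 times.
weights = [alpha * (1 - alpha) ** i for i in range(n)]
# The first n weights plus the remainder (1-alpha)**n on the
# initial forecast account for all the weight, i.e. they sum to 1.
remainder = (1 - alpha) ** n
```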


If a time series \(X_t\) is described by an additive model with constant level and no seasonality, 1-step ahead forecasts may be obtained by simple exponential smoothing using the formula \(\hat{x}_{n+1} = \alpha x_n + (1-\alpha)\hat{x}_n\), where \(x_n\) is the observed value at time n, \(\hat{x}_n\) and \(\hat{x}_{n+1}\) are the 1-step ahead forecasts of \(X_n\) and \(X_{n+1}\), and α is a smoothing parameter, \(0 \le \alpha \le 1\).