
data, it is unrealistic to assume that the treatment groups are exchangeable. In other words, there is no reason to expect that the groups are the same in all relevant variables other than the treatment.


We denote by Y(1) the potential outcome of happiness you would observe if you were to get a dog (T = 1)
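A minimal sketch of this notation, using hypothetical happiness scores (the numbers are invented for illustration): each unit has two potential outcomes, Y(1) and Y(0), and the individual treatment effect is their difference, even though only one of the two is ever observed.

```python
# Hypothetical potential outcomes for one person:
# Y(1) = happiness if they get a dog, Y(0) = happiness if they do not.
potential_outcomes = {1: 8, 0: 5}  # invented happiness scores

# The individual treatment effect (ITE) is the difference of the two.
ite = potential_outcomes[1] - potential_outcomes[0]
print(ite)  # 3

# In reality only one of the two is observed: if T = 1 we see Y(1),
# and Y(0) remains a counterfactual (the fundamental problem of
# causal inference).
t = 1
observed = potential_outcomes[t]
print(observed)  # 8
```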


lly estimate quantities such as E_X[ E[Y | T = 1, X] − E[Y | T = 0, X] ]? We will often use a model (e.g. linear regression or some fancier predictor from machine learning) in place of the conditional expectations E[Y | T = t, X = x]. We will refer to estimators that use models like this as model-assisted estimators. Now that we've gotten some of this terminology out of the way, we can proceed t
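A sketch of the simplest version of this estimator, on invented data: estimate each conditional expectation E[Y | T = t, X = x] by a group mean (a linear regression or an ML predictor would replace this step in a real model-assisted estimator), then average the difference over the empirical distribution of X.

```python
from collections import defaultdict

# Hypothetical data: (t, x, y) triples with binary treatment t and
# a binary covariate x.
data = [
    (1, 0, 3.0), (0, 0, 1.0), (1, 0, 5.0), (0, 0, 3.0),
    (1, 1, 8.0), (0, 1, 4.0), (1, 1, 10.0), (0, 1, 6.0),
]

# Estimate E[Y | T = t, X = x] by group means (the simplest "model").
sums = defaultdict(lambda: [0.0, 0])
for t, x, y in data:
    sums[(t, x)][0] += y
    sums[(t, x)][1] += 1
mu = {key: total / count for key, (total, count) in sums.items()}

# Average the difference over the empirical distribution of X:
# an estimate of E_X[ E[Y | T=1, X] - E[Y | T=0, X] ].
ate = sum(mu[(1, x)] - mu[(0, x)] for _, x, _ in data) / len(data)
print(ate)  # 3.0 for this toy dataset
```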


cal Markov assumption would tell us that we can factorize P(x, y) as P(x)P(y|x), but it would also allow us to factorize P(x, y) as P(x)P(y), meaning it allows distributions where X and Y are independent. In contrast, the minimality assumption does not allow this additional independence


The causal graph for interventional distributions is simply the same graph that was used for the observational joint distribution, but with all of the edges to the intervened node(s) removed.
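This "graph surgery" can be sketched in a few lines; the three-node graph below is a hypothetical example (confounder X causing both T and Y), and intervening on T deletes every edge into T.

```python
# Hypothetical observational DAG: X -> T, X -> Y, T -> Y.
edges = [("X", "T"), ("X", "Y"), ("T", "Y")]

def intervene(edges, node):
    """Graph surgery for do(node = t): drop every edge INTO the
    intervened node, keeping all other edges unchanged."""
    return [(u, v) for (u, v) in edges if v != node]

print(intervene(edges, "T"))  # [('X', 'Y'), ('T', 'Y')]
```

The surviving edges T → Y and X → Y are exactly the graph used for the interventional distribution under do(T = t).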


Rather, my outcome is only a function of my own treatment. We've been using this assumption implicitly throughout this chapter. We'll now formalize it. Assumption 2.4 (No Interference): Y_i(t_1, ..., t_{i-1}, t_i, t_{i+1}, ..., t_n) = Y_i(t_i)


Generating synthetic time-series and sequential data is more challenging than generating tabular data, where all the information regarding one individual is normally stored in a single row. In sequential data, such as credit card transactions, information can be spread across many rows, and preserving the correlations between the rows (the events) and the columns (the variables) is key. Furthermore, the length of the sequences is variable; some cases may comprise just a few transactions while others may have thousands.


When we say "identification" in this book, we are referring to the process of moving from a causal estimand to an equivalent statistical estimand


We refer to the flow of association along directed paths as causal association


Definition 3.4 (d-separation): Two (sets of) nodes X and Y are d-separated by a set of nodes Z if all of the paths between (any node in) X and (any node in) Y are blocked by Z. Source: Pearl (1988), Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
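The definition can be checked mechanically for small DAGs by enumerating all paths and applying the chain/fork/collider blocking rules. The sketch below is a naive, illustration-only implementation (real libraries use far more efficient algorithms) on a hypothetical chain X → Z → Y, where the only path X–Z–Y is blocked by conditioning on Z.

```python
# Hypothetical chain DAG: X -> Z -> Y.
edges = {("X", "Z"), ("Z", "Y")}

def descendants(node):
    """All nodes reachable from `node` along directed edges."""
    out, stack = set(), [node]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b not in out:
                out.add(b)
                stack.append(b)
    return out

def undirected_paths(target, path):
    """Yield all simple paths (ignoring edge direction) ending at target."""
    u = path[-1]
    if u == target:
        yield path
        return
    for a, b in edges:
        nxt = b if a == u else a if b == u else None
        if nxt is not None and nxt not in path:
            yield from undirected_paths(target, path + [nxt])

def blocked(path, z):
    """Is this path blocked by z, per the chain/fork/collider rules?"""
    for i in range(1, len(path) - 1):
        prev, mid, nxt = path[i - 1], path[i], path[i + 1]
        collider = (prev, mid) in edges and (nxt, mid) in edges
        if collider and mid not in z and not (descendants(mid) & z):
            return True  # unconditioned collider blocks the path
        if not collider and mid in z:
            return True  # chain/fork node in z blocks the path
    return False

def d_separated(x, y, z):
    """X and Y are d-separated by z iff every path is blocked."""
    return all(blocked(p, z) for p in undirected_paths(y, [x]))

print(d_separated("X", "Y", {"Z"}))  # True: conditioning on Z blocks the chain
print(d_separated("X", "Y", set()))  # False: the open chain transmits association
```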


Regular Bayesian networks are purely statistical models, so we can only talk about the flow of association in Bayesian networks.


Positivity is the condition that all subgroups of the data with different covariates have some probability of receiving any value of treatment
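A rough empirical version of this condition can be checked directly, as sketched below on invented data: within every covariate stratum, both treatment values should appear at least once.

```python
from collections import defaultdict

# Hypothetical (x, t) pairs: covariate stratum and binary treatment.
data = [
    ("young", 0), ("young", 1),
    ("old", 0), ("old", 0),   # no treated units among the old
]

# Record which treatment values occur in each stratum.
seen = defaultdict(set)
for x, t in data:
    seen[x].add(t)

# Strata missing either treatment value are empirical positivity violations.
violations = [x for x, treatments in seen.items() if treatments != {0, 1}]
print(violations)  # ['old']
```

A check like this only detects exact zeros in the sample; near-zero treatment probabilities are just as problematic for estimation and call for inspecting estimated propensity scores instead.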


ty, difficulty, interval, recency, text size, etc. The review may also be semantic or neural, where connections between elements determine the sequence of review.

Review types

Search and review

Search and review in SuperMemo is a review of a subset of elements that contain a given search phrase. For example, before an exam in microbiology, a student may wish to review all his knowledge of viruses using the following method:

- search for all elements containing the phrase virus (e.g. with Ctrl+F)
- review all those elements (e.g. with Ctrl+Shift+L)

The review may include all subset elements (e.g. Learning : Rev