Edited, memorised or added to reading queue

on 22-Jan-2026 (Thu)


Flashcard 7789314968844

Tags
#DAG #causal #edx
Question
For example, suppose L is fetal death. We don't know the true causal DAG, so we propose seven causal DAGs. Suppose that L does not help block a backdoor path in any of the seven DAGs; then we [...] for L, even if L were strongly associated with A and Y.
Answer
will not adjust


Parent (intermediate) annotation

For example, suppose L is fetal death. We don't know the true causal DAG, so we propose seven causal DAGs. Suppose that L does not help block a backdoor path in any of the seven DAGs; then we will not adjust for L, even if L were strongly associated with A and Y.
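This rule can be illustrated with a minimal sketch. The DAG below is hypothetical (U a confounder of A and Y; L a collider on A → L ← Y, as fetal death would be): we enumerate the backdoor paths from A to Y and check whether L appears on any of them. Note this only finds backdoor paths; a full adjustment decision also needs d-separation (collider-blocking) logic, which is omitted here.

```python
def backdoor_paths(edges, a, y):
    """Enumerate undirected paths from a to y whose first edge points INTO a."""
    nbrs = {}
    for u, v in edges:                        # directed edge u -> v
        nbrs.setdefault(u, []).append((v, "out"))
        nbrs.setdefault(v, []).append((u, "in"))
    paths = []

    def walk(node, path):
        if node == y:
            paths.append(path)
            return
        for nxt, _ in nbrs.get(node, []):
            if nxt not in path:               # simple paths only
                walk(nxt, path + [nxt])

    for nxt, direction in nbrs.get(a, []):
        if direction == "in":                 # edge nxt -> a, i.e. into a
            walk(nxt, [a, nxt])
    return paths

# Hypothetical DAG: A -> Y confounded by U; L is a collider (A -> L <- Y).
edges = [("U", "A"), ("U", "Y"), ("A", "Y"), ("A", "L"), ("Y", "L")]
paths = backdoor_paths(edges, "A", "Y")       # [['A', 'U', 'Y']]
assert all("L" not in p for p in paths)       # L on no backdoor path: don't adjust
```

Here the only backdoor path is A ← U → Y, and L does not lie on it, so by the rule above we would not adjust for L, however strongly it is associated with A and Y.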

Original toplevel document (pdf)


Flashcard 7789318901004

Tags
#causality #statistics
Question
The causal edges assumption endows [...] paths with the unique role of carrying causation along them.
Answer
directed


Parent (intermediate) annotation

The causal edges assumption endows directed paths with the unique role of carrying causation along them.

Original toplevel document (pdf)


Lifelong deep neural networks (L-DNN) were inspired by brain neurophysiology. These deep learning algorithms separate feature training and rule training and are able to add new rule information on the fly.


Parent (intermediate) annotation

Lifelong deep neural networks (L-DNN) were inspired by brain neurophysiology. These deep learning algorithms separate feature training and rule training and are able to add new rule information on the fly. While they still learn features slowly using a large and balanced data set, L-DNNs don't learn rules at this stage. And they don't need images of all known valve defects: the dataset can be relatively generic as long as the objects possess similar features.

Original toplevel document

Deep Learning Has Reinvented Quality Control in Manufacturing—but It Hasn’t Gone Far Enough AI systems that make use of “lifelong learning” techniques are more flexible and faster to train
These so-called continual or lifelong learning systems, and in particular lifelong deep neural networks (L-DNN), were inspired by brain neurophysiology. These deep learning algorithms separate feature training and rule training and are able to add new rule information on the fly. While they still learn features slowly using a large and balanced data set, L-DNNs don't learn rules at this stage. And they don't need images of all known valve defects: the dataset can be relatively generic as long as the objects possess similar features (such as curves, edges, surface properties). With L-DNNs, this part of model creation can be done once, and without the help of the manufacturers.

What our hypothetical valve manufacturer needs to know is this: after the first step of feature learning is completed, they need only provide a small set of images of good valves for the system to learn a set of rules that define a good valve. There's no need to provide any images of defective valves. L-DNNs will learn on a single presentation of a small dataset using only "good" data (in other words, data about good ventilator valves), and then advise the user when an atypical product is encountered.

This method is akin to the process humans use to spot differences in objects they encounter every day: an effortless task for us, but a very hard one for deep learning models until L-DNN systems came along. Rather than needing thousands of varied images, L-DNNs only require a handful of images to train and build a prototypical understanding of the object. The system can be deployed in seconds.
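The "learn from good parts only, flag atypical ones" step can be sketched as a nearest-prototype novelty check over feature vectors. Everything below is illustrative, not the article's actual L-DNN algorithm: the feature vectors, the threshold, and the function names are assumptions, and the slow first-stage feature extractor is assumed to already exist.

```python
def fit_prototype(good_features):
    # Average the feature vectors of known-good parts into a single prototype.
    n = len(good_features)
    dim = len(good_features[0])
    return [sum(v[i] for v in good_features) / n for i in range(dim)]

def is_atypical(prototype, features, threshold):
    # Flag a part whose features sit far (Euclidean distance) from the
    # good-part prototype; no defective examples were ever needed.
    dist = sum((a - b) ** 2 for a, b in zip(prototype, features)) ** 0.5
    return dist > threshold

good = [[1.0, 0.9], [1.1, 1.0], [0.9, 1.1]]   # hypothetical good-valve features
proto = fit_prototype(good)
print(is_atypical(proto, [1.05, 0.95], threshold=0.5))  # False: near the prototype
print(is_atypical(proto, [3.0, 0.2], threshold=0.5))    # True: atypical, advise user
```

The design point matches the passage: only "good" data is needed at this stage, and anything far from the learned prototype is reported as atypical.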




Flashcard 7789324143884

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question
Instead of absolute [...], the time differences Δ(x_{t-1}, x_t) to the previous inputs x_{t-1} are fed to the RNN at each timestep t.
Answer
timestamps


Parent (intermediate) annotation

Instead of absolute timestamps, the time differences Δ(x_{t-1}, x_t) to the previous inputs x_{t-1} are fed to the RNN at each timestep t.
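The transformation described above can be sketched as follows; the timestamp values are illustrative, not from the paper, and the convention of a zero delta for the first event is an assumption.

```python
# Convert absolute event timestamps (seconds) into per-step time differences,
# which is what gets fed to the RNN at each timestep t.
timestamps = [0.0, 2.5, 3.0, 7.0, 7.5]

# delta_t = t_k - t_{k-1}; the first event has no predecessor, so its delta is 0.
prev = [timestamps[0]] + timestamps[:-1]
deltas = [t - p for p, t in zip(prev, timestamps)]
# deltas == [0.0, 2.5, 0.5, 4.0, 0.5]
```

Each delta would then be carried as one feature of the input x_t, making the sequence invariant to when it started and keeping input magnitudes bounded by inter-event gaps rather than growing with wall-clock time.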

Original toplevel document (pdf)


Flashcard 7789325978892

Tags
#pytest #python #unittest
Question
Beware of float return values!
0.1 + 0.1 + 0.1 == 0.3 Sometimes [...]
Answer
false


Parent (intermediate) annotation

Beware of float return values! 0.1 + 0.1 + 0.1 == 0.3 Sometimes false

Original toplevel document

Beware of float return values! 0.1 + 0.1 + 0.1 == 0.3 is sometimes false:
    assert 0.1 + 0.1 + 0.1 == 0.3, "Usual way to compare does not always work with floats!"
Instead use:
    assert 0.1 + 0.1 + 0.1 == pytest.approx(0.3)
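A self-contained sketch of why the exact comparison fails and what to use instead. `pytest.approx` is pytest's documented helper for tolerant comparison (relative tolerance 1e-6 by default); the stdlib `math.isclose` shown here is an equivalent that also runs outside a pytest test.

```python
import math

# 0.1 has no finite binary representation, so the sum accumulates
# rounding error and is NOT exactly 0.3.
total = 0.1 + 0.1 + 0.1
print(total)               # 0.30000000000000004
assert total != 0.3

# Inside a pytest test you would write:
#     import pytest
#     assert total == pytest.approx(0.3)
# The stdlib equivalent, usable anywhere:
assert math.isclose(total, 0.3, rel_tol=1e-6)
```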







Flashcard 7789327551756

Tags
#pytest #python #unittest
Question
Beware of [...] return values!
0.1 + 0.1 + 0.1 == 0.3 Sometimes false
Answer
float


Parent (intermediate) annotation

Beware of float return values! 0.1 + 0.1 + 0.1 == 0.3 Sometimes false

Original toplevel document

Beware of float return values! 0.1 + 0.1 + 0.1 == 0.3 is sometimes false:
    assert 0.1 + 0.1 + 0.1 == 0.3, "Usual way to compare does not always work with floats!"
Instead use:
    assert 0.1 + 0.1 + 0.1 == pytest.approx(0.3)