Edited, memorised or added to reading queue on 10-Oct-2025 (Fri)


Flashcard 7734851407116

Tags
#deep-learning #embeddings
Question
Following the same idea we use to get word embeddings, we can draw an analogy: [...] is like a product; a sentence is like ONE customer's shopping sequence; an article is like a sequence of ALL customers' shopping sequences
Answer
a word


Parent (intermediate) annotation

Following the same idea we use to get word embeddings, we can draw an analogy: a word is like a product; a sentence is like ONE customer's shopping sequence; an article is like a sequence of ALL customers' shopping sequences
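To make the analogy concrete, here is a minimal item2vec-style sketch (not from the source): gensim's Word2Vec is trained on made-up shopping sequences, treating each product as a word and each customer's purchase history as a sentence. All product IDs and sequences are hypothetical.

# Minimal sketch: learning product embeddings the way word embeddings
# are learned. Each customer's purchase history plays the role of a
# sentence; the corpus of all customers plays the role of an article.
# The product IDs and sequences below are made up for illustration.
from gensim.models import Word2Vec

# One "sentence" per customer: the ordered products they bought
shopping_sequences = [
    ["milk", "bread", "eggs", "butter"],
    ["beer", "chips", "salsa"],
    ["milk", "cereal", "bananas"],
    ["bread", "butter", "jam"],
]

# Skip-gram over product sequences, exactly as for words in sentences
model = Word2Vec(
    sentences=shopping_sequences,
    vector_size=32,   # embedding dimension
    window=2,         # context window within one shopping trip
    min_count=1,
    sg=1,             # skip-gram
)

# Products bought in similar contexts end up with similar vectors
print(model.wv.most_similar("milk"))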


Flashcard 7739306806540

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
The LSTM network forms a chain of repeating modules, like any RNN, but the modules, apart from the [...] recurrent function of the RNN, possess an internal recurrence (or self-loop), which lets the gradients flow for long durations without exploding or vanishing
Answer
external


Parent (intermediate) annotation

The LSTM network forms a chain of repeating modules, like any RNN, but the modules, apart from the external recurrent function of the RNN, possess an internal recurrence (or self-loop), which lets the gradients flow for long durations without exploding or vanishing
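A minimal NumPy sketch (illustrative, not from the source) of a single LSTM step, making both recurrences visible: the external recurrence feeds the hidden state h back into the module like any RNN, while the internal self-loop carries the cell state c along a mostly additive path, which is what lets gradients flow for long durations. All shapes and weights are hypothetical.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # W, U, b stack the parameters for the forget (f), input (i),
    # output (o), and candidate (g) transforms
    z = W @ x + U @ h_prev + b            # external recurrence: h_prev re-enters
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)
    c = f * c_prev + i * np.tanh(g)       # internal recurrence: self-loop on c
    h = o * np.tanh(c)                    # hidden state passed to the next step
    return h, c

# One step with input size 4 and hidden size 8 (random toy values)
n_in, n_h = 4, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * n_h, n_in))
U = rng.normal(size=(4 * n_h, n_h))
b = np.zeros(4 * n_h)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_h), np.zeros(n_h), W, U, b)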


Flashcard 7762378099980

Tags
#causality #statistics
Question
ignorability - we can make this assumption realistic by running [...], which force the treatment to be caused by nothing but a coin toss, so that we have the causal structure shown in Figure 2.2.
Answer
randomized experiments


Parent (intermediate) annotation

ignorability - we can make this assumption realistic by running randomized experiments, which force the treatment to be caused by nothing but a coin toss, so that we have the causal structure shown in Figure 2.2.
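An illustrative simulation (not from the source) of why randomization buys ignorability: treatment is assigned by a literal coin toss, so even a covariate that strongly affects the outcome cannot confound it, and a simple difference in means recovers the true average treatment effect. All numbers are made up.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

z = rng.normal(size=n)                        # covariate that affects the outcome
t = rng.integers(0, 2, size=n)                # coin-toss treatment: nothing causes T but chance
y = 2.0 * t + 1.5 * z + rng.normal(size=n)    # true causal effect of T is 2.0

# Because T is randomized, (Y(1), Y(0)) is independent of T,
# so the naive difference in means is an unbiased ATE estimate
ate_hat = y[t == 1].mean() - y[t == 0].mean()
print(f"difference-in-means ATE estimate: {ate_hat:.3f}")  # close to 2.0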


Flashcard 7762379934988

Tags
#deep-learning #keras #lstm #python #sequence
Question
Time steps. These are the past observations for a feature, such as [...] variables.
Answer
lag


Parent (intermediate) annotation

Time steps. These are the past observations for a feature, such as lag variables.
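A sketch (toy data, assumed setup) of how lag variables become the time-steps axis of a Keras LSTM input, which has shape [samples, time steps, features]: each sample's time steps are the past observations of the series.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

series = np.arange(100, dtype="float32")    # toy univariate series
n_lags = 3                                  # time steps = 3 lag variables

# Build supervised pairs: X[i] = [y(t-3), y(t-2), y(t-1)], target = y(t)
X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
y = series[n_lags:]
X = X.reshape((X.shape[0], n_lags, 1))      # [samples, time steps, features]

model = Sequential([LSTM(16, input_shape=(n_lags, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)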
