Edited, memorised or added to reading queue on 28-Oct-2025 (Tue)


Flashcard 7734850358540

Tags
#deep-learning #embeddings
Question
Using a similar idea to how we get word [...], we can make an analogy: a word is like a product; a sentence is like ONE customer’s shopping sequence; an article is like the sequence of ALL customers’ shopping sequences.
Answer
embeddings


Parent (intermediate) annotation

Using a similar idea to how we get word embeddings, we can make an analogy: a word is like a product; a sentence is like ONE customer’s shopping sequence; an article is like the sequence of ALL customers’ shopping sequences.
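The analogy suggests training a word2vec-style model directly on purchase logs to obtain product embeddings. Below is a minimal sketch of that idea, assuming gensim 4.x; the product IDs, customer sequences, and hyperparameters are made up for illustration and do not come from the original source.

```python
# Minimal sketch of the product-embedding analogy (assumes gensim 4.x).
# Each customer's shopping sequence plays the role of a "sentence" of
# product IDs, and the full purchase log plays the role of the "article"
# (corpus); word2vec then learns product vectors the way it learns word vectors.
from gensim.models import Word2Vec

# Hypothetical purchase histories: one list of product IDs per customer.
shopping_sequences = [
    ["milk", "bread", "butter"],
    ["milk", "cereal", "bananas"],
    ["bread", "butter", "jam"],
    ["cereal", "milk", "bananas", "bread"],
]

model = Word2Vec(
    sentences=shopping_sequences,  # the "corpus" of customer sequences
    vector_size=16,                # embedding dimension
    window=2,                      # context window within one shopping trip
    min_count=1,                   # keep every product, even rare ones
    sg=1,                          # skip-gram, as in the word2vec analogy
    epochs=50,
)

# Embedding vector for a product, and products bought in similar contexts.
print(model.wv["milk"])
print(model.wv.most_similar("milk", topn=3))
```

Products that co-occur in similar shopping contexts end up with nearby vectors, just as words that appear in similar sentence contexts do.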








Flashcard 7758280002828

Tags
#causality #statistics
Question
In contrast, the non-strict causal edges assumption would allow for some parents to not be causes of their children. It would just assume that children are not causes of their parents. This allows us to draw graphs with extra edges to make fewer assumptions, just like we would in Bayesian networks, where more edges means fewer [...] assumptions.
Answer
independence


Parent (intermediate) annotation

…assume that children are not causes of their parents. This allows us to draw graphs with extra edges to make fewer assumptions, just like we would in Bayesian networks, where more edges means fewer independence assumptions.
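To make the last point concrete, here is a small numeric sketch with assumed binary variables and made-up probabilities (none of these numbers come from the source): a chain graph X → Y → Z encodes the independence assumption X ⊥ Z | Y, while adding the extra edge X → Z drops that assumption, so the graph with more edges makes fewer independence assumptions.

```python
# Chain graph X -> Y -> Z factorizes as P(X) P(Y|X) P(Z|Y) and therefore
# assumes X ⊥ Z | Y; the denser graph with the extra edge X -> Z uses
# P(Z|X,Y) instead and makes no such independence assumption.
# All probabilities below are invented purely for illustration.
import itertools
from math import isclose

p_x = {0: 0.6, 1: 0.4}                      # P(X = x)
p_y1_given_x = {0: 0.2, 1: 0.7}             # P(Y = 1 | X = x)
p_z1_given_y = {0: 0.3, 1: 0.8}             # P(Z = 1 | Y = y), chain graph
p_z1_given_xy = {(0, 0): 0.3, (0, 1): 0.8,  # P(Z = 1 | X = x, Y = y),
                 (1, 0): 0.5, (1, 1): 0.6}  # graph with the extra edge X -> Z

def joint(chain=True):
    """Joint P(X, Y, Z) under either factorization."""
    table = {}
    for x, y, z in itertools.product([0, 1], repeat=3):
        py = p_y1_given_x[x] if y == 1 else 1 - p_y1_given_x[x]
        pz1 = p_z1_given_y[y] if chain else p_z1_given_xy[(x, y)]
        pz = pz1 if z == 1 else 1 - pz1
        table[(x, y, z)] = p_x[x] * py * pz
    return table

def x_indep_z_given_y(table):
    """True iff P(Z = 1 | X, Y) equals P(Z = 1 | Y) everywhere, i.e. X ⊥ Z | Y."""
    for y in [0, 1]:
        p_y_marg = sum(table[(x, y, z)] for x in [0, 1] for z in [0, 1])
        p_z1_y = sum(table[(x, y, 1)] for x in [0, 1]) / p_y_marg
        for x in [0, 1]:
            p_z1_xy = table[(x, y, 1)] / (table[(x, y, 0)] + table[(x, y, 1)])
            if not isclose(p_z1_xy, p_z1_y):
                return False
    return True

print(x_indep_z_given_y(joint(chain=True)))   # True: the chain assumes X ⊥ Z | Y
print(x_indep_z_given_y(joint(chain=False)))  # False: the extra edge removes it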








#deep-learning #keras #lstm #python #sequence
Building a deep RNN by stacking multiple recurrent hidden states on top of each other potentially allows the hidden state at each level to operate at different timescales.


Parent (intermediate) annotation

…means that the addition of layers adds levels of abstraction of the input observations over time: in effect, chunking observations over time, or representing the problem at different time scales. ... building a deep RNN by stacking multiple recurrent hidden states on top of each other. This approach potentially allows the hidden state at each level to operate at different timescales — How to Construct Deep Recurrent Neural Networks, 2013
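In Keras, this stacking amounts to passing each recurrent layer's full sequence of hidden states to the layer above. A minimal sketch follows, with an assumed input shape and assumed layer sizes that are not taken from the source.

```python
# Minimal Keras sketch of a stacked (deep) RNN: every layer except the top one
# sets return_sequences=True, so the layer above receives one hidden state per
# timestep rather than only the final state.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

timesteps, features = 50, 10  # assumed sequence length and feature count

model = Sequential([
    Input(shape=(timesteps, features)),
    # Lower layer: operates closest to the raw per-timestep observations.
    LSTM(64, return_sequences=True),
    # Middle layer: receives a sequence of hidden states, not raw inputs.
    LSTM(32, return_sequences=True),
    # Top layer: returns only its final hidden state for the output head.
    LSTM(16),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```

The lower layers track fast, low-level patterns in the raw observations, while the higher layers summarise them into slower-changing representations, which is the "different timescales" idea in the quoted paper.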
