Edited, memorised or added to reading queue on 27-Apr-2025 (Sun)


Flashcard 7698524015884

Tags
#recurrent-neural-networks #rnn
Question
[...] business settings (i.e., when the time at which a customer becomes inactive is unobserved by the firm)
Answer
non-contractual


Parent (intermediate) annotation

non-contractual business settings (i.e., when the time at which a customer becomes inactive is unobserved by the firm)

Flashcard 7698526113036

Tags
#deep-learning #keras #lstm #python #sequence
Question
If your problem looks like a traditional [...] type problem with the most relevant lag observations within a small window, then perhaps develop a baseline of performance with an MLP and sliding window before considering an LSTM
Answer
autoregression


Parent (intermediate) annotation

If your problem looks like a traditional autoregression type problem with the most relevant lag observations within a small window, then perhaps develop a baseline of performance with an MLP and sliding window before considering an LSTM.
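As a rough illustration of that baseline, below is a minimal sketch of an MLP over a sliding window of lag observations. The toy sine series, window size of 3, and layer sizes are arbitrary choices for the example, and the imports assume the standalone keras package.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

def make_windows(series, window=3):
    """Turn a 1-D series into (samples, window) lag inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

series = np.sin(np.arange(200) * 0.1)   # toy series; substitute real data here
X, y = make_windows(series, window=3)

model = Sequential()
model.add(Dense(16, activation='relu', input_dim=3))  # the lag window as a flat input vector
model.add(Dense(1))                                   # one-step-ahead prediction
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

If this simple window-based MLP already performs well, the extra complexity of an LSTM may not be justified.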

#deep-learning #keras #lstm #python #sequence

3 common examples for managing state:

  • A long sequence was split into multiple subsequences (many samples each with many time steps). State should be reset after the network has been exposed to the entire sequence by making the LSTM stateful, turning off the shuffling of subsequences, and resetting the state after each epoch


Parent (intermediate) annotation

To make this more concrete, below are 3 common examples for managing state (a code sketch of the second pattern follows the list):

  • A prediction is made at the end of each sequence and sequences are independent. State should be reset after each sequence by setting the batch size to 1.
  • A long sequence was split into multiple subsequences (many samples each with many time steps). State should be reset after the network has been exposed to the entire sequence by making the LSTM stateful, turning off the shuffling of subsequences, and resetting the state after each epoch.
  • A very long sequence was split into multiple subsequences (many samples each with many time steps). Training efficiency is more important than the influence of long-term internal state
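A minimal sketch of the second pattern, assuming the standalone keras package; the layer size, toy data shapes, and the 5-epoch loop are made up for the example.

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Toy subsequences (kept in their original order) taken from one long sequence
n_samples, n_timesteps, n_features, batch_size = 32, 10, 1, 1
X = np.random.rand(n_samples, n_timesteps, n_features)
y = np.random.rand(n_samples, 1)

model = Sequential()
model.add(LSTM(8, batch_input_shape=(batch_size, n_timesteps, n_features), stateful=True))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

for epoch in range(5):
    # shuffle=False keeps subsequences in order so state carries across batches
    model.fit(X, y, epochs=1, batch_size=batch_size, shuffle=False, verbose=0)
    model.reset_states()  # reset once the whole sequence has been seen (end of epoch)

Calling fit one epoch at a time makes the explicit reset_states() call possible at the point where the full sequence has been consumed.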
