Edited, memorised or added to reading queue on 05-Sep-2023 (Tue)


#feature-engineering #lstm #recurrent-neural-networks #rnn
While LSTM models take raw behavioral data as input and therefore do not rely on feature engineering or domain knowledge, our experience taught us that some fine-tuning is required to achieve optimal LSTM performance.





Flashcard 7590003739916

Tags
#deep-learning #keras #lstm #python #sequence
Question
LSTM cells are comprised of [...] and gates
Answer
weights


Parent (intermediate) annotation

LSTM cells are comprised of weights and gates
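As a sketch of how those weights and gates interact, here is a minimal single-step LSTM cell in plain NumPy. This is not the Keras implementation; the gate ordering and random initialisation are assumptions made purely for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b stack the weights for the four
    gate computations: input (i), forget (f), candidate (g), output (o)."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b       # pre-activations for all gates, (4n,)
    i = sigmoid(z[0:n])              # input gate
    f = sigmoid(z[n:2 * n])          # forget gate
    g = np.tanh(z[2 * n:3 * n])      # candidate cell state
    o = sigmoid(z[3 * n:4 * n])      # output gate
    c = f * c_prev + i * g           # new cell state
    h = o * np.tanh(c)               # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), W, U, b)
print(h.shape, c.shape)  # -> (4,) (4,)
```

The gates are just sigmoid-squashed linear maps of the current input and previous hidden state; the weights are what training adjusts.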

Original toplevel document (pdf)








#deep-learning #keras #lstm #python #sequence

The first hidden layer in the network must define the number of inputs to expect, i.e. the shape of the input layer.

Input must be three-dimensional, comprised of samples, time steps, and features in that order



Parent (intermediate) annotation

The first hidden layer in the network must define the number of inputs to expect, i.e. the shape of the input layer. Input must be three-dimensional, comprised of samples, time steps, and features, in that order.
Samples: the rows in your data; one sample may be one sequence.
Time steps: the past observations for a feature, such as lag variables.
Features: the columns in your data.
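A minimal NumPy sketch of framing a univariate series into that (samples, time steps, features) shape; the series, window length, and variable names are made up for illustration:

```python
import numpy as np

# A univariate series of 10 observations, framed as sliding windows
series = np.arange(10, dtype=float)
timesteps = 3
windows = np.array([series[i:i + timesteps]
                    for i in range(len(series) - timesteps)])

# LSTM input must be (samples, time steps, features); one feature here,
# so add a trailing axis of length 1
X = windows.reshape(windows.shape[0], timesteps, 1)
print(X.shape)  # -> (7, 3, 1)
# The first LSTM layer would then declare input_shape=(3, 1)
```

Each row of `windows` is one sample (one short sequence), each of its 3 entries is a time step, and the final axis holds the single feature observed at each step.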

Original toplevel document (pdf)





#feature-engineering #lstm #recurrent-neural-networks #rnn
The dimensionality of the vector is often reduced through word embedding, a technique used in natural language processing


Parent (intermediate) annotation

The dimensionality of the vector is often reduced through word embedding, a technique used in natural language processing but with little applicability to panel data analysis. We skip this discussion in the interest of space.
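A word embedding is essentially a learned lookup table mapping each high-dimensional one-hot token index to a dense low-dimensional vector. A minimal NumPy sketch, where the vocabulary size, embedding dimension, and token ids are all invented for illustration (in practice the table is learned, e.g. by a Keras `Embedding` layer):

```python
import numpy as np

vocab_size, embed_dim = 10_000, 64
rng = np.random.default_rng(0)
E = rng.normal(size=(vocab_size, embed_dim))  # learned during training

token_ids = np.array([12, 7, 512])  # hypothetical token indices
vectors = E[token_ids]              # row lookup replaces 10000-dim one-hots
print(vectors.shape)  # -> (3, 64)
```

The dimensionality reduction is from `vocab_size` (one-hot) down to `embed_dim` per token.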

Original toplevel document (pdf)





Flashcard 7590009244940

Tags
#feature-engineering #has-images #lstm #recurrent-neural-networks #rnn
[unknown IMAGE 7103892294924]
Question
Fig. 1. Four customers with markedly different purchase patterns but [...] features in terms of recency (last purchase), frequency (number of purchases), and seniority (first purchase).
Answer
identical


Parent (intermediate) annotation

Fig. 1. Four customers with markedly different purchase patterns but identical features in terms of recency (last purchase), frequency (number of purchases), and seniority (first purchase).
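The point of Fig. 1 can be reproduced with toy data: below, two hypothetical customers (the figure uses four) with different purchase timing produce identical recency, frequency, and seniority features. The dates are invented for illustration.

```python
from datetime import date

def rfm(purchases, today):
    """Recency = days since last purchase, frequency = number of
    purchases, seniority = days since first purchase."""
    return ((today - max(purchases)).days,
            len(purchases),
            (today - min(purchases)).days)

today = date(2023, 9, 5)
# Customer A: a burst of purchases in January, then one in September
a = [date(2023, 1, 1), date(2023, 1, 2), date(2023, 1, 3), date(2023, 9, 1)]
# Customer B: purchases spread evenly across the year
b = [date(2023, 1, 1), date(2023, 3, 1), date(2023, 6, 1), date(2023, 9, 1)]

print(rfm(a, today), rfm(b, today))  # -> (4, 4, 247) (4, 4, 247)
```

Both customers collapse to the same feature triple even though their behavioral sequences differ, which is exactly the information an LSTM consuming the raw event sequence would retain.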

Original toplevel document (pdf)








Flashcard 7590011866380

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
While LSTM models take raw behavioral data as input and therefore do not rely on feature engineering or domain knowledge, our experience taught us that some [...] is required to achieve optimal LSTM performance.
Answer
fine-tuning


Parent (intermediate) annotation

While LSTM models take raw behavioral data as input and therefore do not rely on feature engineering or domain knowledge, our experience taught us that some fine-tuning is required to achieve optimal LSTM performance.

Original toplevel document (pdf)
