Edited, memorised or added to reading queue

on 10-Jan-2023 (Tue)


#feature-engineering #lstm #recurrent-neural-networks #rnn
When an analyst uses feature engineering to predict behavior, the performance of the model will depend greatly on the analyst's domain knowledge and, in particular, on her ability to translate that domain knowledge into relevant features




#deep-learning #keras #lstm #python #sequence

One Hot Encoding

For categorical variables where no such ordinal relationship exists, the integer encoding is not enough.

In fact, using this encoding and allowing the model to assume a natural ordering between categories may result in poor performance or unexpected results (predictions halfway between categories).
In this case, a one hot encoding can be applied to the integer representation. The integer-encoded variable is removed and a new binary variable is added for each unique integer value. In the color variable example, there are 3 categories and therefore 3 binary variables are needed. A 1 is placed in the binary variable for that color, and 0s in the variables for the other colors.
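The integer and one hot encodings described above can be sketched in plain Python; the color values here are a hypothetical example:

```python
# Hypothetical color variable with 3 categories.
colors = ["red", "green", "blue", "green"]

# Integer encoding: map each category to an integer.
categories = sorted(set(colors))                      # ['blue', 'green', 'red']
int_encoded = [categories.index(c) for c in colors]   # [2, 1, 0, 1]

# One hot encoding: one binary variable per category,
# with a 1 for the observed color and 0s elsewhere.
one_hot = [[1 if i == v else 0 for i in range(len(categories))]
           for v in int_encoded]
# one_hot[0] is [0, 0, 1] -- only the 'red' column is set.
```

Keras offers `to_categorical` for the same integer-to-binary transformation.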





#deep-learning #keras #lstm #python #sequence
TBPTT(k1, k2), with k1 = k2 = k, is realized by the fixed-size three-dimensional input required to train recurrent neural networks like the LSTM. The LSTM expects input data to have the dimensions samples, time steps, and features. It is the second dimension of this input format, the time steps, that defines the number of time steps used for the forward and backward passes on your sequence prediction problem
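As a sketch of that three-dimensional format, a univariate sequence can be framed as (samples, time steps, features) in plain Python; the sequence here is a made-up example:

```python
# A univariate sequence of 10 values, to be framed as
# 2 samples x 5 time steps x 1 feature for an LSTM.
seq = list(range(10))
samples, time_steps, features = 2, 5, 1

X = [[[seq[s * time_steps + t]] for t in range(time_steps)]
     for s in range(samples)]

# X has shape (2, 5, 1); X[0] is [[0], [1], [2], [3], [4]].
```

With NumPy the equivalent is `array(seq).reshape(samples, time_steps, features)`, which is the form Keras LSTM layers consume.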




#deep-learning #keras #lstm #python #sequence

One Hot Encoding

For categorical variables where no such ordinal relationship exists, the integer encoding is not enough.





#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Consumer behavior is inherently sequential, which makes RNNs a perfect fit. We are employing RNNs in production now, which offers significant advantages over existing methods: reduced feature engineering, improved empirical performance, and better prediction explanations




#feature-engineering #lstm #recurrent-neural-networks #rnn
When an analyst uses feature engineering to predict behavior, the performance of the model will depend greatly on the analyst's domain knowledge




Flashcard 7558912412940

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
When an analyst uses feature engineering to predict behavior, the performance of the model will depend greatly on the analyst's domain knowledge, and in particular, her ability to translate that domain knowledge into [...]
Answer
relevant features




Flashcard 7558914247948

Tags
#bayes #programming #r #statistics
Question
The [...] distribution also shows the uncertainty in that estimated slope, because the distribution shows the relative credibility of values across the continuum.
Answer
posterior
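A minimal sketch of this idea, assuming made-up data, a flat prior, a known noise standard deviation, and a grid approximation over candidate slopes (all hypothetical choices, not from the source):

```python
import math

# Hypothetical data roughly following y = 2x plus noise.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 2.2, 3.9, 6.1]

grid = [i / 100 for i in range(100, 301)]  # candidate slopes 1.00 .. 3.00
sigma = 0.5                                # assumed noise standard deviation

def log_lik(b):
    # Gaussian log-likelihood of the data for slope b (no intercept).
    return sum(-((y - b * x) ** 2) / (2 * sigma ** 2) for x, y in zip(xs, ys))

weights = [math.exp(log_lik(b)) for b in grid]
total = sum(weights)
posterior = [w / total for w in weights]   # normalized over the grid

mean = sum(b * p for b, p in zip(grid, posterior))
sd = math.sqrt(sum((b - mean) ** 2 * p for b, p in zip(grid, posterior)))
# mean is the posterior estimate of the slope; sd, the spread of the
# posterior, quantifies the uncertainty in that estimate.
```

A narrow posterior (small `sd`) means the data pin the slope down tightly; a wide one shows that many slope values remain credible.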

