
#deep-learning #keras #lstm #python #sequence

4.7.3 Tips for LSTM Input

This section lists some final tips to help you when preparing your input data for LSTMs. The LSTM input layer must be 3D. The three input dimensions are: samples, time steps and features. The LSTM input layer is defined by the input_shape argument on the first hidden layer. The input_shape argument takes a tuple of two values that define the number of time steps and features. The number of samples is assumed to be 1 or more. The reshape() function on NumPy arrays can be used to reshape your 1D or 2D data to be 3D. The reshape() function takes a tuple as an argument that defines the new shape.
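For example, a minimal sketch of the reshape and the input_shape argument (the array sizes and layer widths here are illustrative assumptions, not taken from the book):

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Illustrative 2D data: 100 samples, each a univariate sequence of 10 values.
data = np.random.rand(100, 10)

# Reshape to 3D [samples, time steps, features]: 100 samples, 10 time steps, 1 feature.
data = data.reshape((100, 10, 1))

# input_shape is (time steps, features); the number of samples is left implicit.
model = Sequential()
model.add(LSTM(32, input_shape=(10, 1)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')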





Loss function for multiclass classification
#tensorflow #tensorflow-certificate
# MNIST dataset - multiclass classification: the labels are plain integer class
# indices, so sparse_categorical_crossentropy is the appropriate loss.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=20, callbacks=[my_callback])
model.evaluate(test_images, test_labels)
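The snippet above assumes that model, my_callback and the MNIST arrays are already defined. A minimal sketch of those missing pieces (the architecture and the 99%-accuracy stopping callback are assumptions, not part of the original exercise); with these definitions in place, the compile/fit/evaluate lines above run as written:

import tensorflow as tf

# Load MNIST: 10 digit classes with integer labels, which is why
# sparse_categorical_crossentropy is used rather than categorical_crossentropy.
(training_images, training_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
training_images, test_images = training_images / 255.0, test_images / 255.0

# Assumed callback: stop training once accuracy exceeds 99%.
class MyCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get('accuracy', 0) > 0.99:
            self.model.stop_training = True

my_callback = MyCallback()

# Assumed architecture: flatten 28x28 images, one hidden layer, 10-way softmax output.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])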





Flashcard 7625203387660

Tags
#recurrent-neural-networks #rnn
Question
In this specific domain of customer base analysis, probabilistic approaches from the [...] model family represent the gold standard, leveraging easily observable Recency and Frequency (RF, or RFM when also including the monetary value) metrics together with a latent attrition process to deliver accurate predictions (Schmittlein, Morrison, & Colombo, 1987; Fader, Hardie, & Lee, 2005; Fader & Hardie, 2009).
Answer
‘‘Buy ’Till You Die” (BTYD)
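For illustration only, a BG/NBD model (one member of the BTYD family, Fader, Hardie, & Lee, 2005) can be fitted from frequency/recency summaries with the third-party lifetimes package; the package choice and the CDNOW sample data below are assumptions, not from the cited papers:

from lifetimes import BetaGeoFitter
from lifetimes.datasets import load_cdnow_summary

# RFM-style summary: one row per customer with frequency, recency and T (customer age).
summary = load_cdnow_summary()

# Fit the BG/NBD latent-attrition model on the observable RF metrics.
bgf = BetaGeoFitter(penalizer_coef=0.001)
bgf.fit(summary['frequency'], summary['recency'], summary['T'])

# Expected number of purchases per customer over the next 30 time units.
predicted = bgf.conditional_expected_number_of_purchases_up_to_time(
    30, summary['frequency'], summary['recency'], summary['T'])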









Flashcard 7625581137164

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
While an [...] can carry forward useful information from one timestep to the next, it is much less effective at capturing long-term dependencies (Bengio, Simard, & Frasconi, 1994; Pascanu, Mikolov, & Bengio, 2013). This limitation turns out to be a crucial problem in marketing analytics.
Answer
RNN









#deep-learning #keras #lstm #python #sequence
Additional hidden layers can be added to a Multilayer Perceptron neural network to make it deeper. The additional hidden layers are understood to recombine the learned representation from prior layers and create new representations at high levels of abstraction. For example, from lines to shapes to objects.
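As an illustrative Keras sketch (the layer sizes and input dimension are arbitrary assumptions), deepening an MLP is just a matter of stacking additional Dense hidden layers:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Each additional hidden layer recombines the previous layer's representation
# into a more abstract one (lines -> shapes -> objects, by analogy).
model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),  # first hidden layer
    Dense(32, activation='relu'),                     # additional hidden layer
    Dense(16, activation='relu'),                     # additional hidden layer
    Dense(1, activation='sigmoid')                    # output layer
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])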






#deep-learning #keras #lstm #python #sequence
The LSTM input layer must be 3D. The three input dimensions are: samples, time steps and features.






Flashcard 7625989557516

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question
Consumer behavior is inherently [...], which makes RNNs a perfect fit.
Answer
sequential









Flashcard 7625992178956

Tags
#tensorflow #tensorflow-certificate
Question
From TensorFlow version 2.7.0, model.[...]() no longer automatically upscales inputs from shape (batch_size,) to (batch_size, 1).
Answer
fit
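In practice this means a 1D feature array needs an explicit trailing axis before being passed to fit(); a minimal sketch (the toy regression data is an illustrative assumption):

import tensorflow as tf

# Illustrative 1D regression data of shape (6,).
X = tf.constant([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
y = tf.constant([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

# Since TF 2.7, fit() no longer adds the trailing axis automatically,
# so expand (batch_size,) to (batch_size, 1) explicitly.
X = tf.expand_dims(X, axis=-1)  # shape (6, 1)

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(loss='mae', optimizer='sgd')
model.fit(X, y, epochs=5)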









#Pathology
Just as identification of APC mutations in FAP provided molecular insight into the pathogenesis of the majority of sporadic colon cancers,