Edited, memorised or added to reading queue

on 29-Sep-2025 (Mon)


Flashcard 7750842715404

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question

The learning mechanism of the recurrent neural network thus involves:

(1) the forward propagation step where the [...] loss is calculated;

Answer
cross-entropy
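As a minimal sketch of the idea behind this card (not from the source PDF): the cross-entropy loss at one time-step is the negative log-probability the network assigns to the true class. The function name and example values below are illustrative assumptions.

```python
import math

# Illustrative: cross-entropy loss at a single RNN time-step, given the
# softmax output (a probability distribution over classes) and the index
# of the true class. In training, this is summed or averaged over steps.
def cross_entropy(probs, true_class):
    # L = -log p(true class)
    return -math.log(probs[true_class])

probs = [0.1, 0.7, 0.2]          # hypothetical softmax output
loss = cross_entropy(probs, 1)   # smaller when the true class gets more mass
```

The loss is 0 when the model assigns probability 1 to the correct class and grows without bound as that probability approaches 0.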


Flashcard 7757344935180

Tags
#deep-learning #keras #lstm #python #sequence
Question
Standardizing a dataset involves rescaling the distribution of values so that [...] of observed values is 0 and the standard deviation is 1
Answer
the mean
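A minimal sketch of standardization as this card describes it (not from the source PDF; the function name and sample data are illustrative):

```python
# Illustrative: rescale a 1-D dataset so the observed mean is 0 and the
# (population) standard deviation is 1.
def standardize(values):
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

data = [2.0, 4.0, 6.0, 8.0]
scaled = standardize(data)  # mean ~0, standard deviation ~1
```

In practice one would typically use `sklearn.preprocessing.StandardScaler`, fitting it on the training split only and applying the same transform to test data.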


Flashcard 7760826469644

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question
Instead of [...] timestamps, the time differences ∆(x_{t−1}, x_t) to the previous inputs x_{t−1} are fed to the RNN at each time-step t.
Answer
absolute
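A minimal sketch of this preprocessing step (not from the source PDF): convert a sequence of absolute timestamps into deltas to the previous event before feeding them to the RNN. The choice of 0 as the first event's delta is an assumption for illustration.

```python
# Illustrative: replace absolute event timestamps with time differences
# to the previous event. The first event has no predecessor, so we use 0.
def to_deltas(timestamps):
    return [0] + [t - prev for prev, t in zip(timestamps, timestamps[1:])]

events = [100, 130, 190, 200]  # hypothetical absolute timestamps
deltas = to_deltas(events)     # [0, 30, 60, 10]
```

Feeding deltas rather than absolute times makes the input translation-invariant: sequences with the same inter-event rhythm look identical regardless of when they started.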


#recurrent-neural-networks #rnn
Frequently contended disadvantages are disappearing: computational power is more affordable, and efficient training methods are advancing at a fast pace, which also facilitates adaptive fine-tuning of model parameters once "new" transaction data accrues. Datasets of historical customer transaction records are likewise more commonly available, larger, and more detailed, with observed behaviour across diverse contexts and platforms.


Parent (intermediate) annotation

...arning models of customer behavior remains their opaque nature and the lack of simple ways to interpret their behavior, which is especially true for the complex temporal dynamics of RNNs. Other frequently contended disadvantages are disappearing: computational power is more affordable and efficient training methods are advancing at a fast pace, which also facilitates the adaptive fine-tuning of model parameters once "new" transaction data accrues, and datasets of historical customer transaction records are more commonly available, larger, and more detailed with observed behaviour across diverse contexts and platforms. Furthermore, the skills required to build such models are becoming widespread, thanks to the mature open source programming tools and burgeoning research community. Deep neural networks...




#tensorflow #tensorflow-certificate
For imbalanced class problems, higher precision leads to fewer false positives.
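A minimal sketch of the metric this note refers to (not from the source document; function name and labels are illustrative). Precision is TP / (TP + FP), so pushing it higher directly means fewer false positives among the examples predicted positive.

```python
# Illustrative: precision for binary labels.
# precision = true positives / (true positives + false positives)
def precision(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp)

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0]
p = precision(y_true, y_pred)  # 2 TP, 1 FP -> 2/3
```

In practice `sklearn.metrics.precision_score` or `tf.keras.metrics.Precision` would be used; this hand-rolled version just makes the TP/FP bookkeeping explicit.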



Original toplevel document

TfC_02_classification-PART_2
Classification evaluation methods:
Accuracy: tf.keras.metrics.Accuracy(), sklearn.metrics.accuracy_score(). Not the best for imbalanced classes.
Precision: for imbalanced class problems; higher precision leads to fewer false positives.
Recall: higher recall leads to fewer false negatives. Tradeoff between recall and precision.
F1-score: combination of precision and recall, usually a good overall metric for classification.




Flashcard 7760833547532

Tags
#tensorflow #tensorflow-certificate
Question
For imbalanced class problems. Higher [...] leads to fewer false positives
Answer
precision


Original toplevel document

TfC_02_classification-PART_2