Edited, memorised or added to reading queue

on 09-Dec-2025 (Tue)


Flashcard 7775822679308

Tags
#deep-learning #keras #lstm #python #sequence
Question
Increasing the [...] of the network provides an alternative solution that requires fewer neurons and trains faster
Answer
depth


Parent (intermediate) annotation

Increasing the depth of the network provides an alternative solution that requires fewer neurons and trains faster

Original toplevel document: pdf (not viewable here)
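
A minimal sketch of the idea in Keras (not from the card's source pdf; the layer sizes are illustrative assumptions): stacking two small LSTM layers via return_sequences=True gives far fewer parameters than one wide layer, which the two summary() calls make visible.

# Hypothetical sizes, for illustration only.
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import LSTM, Dense

timesteps, features = 50, 10   # assumed input shape

# Wide and shallow: one large recurrent layer (~273k parameters)
wide = Sequential([
    Input(shape=(timesteps, features)),
    LSTM(256),
    Dense(1),
])

# Deep and narrow: stacked smaller layers (~52k parameters);
# return_sequences=True passes the full sequence on to the next LSTM
deep = Sequential([
    Input(shape=(timesteps, features)),
    LSTM(64, return_sequences=True),
    LSTM(64),
    Dense(1),
])

wide.summary()   # compare the total parameter counts
deep.summary()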

Flashcard 7775826087180

Tags
#tensorflow #tensorflow-certificate
Question

[...] function:

  • In case of categorical_crossentropy the labels have to be one-hot encoded

Answer
loss


Parent (intermediate) annotation

loss function: In case of categorical_crossentropy the labels have to be one-hot encoded

Original toplevel document

TfC_02_classification-PART_2
y-axis -> true label, x-axis -> predicted label

# Create confusion matrix
from sklearn.metrics import confusion_matrix
y_preds = model_8.predict(X_test).argmax(axis=1)  # convert predicted probabilities to class labels
confusion_matrix(y_test, y_preds)

important: This time there is a problem with the loss function. In case of categorical_crossentropy the labels have to be one-hot encoded; in case of integer labels use SparseCategoricalCrossentropy.

# Get the patterns of a layer in our network
weights, biases = model_35.layers[1].get_weights()
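
A minimal sketch of both label formats, with assumed toy data and model: to_categorical one-hot encodes integer labels for categorical_crossentropy, while sparse_categorical_crossentropy consumes the integer labels directly.

import numpy as np
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical

X = np.random.rand(100, 8)             # dummy features (assumed shape)
y_int = np.random.randint(0, 3, 100)   # integer labels: 0, 1, 2

model = Sequential([
    Input(shape=(8,)),
    Dense(16, activation="relu"),
    Dense(3, activation="softmax"),
])

# Option 1: one-hot encode the labels for categorical_crossentropy
y_onehot = to_categorical(y_int, num_classes=3)   # shape (100, 3)
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y_onehot, epochs=1, verbose=0)

# Option 2: keep integer labels and use SparseCategoricalCrossentropy
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y_int, epochs=1, verbose=0)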

Flashcard 7775827660044

Tags
#deep-learning #keras #lstm #python #sequence
Question
Long Short-Term Memory (LSTM) is an RNN architecture specifically designed to address the [...] problem
Answer
vanishing gradient


Parent (intermediate) annotation

Long Short-Term Memory (LSTM) is an RNN architecture specifically designed to address the vanishing gradient problem

Original toplevel document: pdf (not viewable here)
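
A minimal sketch that makes the problem visible (toy shapes and randomly initialised layers, an assumption rather than anything from the pdf): with tf.GradientTape, the gradient flowing from the final output back to the earliest timestep is typically orders of magnitude smaller in a SimpleRNN than in an LSTM.

import tensorflow as tf

T = 100                                    # a long sequence
x = tf.random.normal((1, T, 4))            # assumed toy input

for layer in (tf.keras.layers.SimpleRNN(32), tf.keras.layers.LSTM(32)):
    with tf.GradientTape() as tape:
        tape.watch(x)                      # x is a constant, so watch it
        y = layer(x)                       # final hidden state, shape (1, 32)
    g = tape.gradient(y, x)                # sensitivity of y to each timestep
    print(type(layer).__name__,
          "grad norm at t=0:", float(tf.norm(g[0, 0])),
          " at t=T-1:", float(tf.norm(g[0, -1])))

With random weights, the SimpleRNN's gradient at t=0 usually decays toward zero, while the LSTM's cell state tends to keep a usable signal across the sequence.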

Flashcard 7775829495052

Tags
#recurrent-neural-networks #rnn
Question
The name, often shortened to [...], comes from the fact that these models can translate a sequence of input elements into a sequence of outputs.
Answer
seq2seq


Parent (intermediate) annotation

Open it
The name, often shortened to seq2seq, comes from the fact that these models can translate a sequence of input elements into a sequence of outputs.

Original toplevel document: pdf (not viewable here)
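
A minimal encoder-decoder ("seq2seq") sketch in Keras; the vocabulary and dimension sizes are illustrative assumptions. The encoder compresses the input sequence into its final LSTM states, and the decoder, initialised with those states, emits the output sequence.

from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

vocab_in, vocab_out, dim = 1000, 1000, 64   # assumed sizes

# Encoder: read the input sequence, keep only the final states
enc_in = Input(shape=(None,))
enc_emb = Embedding(vocab_in, dim)(enc_in)
_, state_h, state_c = LSTM(dim, return_state=True)(enc_emb)

# Decoder: generate the output sequence, seeded with the encoder states
dec_in = Input(shape=(None,))
dec_emb = Embedding(vocab_out, dim)(dec_in)
dec_seq, _, _ = LSTM(dim, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
dec_out = Dense(vocab_out, activation="softmax")(dec_seq)

model = Model([enc_in, dec_in], dec_out)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()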