Edited, memorised or added to reading queue

on 30-Apr-2024 (Tue)


Flashcard 7624642137356

Tags
#tensorflow #tensorflow-certificate
Question

# MNIST dataset - 10-class classification problem (digits 0-9)
model.compile(optimizer='adam', loss='[...]', metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=20, callbacks=[my_callback])
model.evaluate(test_images, test_labels)

Answer
sparse_categorical_crossentropy


Loss function for multiclass classification
# YOUR CODE STARTS HERE
# MNIST dataset
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=20, callbacks=[my_callback])
model.evaluate(test_images, test_labels)
# YOUR CODE ENDS HERE
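For context, a minimal runnable sketch of the full pipeline. The architecture and preprocessing below are assumptions, not from the card, and my_callback is omitted since its definition isn't shown:

import tensorflow as tf

# MNIST has 10 classes (digits 0-9) with integer labels, so
# sparse_categorical_crossentropy applies without one-hot encoding
(training_images, training_labels), (test_images, test_labels) = \
    tf.keras.datasets.mnist.load_data()
training_images, test_images = training_images / 255.0, test_images / 255.0

# Hypothetical architecture - any classifier ending in a 10-way softmax works
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=20)
model.evaluate(test_images, test_labels)

If the labels were one-hot encoded instead of integers, categorical_crossentropy would be the matching loss.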







Flashcard 7625620720908

Tags
#tensorflow #tensorflow-certificate
Question
y_test.shape, y_pred.shape, y_pred.reshape([...]).shape, tf.squeeze(y_pred).shape


(TensorShape([10]), (10, 1), (10,), TensorShape([10]))

Answer
(10, )


Different shapes of the tensors
y_test.shape, y_pred.shape, y_pred.reshape((10, )).shape, tf.squeeze(y_pred).shape

(TensorShape([10]), (10, 1), (10,), TensorShape([10]))
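A minimal sketch that reproduces these shapes, assuming y_test is a TensorFlow tensor and y_pred a NumPy array of shape (10, 1), e.g. as returned by model.predict:

import numpy as np
import tensorflow as tf

y_test = tf.range(10, dtype=tf.float32)                  # a TensorFlow tensor
y_pred = np.arange(10, dtype=np.float32).reshape(10, 1)  # a NumPy array

print((y_test.shape, y_pred.shape,
       y_pred.reshape((10,)).shape, tf.squeeze(y_pred).shape))
# (TensorShape([10]), (10, 1), (10,), TensorShape([10]))

reshape needs the target shape spelled out, while tf.squeeze removes every size-1 axis regardless of position.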







Flashcard 7625630420236

Tags
#tensorflow #tensorflow-certificate
Question
# Calculate MSE "by hand" in steps - identify functions

abs_err = tf.abs([...](tf.cast(y_test, dtype=tf.float32), tf.squeeze(y_pred)))
sq_abs_err = tf.multiply(abs_err, abs_err)
sq_abs_err
tf.math.reduce_mean(sq_abs_err)



<tf.Tensor: shape=(), dtype=float32, numpy=155.11417>

Answer
tf.subtract


Calculate MSE "by hand" in steps - identify functions
# Calculate MSE "by hand" in steps - identify functions
abs_err = tf.abs(tf.subtract(tf.cast(y_test, dtype=tf.float32), tf.squeeze(y_pred)))
sq_abs_err = tf.multiply(abs_err, abs_err)
sq_abs_err
tf.math.reduce_mean(sq_abs_err)

<tf.Tensor: shape=(), dtype=float32, numpy=155.11417>
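The step-by-step result can be cross-checked against the built-in loss. A sketch with made-up data standing in for the card's y_test and y_pred:

import tensorflow as tf

y_test = tf.constant([1, 2, 3])              # integer labels, as in the card
y_pred = tf.constant([[1.5], [2.5], [2.0]])  # shape (3, 1), e.g. from model.predict

abs_err = tf.abs(tf.subtract(tf.cast(y_test, dtype=tf.float32), tf.squeeze(y_pred)))
mse_by_hand = tf.math.reduce_mean(tf.multiply(abs_err, abs_err))

# Built-in equivalent; the squeeze matters, since (3,) vs (3, 1)
# would otherwise broadcast to (3, 3) and give a wrong answer
mse_builtin = tf.keras.losses.mean_squared_error(
    tf.cast(y_test, tf.float32), tf.squeeze(y_pred))

print(mse_by_hand.numpy(), mse_builtin.numpy())  # both 0.5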







Flashcard 7625774599436

Tags
#deep-learning #embeddings
Question
researchers have found the [...] can also be used in other domains like search and recommendations, where we can put latent meanings into the products to train the machine learning tasks through the use of neural networks
Answer
embedding

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

researchers have found the embedding can also be used in other domains like search and recommendations, where we can put latent meanings into the products to train the machine learning tasks through the use of neural networks
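As a concrete illustration of that idea, a minimal sketch mapping product IDs to trainable latent vectors with a Keras Embedding layer (catalogue size, dimensions, and IDs below are hypothetical):

import tensorflow as tf

NUM_PRODUCTS = 1000   # hypothetical catalogue size
EMBED_DIM = 32        # length of each product's latent vector

# Each product ID indexes a trainable 32-d vector; training the network
# moves products with similar behaviour close together in this space
product_embedding = tf.keras.layers.Embedding(NUM_PRODUCTS, EMBED_DIM)

vectors = product_embedding(tf.constant([3, 17, 42]))
print(vectors.shape)  # (3, 32)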

Original toplevel document (pdf)








#deep-learning #keras #lstm #python #sequence
The goal of the backpropagation training algorithm is to modify the weights of a neural network in order to minimize the error of the network outputs compared to some expected output in response to corresponding inputs.


Parent (intermediate) annotation

The goal of the backpropagation training algorithm is to modify the weights of a neural network in order to minimize the error of the network outputs compared to some expected output in response to corresponding inputs. It is a supervised learning algorithm that allows the network to be corrected with regard to the specific errors made. The general algorithm is as follows: 1. Present a training input p
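As a minimal numerical sketch of that goal, one linear neuron trained by gradient descent; the data and learning rate are made up for illustration:

import tensorflow as tf

w = tf.Variable(0.5)                         # the weight to be modified
x, target = tf.constant(2.0), tf.constant(3.0)

for step in range(50):
    with tf.GradientTape() as tape:
        output = w * x                       # network output
        error = (output - target) ** 2       # squared error vs expected output
    grad = tape.gradient(error, w)           # backpropagated gradient
    w.assign_sub(0.1 * grad)                 # weight update that reduces the error

print(w.numpy())  # converges to 1.5, where w * x equals the target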

Original toplevel document (pdf)
