Edited, memorised or added to reading queue

on 07-Jan-2025 (Tue)


Flashcard 7674112118028

Tags
#tensorflow #tensorflow-certificate
Question
Confusion matrix
x-axis -> [...] label
Answer
predicted

Status: not learned; measured difficulty: 37% [default]; repetition number in this series: 0; last interval, memorised on, scheduled repetition, and last repetition or drill not yet recorded.

Parent (intermediate) annotation

Confusion matrix x-axis -> predicted label

Original toplevel document

TfC_02_classification-PART_2
…leads to fewer false negatives. Tradeoff between recall and precision. F1-score: combination of precision and recall, usually a good overall metric for classification models.
Confusion matrix: can be hard to use with large numbers of classes. y-axis -> true label, x-axis -> predicted label.
# Create confusion matrix
from sklearn.metrics import confusion_matrix
y_preds = model_8.predict(X_test)
confusion_matrix(y_test, y_preds)
Important: this time there is a problem with the loss function. With categorical_crossentropy the labels have to be one-hot encoded; with integer labels use SparseCategoricalCrossentropy.
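To make the excerpt concrete, here is a minimal sketch of building and plotting a confusion matrix with scikit-learn and Matplotlib. It assumes a trained multi-class classifier and held-out test data; the names model, X_test and y_test are placeholders, and the argmax step assumes the model outputs class probabilities. ConfusionMatrixDisplay puts the true label on the y-axis and the predicted label on the x-axis, which is exactly the mapping this flashcard asks about.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Assumed: `model` is a trained multi-class classifier, X_test/y_test are the
# held-out features and integer labels (placeholder names).
y_probs = model.predict(X_test)           # class probabilities, shape (n, n_classes)
y_preds = np.argmax(y_probs, axis=1)      # convert probabilities to integer class labels

cm = confusion_matrix(y_test, y_preds)    # rows = true label, columns = predicted label

# y-axis -> true label, x-axis -> predicted label
ConfusionMatrixDisplay(cm).plot()
plt.show()

# Note from the excerpt: with integer labels compile the model with
# SparseCategoricalCrossentropy; with one-hot encoded labels use
# CategoricalCrossentropy.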







Flashcard 7674113953036

Tags
#has-images #tensorflow #tensorflow-certificate
[unknown IMAGE 7626420784396]
Question


def plot_predictions(train_data = X_train, train_labels = y_train, test_data = X_test, test_labels = y_test, predictions = y_pred): 
   """ Plots training data, testing_data """ 
   plt.figure(figsize=(10, 7)) 
   plt.scatter(train_data, train_labels, c="blue", label='Training data') 
   plt.scatter(test_data, test_labels, c="green", label="Testing data") 
   plt.scatter(test_data, predictions, c="red", label="Predictions") 
   plt.[...];


Answer
legend()

Status: not learned; measured difficulty: 37% [default]; repetition number in this series: 0; last interval, memorised on, scheduled repetition, and last repetition or drill not yet recorded.

Parent (intermediate) annotation

…plt.scatter(train_data, train_labels, c="blue", label='Training data')
plt.scatter(test_data, test_labels, c="green", label="Testing data")
plt.scatter(test_data, predictions, c="red", label="Predictions")
plt.legend();

Original toplevel document

TfC 01 regression
…ing of a model's predictions
## The 3 sets (or actually 2 sets: training and test set)
tf.random.set_seed(999)
X_train, X_test = tf.split(tf.random.shuffle(X, seed=42), num_or_size_splits=[40, 10])

def plot_predictions(train_data=X_train, train_labels=y_train, test_data=X_test, test_labels=y_test, predictions=y_pred):
    """ Plots training data, testing_data """
    plt.figure(figsize=(10, 7))
    plt.scatter(train_data, train_labels, c="blue", label='Training data')
    plt.scatter(test_data, test_labels, c="green", label="Testing data")
    plt.scatter(test_data, predictions, c="red", label="Predictions")
    plt.legend();

Common regression evaluation metrics
Introduction: for regression problems, MAE (tf.keras.losses.MAE(), tf.metrics.mean_absolute_error()) is a great starter metric for any reg…
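For context, below is a self-contained sketch of the same plotting helper, using synthetic NumPy data in place of the notebook's TensorFlow tensors; the data values and the simple "prediction" line are assumptions made only so the function runs end to end and demonstrates plt.legend().

import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-ins for the notebook's X_train, y_train, X_test, y_test, y_pred.
X = np.arange(-10, 10, 0.4)               # 50 points
y = X + 10                                # a simple linear relationship
X_train, y_train = X[:40], y[:40]         # 40/10 split, as in the excerpt
X_test, y_test = X[40:], y[40:]
y_pred = X_test + 9.5                     # pretend model predictions (illustrative)

def plot_predictions(train_data=X_train, train_labels=y_train,
                     test_data=X_test, test_labels=y_test,
                     predictions=y_pred):
    """Plots training data, testing data and predictions."""
    plt.figure(figsize=(10, 7))
    plt.scatter(train_data, train_labels, c="blue", label="Training data")
    plt.scatter(test_data, test_labels, c="green", label="Testing data")
    plt.scatter(test_data, predictions, c="red", label="Predictions")
    plt.legend()                          # the cloze answer: legend() shows each scatter's label
    plt.show()

plot_predictions()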







Flashcard 7674116050188

Tags
#pytest #python #unittest
Question
assert 0.1 + 0.1 + 0.1 == 0.3, "Usual way to compare does not always work with [...]!"
Answer
floats

Status: not learned; measured difficulty: 37% [default]; repetition number in this series: 0; last interval, memorised on, scheduled repetition, and last repetition or drill not yet recorded.

Parent (intermediate) annotation

assert 0.1 + 0.1 + 0.1 == 0.3, "Usual way to compare does not always work with floats!"

Original toplevel document

Beware of float return values! 0.1 + 0.1 + 0.1 == 0.3 is sometimes false.
assert 0.1 + 0.1 + 0.1 == 0.3, "Usual way to compare does not always work with floats!"
Instead use:
assert 0.1 + 0.1 + 0.1 == pytest.approx(0.3)
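A minimal pytest sketch of the same point (the test names are illustrative): the binary representation of 0.1 makes the plain comparison fail, while pytest.approx compares within a small tolerance, so both tests below should pass when run with pytest.

import pytest

def test_plain_float_comparison():
    # 0.1 cannot be represented exactly in binary floating point, so the sum
    # is 0.30000000000000004 and the plain comparison evaluates to False.
    assert not (0.1 + 0.1 + 0.1 == 0.3)

def test_approx_comparison():
    # pytest.approx compares within a relative tolerance (1e-6 by default).
    assert 0.1 + 0.1 + 0.1 == pytest.approx(0.3)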







Flashcard 7674118147340

Tags
#recurrent-neural-networks #rnn
Question
Each [...] is generated by drawing a sample from the multinomial output distribution calculated by the bottom network layer; our model therefore does not produce point or interval estimates; each output is a simulated draw.
Answer
prediction

Status: not learned; measured difficulty: 37% [default]; repetition number in this series: 0; last interval, memorised on, scheduled repetition, and last repetition or drill not yet recorded.

Parent (intermediate) annotation

Open it
Each prediction is generated by drawing a sample from the multinomial output distribution calculated by the bottom network layer; our model therefore does not produce point or interval estimates, each

Original toplevel document (pdf, not shown)
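To illustrate the idea in this flashcard, here is a small NumPy sketch (the probability values are made up): instead of returning a point estimate such as the argmax, each prediction is drawn from the categorical/multinomial distribution defined by the network's output layer, so repeated calls give different simulated draws.

import numpy as np

rng = np.random.default_rng(42)

# Assume the network's final softmax layer produced these class probabilities
# for one output step (values are illustrative).
probs = np.array([0.05, 0.10, 0.60, 0.25])

# A point estimate would be the most likely class:
point_estimate = np.argmax(probs)

# The flashcard's approach instead samples from the multinomial output
# distribution, so each prediction is a simulated draw.
one_draw = rng.choice(len(probs), p=probs)

# Repeating the draw yields a spread of simulated trajectories rather than a
# single point or interval estimate.
many_draws = rng.choice(len(probs), size=10, p=probs)

print(point_estimate, one_draw, many_draws)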