
#tensorflow #tensorflow-certificate

Preprocessing steps (preparing data for neural networks):

  1. Turn all data into numbers
  2. Make sure your tensors are in the right shape
  3. Scale features (normalize or standardize); neural networks tend to prefer normalization (a minimal sketch follows this list).
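A minimal sketch of these steps, assuming a pandas DataFrame (the file, column, and label names are hypothetical, not from the source notebook; the scikit-learn helpers mirror the ones the notebook borrows):

import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split

df = pd.read_csv("data.csv")   # hypothetical dataset
X = df.drop("target", axis=1)  # "target" is a hypothetical label column
y = df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 1: turn all data into numbers (one-hot encode categorical columns)
# Step 3: scale numeric features to [0, 1] (normalization)
ct = make_column_transformer(
    (MinMaxScaler(), ["age", "bmi"]),                      # hypothetical numeric columns
    (OneHotEncoder(handle_unknown="ignore"), ["region"]),  # hypothetical categorical column
)
ct.fit(X_train)                         # fit on training data only
X_train_normal = ct.transform(X_train)
X_test_normal = ct.transform(X_test)

# Step 2: check the tensors are in the right shape before training
print(X_train_normal.shape, X_test_normal.shape)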


Parent (intermediate) annotation

Preprocessing data (normalization and standardization). Preprocessing steps: turn all data into numbers; make sure your tensors are in the right shape; scale features (normalize or standardize). Neural networks tend to prefer normalization. Normalization - adjusting values measured on different scales to a notionally common scale

Original toplevel document

TfC_01_FINAL_EXAMPLE.ipynb
…ape # Create training and test datasets #my way: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) Preprocessing data (normalization and standardization). Preprocessing steps: turn all data into numbers; make sure your tensors are in the right shape; scale features (normalize or standardize). Neural networks tend to prefer normalization. Normalization - adjusting values measured on different scales to a notionally common scale. Normalization # Start from scratch import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf ## Borrow a few classes from scikit-learn from sklearn.compose import mak…




#tensorflow #tensorflow-certificate

Scale features (normalize or standardize)

Neural networks tend to prefer normalization.



Parent (intermediate) annotation

Preprocessing data (normalization and standardization). Preprocessing steps: turn all data into numbers; make sure your tensors are in the right shape; scale features (normalize or standardize). Neural networks tend to prefer normalization. Normalization - adjusting values measured on different scales to a notionally common scale

Original toplevel document

TfC_01_FINAL_EXAMPLE.ipynb
…ape # Create training and test datasets #my way: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) Preprocessing data (normalization and standardization). Preprocessing steps: turn all data into numbers; make sure your tensors are in the right shape; scale features (normalize or standardize). Neural networks tend to prefer normalization. Normalization - adjusting values measured on different scales to a notionally common scale. Normalization # Start from scratch import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf ## Borrow a few classes from scikit-learn from sklearn.compose import mak…




#tensorflow #tensorflow-certificate

Bag of tricks to improve model

3. Fit the model - more epochs, more data examples



Parent (intermediate) annotation

Bag of tricks to improve model: create model - more layers, more neurons, different activation; compile model - other loss, other optimizer, change optimizer parameters; fit the model - more epochs, more data examples
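As a hedged illustration of those three knobs in Keras (the layer sizes, learning rate, and epoch count below are arbitrary choices, not values from the notebook):

import tensorflow as tf

# 1. Create model - more layers, more neurons, a different activation
model = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation="relu"),  # more neurons
    tf.keras.layers.Dense(10, activation="relu"),   # an extra layer
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# 2. Compile model - another loss/optimizer, or tweaked optimizer parameters
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),  # changed learning rate
    loss=tf.keras.losses.binary_crossentropy,
    metrics=["accuracy"],
)

# 3. Fit the model - more epochs (X_train/y_train assumed to exist)
# history = model.fit(X_train, y_train, epochs=100)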

Original toplevel document

TfC_02_classification-PART_1
…nse(10, activation='relu'), tf.keras.layers.Dense(1, activation='sigmoid') ]) model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.binary_crossentropy, metrics=['accuracy']) Bag of tricks to improve model: create model - more layers, more neurons, different activation; compile model - other loss, other optimizer, change optimizer parameters; fit the model - more epochs, more data examples. # plots model predictions against true data import numpy as np def plot_decision_boundry(model, X, y): """ Take in a trained model, features and labels and create numpy.meshgrid of the d…




Flashcard 7674290638092

Tags
#tensorflow #tensorflow-certificate
Question

Scale features (normalize or standardize)

Neural networks tend to prefer [...].

Answer
normalization


Parent (intermediate) annotation

Scale features (normalize or standardize). Neural networks tend to prefer normalization.

Original toplevel document

TfC_01_FINAL_EXAMPLE.ipynb
…ape # Create training and test datasets #my way: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) Preprocessing data (normalization and standardization). Preprocessing steps: turn all data into numbers; make sure your tensors are in the right shape; scale features (normalize or standardize). Neural networks tend to prefer normalization. Normalization - adjusting values measured on different scales to a notionally common scale. Normalization # Start from scratch import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf ## Borrow a few classes from scikit-learn from sklearn.compose import mak…







Flashcard 7674293259532

Tags
#tensorflow #tensorflow-certificate
Question

Preprocessing steps (preparing data for neural networks):

  1. Turn all data into [...]
  2. Make sure your tensors are in the right shape
  3. Scale features (normalize or standardize); neural networks tend to prefer normalization.
Answer
numbers


Parent (intermediate) annotation

Preprocessing steps (preparing data for neural networks): turn all data into numbers; make sure your tensors are in the right shape; scale features (normalize or standardize). Neural networks tend to prefer normalization.

Original toplevel document

TfC_01_FINAL_EXAMPLE.ipynb
…ape # Create training and test datasets #my way: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) Preprocessing data (normalization and standardization). Preprocessing steps: turn all data into numbers; make sure your tensors are in the right shape; scale features (normalize or standardize). Neural networks tend to prefer normalization. Normalization - adjusting values measured on different scales to a notionally common scale. Normalization # Start from scratch import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf ## Borrow a few classes from scikit-learn from sklearn.compose import mak…







Flashcard 7674295094540

Tags
#has-images #tensorflow #tensorflow-certificate
Question
[...]
  • tf.keras.losses.MSE()
  • tf.metrics.mean_squared_error()
  • when larger errors are more significant than smaller errors
Answer
MSE


Parent (intermediate) annotation

MSE: tf.keras.losses.MSE(), tf.metrics.mean_squared_error() - when larger errors are more significant than smaller errors
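A small sketch showing both APIs on toy values (the numbers are made up):

import tensorflow as tf

y_true = tf.constant([3.0, 5.0, 2.5])
y_pred = tf.constant([2.5, 5.0, 4.0])

# Squaring the residuals makes large errors dominate the average,
# which is why MSE suits problems where big misses matter most.
print(tf.keras.losses.MSE(y_true, y_pred).numpy())
print(tf.metrics.mean_squared_error(y_true, y_pred).numpy())  # same value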

Original toplevel document

TfC 01 regression
…st_labels, c="green", label="Testing data") plt.scatter(test_data, predictions, c="red", label="Predictions") plt.legend(); Common regression evaluation metrics. For regression problems: MAE - tf.keras.losses.MAE(), tf.metrics.mean_absolute_error() - a great starter metric for any regression problem. MSE - tf.keras.losses.MSE(), tf.metrics.mean_squared_error() - when larger errors are more significant than smaller errors. Huber - tf.keras.losses.Huber() - a combination of MSE and MAE, less sensitive to outliers than MSE. Take away: you should minimize the time between your experiments (that's why you should start with smaller models). The more experiments you do, the more things you figure out that don't work. Tracking your experiments: one really good habit is to track the results of your experiments. There are tools to help us! Resource: try Tensorboard - a component of the TensorFlow library t…







Flashcard 7674297191692

Tags
#tensorflow #tensorflow-certificate
Question
[...] classification - a sample can be assigned to more than one label from more than 2 label options
Answer
Multilabel


Parent (intermediate) annotation

Multilabel classification - a sample can be assigned to more than one label from more than 2 label options
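A hedged sketch of what a multilabel head can look like in Keras (the label count and layer sizes are hypothetical): each output unit gets its own sigmoid, so any subset of labels can be active for one sample, with a binary-crossentropy decision per label.

import tensorflow as tf

NUM_LABELS = 5  # hypothetical number of labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),  # one independent yes/no per label
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.BinaryCrossentropy(),  # per-label binary decisions
    metrics=["accuracy"],
)
# Multiclass (exactly one label per sample) would instead end in a
# softmax layer with a categorical crossentropy loss.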

Original toplevel document

TfC_02_classification-PART_1
Types of classification problems. Three types of classification problems: binary classification, multiclass, multilabel. Multilabel classification - a sample can be assigned to more than one label from more than 2 label options. Multiclass classification - a sample can be assigned to one label but from more than 2 label options. Multiclass image classification: pizza, steak, sushi. Input_shape = [None, 224, 224, 3…







Flashcard 7674299026700

Tags
#tensorflow #tensorflow-certificate
Question

loss function:

  • In case of [...] the labels have to be one-hot encoded

Answer
categorical_crossentropy


Parent (intermediate) annotation

loss function: In case of categorical_crossentropy the labels have to be one-hot encoded
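A small sketch contrasting the two losses on toy values (the numbers are made up):

import tensorflow as tf

labels = tf.constant([0, 2, 1])          # integer class labels, 3 classes
probs = tf.constant([[0.8, 0.1, 0.1],
                     [0.2, 0.2, 0.6],
                     [0.1, 0.7, 0.2]])   # predicted probabilities

# categorical_crossentropy expects one-hot labels...
one_hot = tf.one_hot(labels, depth=3)
per_sample = tf.keras.losses.categorical_crossentropy(one_hot, probs)

# ...while SparseCategoricalCrossentropy takes the integers directly.
mean_loss = tf.keras.losses.SparseCategoricalCrossentropy()(labels, probs)
print(per_sample.numpy(), mean_loss.numpy())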

Original toplevel document

TfC_02_classification-PART_2
…y-axis -> true label, x-axis -> predicted label # Create confusion matrix from sklearn.metrics import confusion_matrix y_preds = model_8.predict(X_test) confusion_matrix(y_test, y_preds) important: this time there is a problem with the loss function. In case of categorical_crossentropy the labels have to be one-hot encoded. In case of labels as integers use SparseCategoricalCrossentropy. # Get the patterns of a layer in our network weights, biases = model_35.layers[1].get_weights()







Flashcard 7674300861708

Tags
#has-images #tensorflow #tensorflow-certificate
Question

Deep Learning mantras: ;)

[...] model: experiment
Evaluating model: visualize

Answer
Building


Parent (intermediate) annotation

Deep Learning mantras ;) Building model: experiment. Evaluating model: visualize.

Original toplevel document

TfC 01 regression
…more epochs, more data ### How? # from smaller model to larger model Evaluating models. Typical workflow: build a model -> fit it -> evaluate -> tweak -> fit -> evaluate -> ... Building model: experiment. Evaluating model: visualize. What can we visualize? The data, the model itself, the training of a model, predictions. ## The 3 sets (or actually 2 sets: training and test set) tf.random.set_seed(999) X_train, X_test = tf.spli…