Edited, memorised or added to reading queue

on 07-May-2024 (Tue)


Flashcard 7624074333452

Tags
#tensorflow #tensorflow-certificate
Question

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
import numpy as np

model = Sequential(Dense(1, input_shape=[...]))
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([1,5,12,-1,10], dtype=float)
ys = np.array([5,13,27,1,23], dtype=float)
model.fit(xs, ys, epochs=500)
model.predict(x=[15])

Answer
[1]


Tensorflow basics - typical flow of model building







Flashcard 7625631468812

Tags
#tensorflow #tensorflow-certificate
Question
# Calculate MSE "by hand" in steps - identify functions

abs_err = tf.abs(tf.subtract(tf.cast(y_test, dtype=tf.float32), tf.squeeze(y_pred)))
sq_abs_err = tf.multiply(abs_err, abs_err)
sq_abs_err
tf.[...](sq_abs_err)  # mean squared error



<tf.Tensor: shape=(), dtype=float32, numpy=155.11417>

Answer
math.reduce_mean


Calculate MSE "by hand" in steps - identify functions







TfC 01 regression
#has-images #tensorflow #tensorflow-certificate

#### How we can improve the model

# 1. Creating model: add more layers, increase numbers of hidden neurons, change activation functions

# 2. Compiling: change the optimizer or its parameters (e.g. learning rate)

# 3. Fitting: more epochs, more data

### How?

# from smaller model to larger model
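
A hedged sketch of what moving "from smaller model to larger model" could look like in code (layer sizes, optimizer settings and epoch count are illustrative, not from the source; training data X_train / y_train is assumed to exist):

# Hypothetical larger model: more layers, more hidden neurons,
# a different optimizer/learning rate, and more epochs
model_2 = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1)
])
model_2.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                loss='mae')
model_2.fit(X_train, y_train, epochs=100, verbose=0)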

Evaluating models

Typical workflow: build a model -> fit it -> evaluate -> tweak -> fit -> evaluate -> ...

Building a model: experiment. Evaluating a model: visualize.

What can we visualize?

  • the data
  • the model itself
  • the training of a model
  • predictions

## The 3 sets (or actually 2 sets: training and test set)

# Set the global seed so the shuffle below is reproducible
tf.random.set_seed(999)

# Shuffle the samples, then split them into 40 training and 10 test samples
X_train, X_test = tf.split(tf.random.shuffle(X, seed=42), num_or_size_splits=[40, 10])
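
Shuffling X on its own would break its pairing with y. A minimal sketch (assuming X and y each hold the 50 samples implied by the 40/10 split) that keeps features and labels aligned by shuffling a single set of indices:

# Shuffle one permutation of row indices, then apply it to both X and y
idx = tf.random.shuffle(tf.range(tf.shape(X)[0]), seed=42)
X_train, X_test = tf.split(tf.gather(X, idx), num_or_size_splits=[40, 10])
y_train, y_test = tf.split(tf.gather(y, idx), num_or_size_splits=[40, 10])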

def plot_predictions(train_data = X_train,
                     train_labels = y_train,
                     test_data = X_test,
                     test_labels = y_test,
                     predictions = y_pred):
  """
  Plots training data, testing_data
  """
  plt.figure(figsize=(10, 7))
  plt.scatter(train_data, train_labels, c="blue", label='Training data')
  plt.scatter(test_data, test_labels, c="green", label="Testing data")
  plt.scatter(test_data, predictions, c="red", label="Predictions")
  plt.legend();
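
A hedged usage sketch (assumes a model has already been fit, so predictions can be made on X_test):

# Hypothetical usage: predict on the test set and visualize against the true labels
y_pred = model.predict(X_test)
plot_predictions(train_data=X_train, train_labels=y_train,
                 test_data=X_test, test_labels=y_test,
                 predictions=y_pred)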

Common regression evaluation metrics


Introduction


For regression problems (a small worked example follows this list):

  • MAE
    • tf.keras.losses.MAE()
    • tf.metrics.mean_absolute_error()
    • a great starter metric for any regression problem
  • MSE
    • tf.keras.losses.MSE()
    • tf.metrics.mean_squared_error()
    • when larger errors are more significant than smaller errors
  • Huber
    • tf.keras.losses.Huber()
    • combination of MSE and MAE; less sensitive to outliers than MSE
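
A small worked example of these metrics on hypothetical y_true / y_pred values (numbers are illustrative):

import tensorflow as tf

y_true = tf.constant([5., 13., 27., 1., 23.])
y_pred = tf.constant([6., 12., 25., 0., 24.])

mae = tf.keras.losses.MAE(y_true, y_pred)        # mean absolute error
mse = tf.keras.losses.MSE(y_true, y_pred)        # mean squared error
huber = tf.keras.losses.Huber()(y_true, y_pred)  # Huber loss (class-based API)
print(mae.numpy(), mse.numpy(), huber.numpy())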

Take away: You should minimize the time between your experiments (that's why you should start with smaller models). The more experiments you do, the more things you figure out that don't work.

Tracking your experiments

One really good habit is to track the results of your experiments. There are tools to help us!

Resources to try:

  • TensorBoard - a component of the TensorFlow library that helps track modelling experiments
  • Weights & Biases

Saving and loading models

Two formats:

  • SavedModel format (including the optimizer's state)
  • HDF5 format

What about TensorFlow Serving format?

# Save the entire model using SavedModel
model_3.save("best_model_3_SavedModel")
# SavedModel is in principle a protobuf (.pb) file

# Save model in HDF5 format:
model_3.save("best_model_3_HDF5.h5")

Load model

loaded_model_SM = tf.keras.models.load_model('/content/best_model_3_SavedModel')
loaded_model_SM.summary()
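
A hedged counterpart for loading the HDF5 file saved above (the path is assumed to mirror the SavedModel one):

loaded_model_H5 = tf.keras.models.load_model('/content/best_model_3_HDF5.h5')
loaded_model_H5.summary()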





TfC_01_FINAL_EXAMPLE.ipynb
#tensorflow #tensorflow-certificate

Getting dataset ready for tensorflow

  1. Converting non-numerical columns

For example: Use pandas get_dummies() function

insurance_one_hot = pd.get_dummies(insurance, dtype="int32")  # int32 to avoid bool columns, which cause problems with model fitting in TensorFlow
insurance_one_hot

# Create X and y values (features and labels)
y = insurance_one_hot['charges']
X = insurance_one_hot.drop('charges', axis=1)

#y = y.values  # This is not necessary
#X = X.values
#X, y, X.shape, y.shape

# Create training and test datasets
#my way:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=42)

Preprocessing data (normalization and standardization)

Preprocessing steps:

  1. Turn all data into numbers
  2. Make sure your tensors are in the right shape
  3. Scale features (normalize or standardize). Neural networks tend to prefer normalization.

Normalization - adjusting values measured on different scales to a notionally common scale

Normalization

# Start from scratch
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

## Borrow a few classes from scikit-learn
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split

#Create column transformer
ct = make_column_transformer((MinMaxScaler(), ['age', 'bmi', 'children']), # turn all values in these columns between 0 and 1
                             (OneHotEncoder(handle_unknown='ignore', dtype="int32"), ['sex', 'smoker', 'region']))

# Create X and y
X = insurance.drop('charges', axis=1)
y = insurance['charges']

# Split datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit the column transformer on training data and apply to both datasets (train and test)
ct.fit(X_train)

# Transform data
X_train_normalize = ct.transform(X_train)
X_test_normalize = ct.transform(X_test)
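
A minimal follow-up sketch (architecture, loss and epoch count are illustrative, not from the source) of fitting a model on the normalized data:

# Hypothetical model fit on the transformed features
insurance_model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(1)
])
insurance_model.compile(optimizer='adam', loss='mae', metrics=['mae'])
insurance_model.fit(X_train_normalize, y_train, epochs=100, verbose=0)
insurance_model.evaluate(X_test_normalize, y_test)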





TfC_01_ADDITIONAL_01_Abalone.ipynb
#tensorflow #tensorflow-certificate

Preprocessing data

ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough")  # leave the other columns unchanged
ct.fit(X_train)

X_train_transformed = ct.transform(X_train)
X_test_transformed = ct.transform(X_test)

Predictions

valuation_predicts = model.predict(X_valuation_transformed)

(array([[ 9.441547],
        [10.451973],
        [10.48082 ],
        ...,
        [10.401164],
        [13.13452 ],
        [ 8.081818]], dtype=float32),
 (60411, 1))

valuation_predicts_squeezed = tf.squeeze(valuation_predicts)

submitt_data = pd.DataFrame({'id': data_test['id'],
                             'Rings': valuation_predicts_squeezed})

# Make sure the minimum number of rings is 1
submitt_data.loc[submitt_data['Rings'] < 1, 'Rings'] = 1
submitt_data
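
A hedged final step (the filename is assumed, not from the source) to write the submission file:

# Hypothetical: write the submission DataFrame to CSV without the index column
submitt_data.to_csv('submission.csv', index=False)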









Flashcard 7626432843020

Tags
#tensorflow #tensorflow-certificate
Question

from tensorflow.keras.utils import plot_model

plot_model(model, [...]=True)

Answer
show_shapes









Flashcard 7626434940172

Tags
#tensorflow #tensorflow-certificate
Question
# Calculate MSE "by hand" in steps - identify functions

abs_err = tf.abs(tf.subtract(tf.cast(y_test, dtype=tf.float32), tf.squeeze(y_pred)))
sq_abs_err = [...](abs_err, abs_err)
sq_abs_err
tf.math.reduce_mean(sq_abs_err)



<tf.Tensor: shape=(), dtype=float32, numpy=155.11417>

Answer
tf.multiply


Calculate MSE "by hand" in steps - identify functions







Flashcard 7626436775180

Tags
#conv2D #convolution #tensorflow #tensorflow-certificate
Question
Step 1 is to gather the data. You'll notice that there's a bit of a change here in that the training data needed to be reshaped. That's because the first convolution expects a single tensor containing everything, so instead of 60,000 28x28x1 items in a list, we have a single [...dimensions?] list that is 60,000x28x28x1,
Answer
4D




Convolution Neural Network - introduction







Flashcard 7626438610188

Tags
#tensorflow #tensorflow-certificate
Question
# [...] can be indexed just like Python lists.

# Get the first 2 elements of each dimension
A[:2, :2, :2, :2]

Answer
Tensors


Tensors indexing
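For context, a minimal sketch (tensor A and its shape are hypothetical) of the indexing shown on this card:

import tensorflow as tf

A = tf.random.uniform(shape=(3, 4, 4, 2))  # hypothetical rank-4 tensor
A[:2, :2, :2, :2].shape                    # TensorShape([2, 2, 2, 2])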







Flashcard 7626440445196

Tags
#tensorflow #tensorflow-certificate
Question
#### Def some functions to calculate losses (mae, mse)

def evaluate_mae(y_true, y_pred):
  return tf.keras.[...].mean_absolute_error(y_true = y_true,
                                      y_pred = tf.squeeze(y_pred))

Answer
losses









Flashcard 7626441755916

Tags
#tensorflow #tensorflow-certificate
Question
From tensorflow version 2.7.0 model.fit() no longer automatically upscales inputs from shape (batch_size, ) to ([...]).
Answer
batch_size, 1


Changes in tensorflow







Flashcard 7626443590924

Tags
#tensorflow #tensorflow-certificate
Question

another_matrix = tf.constant([[10. ,66.],
                              [5. , 9.],
                              [13. , 4.]], dtype=tf.float16)
another_matrix

Out:
<tf.Tensor: shape=([...]), dtype=float16, numpy=
array([[10., 66.],
       [ 5.,  9.],
       [13.,  4.]], dtype=float16)>

Answer
3, 2


Tensorflow fundamentals