Edited, memorised or added to reading queue

on 19-May-2024 (Sun)


Flashcard 7625622031628

Tags
#tensorflow #tensorflow-certificate
Question
y_test.shape, y_pred.shape, y_pred.reshape((10, )).shape, tf.squeeze(y_pred).shape


(TensorShape([10]), (10, 1), (10,), [...])

Answer
TensorShape([10])


Different shapes of the tensors
y_test.shape, y_pred.shape, y_pred.reshape((10, )).shape, tf.squeeze(y_pred).shape
(TensorShape([10]), (10, 1), (10,), TensorShape([10]))
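A minimal runnable sketch of the same shape checks; the dummy tensors below (a tf.range for y_test and a zero NumPy array for y_pred) are stand-ins for the notebook's actual data:

import numpy as np
import tensorflow as tf

y_test = tf.range(10, dtype=tf.float32)        # TensorShape([10])
y_pred = np.zeros((10, 1), dtype=np.float32)   # (10, 1), as returned by model.predict()

print(y_test.shape)                  # TensorShape([10])
print(y_pred.shape)                  # (10, 1)
print(y_pred.reshape((10,)).shape)   # (10,) - explicit NumPy reshape
print(tf.squeeze(y_pred).shape)      # TensorShape([10]) - drops every size-1 dimension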







Flashcard 7626518826252

Tags
#tensorflow #tensorflow-certificate
Question
Preprocessing data

ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="[...]")  # other columns unchanged
ct.fit(X_train)
X_train_transformed = ct.transform(X_train)
X_test_transformed = ct.transform(X_test)
Answer
passthrough


Parent (intermediate) annotation

Preprocessing data ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough") # other columns unchanged ct.fit(X_train) X_train_transformed = ct.transform(X_train) X_test_transformed = ct.transform(X_test)

Original toplevel document

TfC_01_ADDITIONAL_01_Abalone.ipynb
Preprocessing data ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough") # other columns unchanged ct.fit(X_train) X_train_transformed = ct.transform(X_train) X_test_transformed = ct.transform(X_test) Predictions valuation_predicts = model.predict(X_valuation_transformed) (array([[ 9.441547], [10.451973], [10.48082 ], ..., [10.401164], [13.13452 ], [ 8.081818]], dtype=float32), (6041
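A self-contained sketch of the same preprocessing pattern; the toy DataFrame below is an assumption standing in for the abalone data used in the notebook:

import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder

X_train = pd.DataFrame({"Sex": ["M", "F", "I", "M"], "Length": [0.45, 0.35, 0.53, 0.44]})
X_test = pd.DataFrame({"Sex": ["F", "I"], "Length": [0.38, 0.33]})

# one-hot encode 'Sex'; every other column passes through unchanged
ct = make_column_transformer((OneHotEncoder(dtype="int32"), ["Sex"]), remainder="passthrough")
ct.fit(X_train)                               # learn the categories on the training set only
X_train_transformed = ct.transform(X_train)
X_test_transformed = ct.transform(X_test)     # reuse the fitted encoder on the test set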







Flashcard 7626520923404

Tags
#tensorflow #tensorflow-certificate
Question

Preprocessing data

ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough")  # other columns unchanged
ct.[...](X_train) 
X_train_transformed = ct.transform(X_train)
X_test_transformed = ct.transform(X_test)
Answer
fit


Parent (intermediate) annotation

Preprocessing data ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough") # other columns unchanged ct.fit(X_train) X_train_transformed = ct.transform(X_train) X_test_transformed = ct.transform(X_test)

Original toplevel document

TfC_01_ADDITIONAL_01_Abalone.ipynb
Preprocessing data ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough") # other columns unchanged ct.fit(X_train) X_train_transformed = ct.transform(X_train) X_test_transformed = ct.transform(X_test) Predictions valuation_predicts = model.predict(X_valuation_transformed) (array([[ 9.441547], [10.451973], [10.48082 ], ..., [10.401164], [13.13452 ], [ 8.081818]], dtype=float32), (6041







#has-images #tensorflow #tensorflow-certificate

Deep Learning mantras: ;)

Building model: experiment
Evaluation model: visualize


TfC 01 regression
more epochs, more data ### How? # from smaller model to larger model Evaluating models Typical workflow: build a model -> fit it -> evaluate -> tweak -> fit -> evaluate -> .... Building model: experiment Evaluation model: visualize What can visualize? the data model itself the training of a model predictions ## The 3 sets (or actually 2 sets: training and test set) tf.random.set_seed(999) X_train, X_test = tf.spli




Flashcard 7627463068940

Tags
#deep-learning #keras #lstm #python #sequence
Question

For example, we can define an LSTM hidden layer with 2 memory cells followed by a Dense output layer with 1 neuron as follows:

model = Sequential()
model.add([...])
model.add(Dense(1))

Answer
LSTM(2)


Parent (intermediate) annotation

For example, we can define an LSTM hidden layer with 2 memory cells followed by a Dense output layer with 1 neuron as follows: model = Sequential() model.add(LSTM(2)) model.add(Dense(1))

Original toplevel document (pdf)

cannot see any pdfs
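Since the source PDF is not shown, here is a minimal runnable sketch of the layer stack described in this card; the input_shape of (10, 1) (10 timesteps, 1 feature) is an assumption added only so the model can be built:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(2, input_shape=(10, 1)))  # LSTM hidden layer with 2 memory cells
model.add(Dense(1))                      # Dense output layer with 1 neuron
model.compile(loss="mse", optimizer="adam")
model.summary()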







Flashcard 7627466476812

Tags
#tensorflow #tensorflow-certificate
Question

import tensorflow as tf

# stop training after reaching accuracy of 0.99
class MyCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, [...]):
    if logs.get('accuracy')>=0.99:
      print('\nAccuracy 0.99 achieved')
      self.model.stop_training = True

Answer
logs={}


Tensorflow - callbacks
import tensorflow as tf # stop training after reaching accuracy of 0.99 class MyCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs={}): if logs.get('accuracy')>=0.99: print('\nAccuracy 0.99 achieved') self.model.stop_training = True
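A sketch of how such a callback is typically wired into training; the tiny model and random data below are placeholders, not the notebook's:

import numpy as np
import tensorflow as tf

class MyCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        # logs holds the metrics computed at the end of this epoch
        if logs.get('accuracy') >= 0.99:
            print('\nAccuracy 0.99 achieved')
            self.model.stop_training = True

model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(4,))])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(64, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")
model.fit(X, y, epochs=50, callbacks=[MyCallback()], verbose=0)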







Flashcard 7627468311820

Tags
#tensorflow #tensorflow-certificate
Question

Preprocessing data

ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough")  # other columns unchanged
ct.fit([...]) 
X_train_transformed = ct.transform(X_train)
X_test_transformed = ct.transform(X_test)
Answer
X_train


Parent (intermediate) annotation

Preprocessing data ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough") # other columns unchanged ct.fit(X_train) X_train_transformed = ct.transform(X_train) X_test_transformed = ct.transform(X_test)

Original toplevel document

TfC_01_ADDITIONAL_01_Abalone.ipynb
Preprocessing data ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough") # other columns unchanged ct.fit(X_train) X_train_transformed = ct.transform(X_train) X_test_transformed = ct.transform(X_test) Predictions valuation_predicts = model.predict(X_valuation_transformed) (array([[ 9.441547], [10.451973], [10.48082 ], ..., [10.401164], [13.13452 ], [ 8.081818]], dtype=float32), (6041







Flashcard 7627469360396

Tags
#tensorflow #tensorflow-certificate
Question

Preprocessing data

ct = make_column_transformer(([...](dtype="int32"), ['Sex']), remainder="passthrough")  # other columns unchanged
ct.fit(X_train) 
X_train_transformed = ct.transform(X_train)
X_test_transformed = ct.transform(X_test)
Answer
OneHotEncoder


Parent (intermediate) annotation

Preprocessing data ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough") # other columns unchanged ct.fit(X_train) X_train_transformed = ct.transform(X_train) X_test_transformed = ct.transform(X_test)

Original toplevel document

TfC_01_ADDITIONAL_01_Abalone.ipynb
Preprocessing data ct = make_column_transformer((OneHotEncoder(dtype="int32"), ['Sex']), remainder="passthrough") # other columns unchanged ct.fit(X_train) X_train_transformed = ct.transform(X_train) X_test_transformed = ct.transform(X_test) Predictions valuation_predicts = model.predict(X_valuation_transformed) (array([[ 9.441547], [10.451973], [10.48082 ], ..., [10.401164], [13.13452 ], [ 8.081818]], dtype=float32), (6041







#tensorflow #tensorflow-certificate
F1-score

Combination of precision and recall, usually a good overall metric for classification models.
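A quick illustration with scikit-learn on made-up labels (the values are illustrative only):

from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1]
print(f1_score(y_true, y_pred))  # harmonic mean of precision and recall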


TfC_02_classification-PART_2
anced classes Precision For imbalanced class problems. Higher precision leads to less false positives. Recall Higher recall leads to less false negatives. Tradeoff between recall and precision. F1-score Combination of precision and recall, usually a good overall metric for classification models. Confusion matrix Can be hard to use with large numbers of classes. y-axis -> true label x-axis -> predicted label # Create confusion matrix from sklearn.metr




#tensorflow #tensorflow-certificate

# Get the patterns of a layer in our network

weights, biases = model_35.layers[1].get_weights()
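A self-contained sketch of the same call; the throwaway two-layer model below stands in for model_35, which is not reproduced here:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, input_shape=(3,)),
    tf.keras.layers.Dense(1),
])
weights, biases = model.layers[1].get_weights()
print(weights.shape, biases.shape)  # (4, 1) kernel and (1,) bias of the second layer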


TfC_02_classification-PART_2
tant: This time there is a problem with loss function. In case of categorical_crossentropy the labels have to be one-hot encoded In case of labels as integers use SparseCategoricalCrossentropy # Get the patterns of a layer in our network weights, biases = model_35.layers[1].get_weights()




#tensorflow #tensorflow-certificate
Confusion matrix

Can be hard to use with large numbers of classes.

y-axis -> true label
x-axis -> predicted label

# Create confusion matrix

from sklearn.metrics import confusion_matrix

y_preds = model_8.predict(X_test)

confusion_matrix(y_test, y_preds)
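A runnable sketch with synthetic binary labels; note that model.predict usually returns probabilities, so they are rounded to class labels here before building the matrix (the original excerpt does not show that step):

import numpy as np
from sklearn.metrics import confusion_matrix

y_test = np.array([0, 1, 1, 0, 1])
y_pred_probs = np.array([0.2, 0.8, 0.4, 0.1, 0.9])  # stand-in for model_8.predict(X_test)
y_preds = np.round(y_pred_probs)                    # threshold at 0.5
print(confusion_matrix(y_test, y_preds))
# rows -> true label, columns -> predicted label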


TfC_02_classification-PART_2
leads to less false negatives. Tradeoff between recall and precision. F1-score Combination of precision and recall, usually a good overall metric for classification models. Confusion matrix Can be hard to use with large numbers of classes. y-axis -> true label x-axis -> predicted label # Create confusion matrix from sklearn.metrics import confusion_matrix y_preds = model_8.predict(X_test) confusion_matrix(y_test, y_preds) important: This time there is a problem with loss function. In case of categorical_crossentropy the labels have to be one-hot encoded In case of labels as integers use SparseCategorica




Flashcard 7627475651852

Tags
#has-images #tensorflow #tensorflow-certificate
Question

How can we improve the model (at this particular stage of the process)?

# 1. [...] model: add more layers, increase numbers of hidden neurons, change activation functions

Answer
Creating


Parent (intermediate) annotation

How can we improve the model (at this particular stage of the process)? # 1. Creating model: add more layers, increase numbers of hidden neurons, change activation functions

Original toplevel document

TfC 01 regression
#### How we can improve model # 1. Creating model: add more layers, increase numbers of hidden neurons, change activation functions # 2. Compiling: change optimizer or its parameters (eg. learning rate) # 3. Fitting: more epochs, more data ### How? # from smaller model to larger model Evaluating models Typical workflow: build a model -> fit it -> evaluate -> tweak -> fit -> evaluate -> .... Building model: experiment Evaluation model: visualize What
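An illustrative sketch of the three levers listed in this workflow (the model and hyperparameter values are arbitrary examples, not the notebook's):

import tensorflow as tf

model = tf.keras.Sequential([
    # 1. Creating: more layers, more hidden neurons, different activation functions
    tf.keras.layers.Dense(64, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
# 2. Compiling: change the optimizer or its parameters (e.g. the learning rate)
model.compile(loss="mae", optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))
# 3. Fitting: more epochs and/or more data
# model.fit(X_train, y_train, epochs=200)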







#tensorflow #tensorflow-certificate

Multiclass image classification: pizza, steak, sushi

Input shape = [None, 224, 224, 3] - single image

Input shape = [32, 224, 224, 3] - common batch size of images

32 is a common batch size
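A small sketch showing where those shapes appear; random tensors stand in for real pizza/steak/sushi images:

import tensorflow as tf

batch_of_images = tf.random.uniform((32, 224, 224, 3))  # 32 images of 224x224 pixels, 3 colour channels
print(batch_of_images.shape)                             # (32, 224, 224, 3)

# the model is defined with the per-sample shape; the batch dimension stays None
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(224, 224, 3)),
    tf.keras.layers.Dense(3, activation="softmax"),      # 3 classes: pizza, steak, sushi
])
print(model.input_shape)                                  # (None, 224, 224, 3)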


TfC_02_classification-PART_1
ssification - a sample can be assigned to more than one label from more than 2 label options Multiclass classification - a sample can be assigned to one label but from more than 2 label options Multiclass image classification: pizza, steak, sushi Input shape = [None, 224, 224, 3] - single image Input shape = [32, 224, 224, 3] - common batch size of images 32 is a common batch size How to generate such data? from sklearn.datasets import make_circles # Make 1000 examples n_samples=1000 # Create circles X, y = make_circles(n_samples, noise=0.03, random_state=42) How




Flashcard 7627480108300

Tags
#has-images #tensorflow #tensorflow-certificate
Question

Deep Learning mantras: ;)

Building model: [...]
Evaluation model: visualize

Answer
experiment


Parent (intermediate) annotation

Deep Learning mantras: ;) Building model: experiment Evaluation model: visualize

Original toplevel document

TfC 01 regression
more epochs, more data ### How? # from smaller model to larger model Evaluating models Typical workflow: build a model -> fit it -> evaluate -> tweak -> fit -> evaluate -> .... Building model: experiment Evaluation model: visualize What can visualize? the data model itself the training of a model predictions ## The 3 sets (or actually 2 sets: training and test set) tf.random.set_seed(999) X_train, X_test = tf.spli







Flashcard 7627481943308

Tags
#tensorflow #tensorflow-certificate
Question
Recall

Higher recall leads to less false [...].

Answer
negatives


Parent (intermediate) annotation

Recall Higher recall leads to less false negatives.

Original toplevel document

TfC_02_classification-PART_2
ccuracy tf.keras.metrics.Accuracy() sklearn.metrics.accuracy_score() Not the best for imbalanced classes Precision For imbalanced class problems. Higher precision leads to less false positives. Recall Higher recall leads to less false negatives. Tradeoff between recall and precision. F1-score Combination of precision and recall, usually a good overall metric for classification models. Confusion matrix Can b







Flashcard 7627483778316

Tags
#tensorflow #tensorflow-certificate
Question
Precision

For [...] problems. Higher precision leads to less false positives.

Answer
imbalanced class


Parent (intermediate) annotation

Precision For imbalanced class problems. Higher precision leads to less false positives.

Original toplevel document

TfC_02_classification-PART_2
Classification evaluation methods Accuracy tf.keras.metrics.Accuracy() sklearn.metrics.accuracy_score() Not the best for imbalanced classes Precision For imbalanced class problems. Higher precision leads to less false positives. Recall Higher recall leads to less false negatives. Tradeoff between recall and precision. F1-score Combination of precision and recall, usually a good overall metric for classificatio
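A small sketch of both metrics on toy binary labels, using scikit-learn:

from sklearn.metrics import precision_score, recall_score

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
print(precision_score(y_true, y_pred))  # higher precision -> fewer false positives
print(recall_score(y_true, y_pred))     # higher recall -> fewer false negatives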







#has-images #tensorflow #tensorflow-certificate

def plot_predictions(train_data=X_train, train_labels=y_train, test_data=X_test, test_labels=y_test, predictions=y_pred):
    """ Plots training data, testing_data """
    plt.figure(figsize=(10, 7))
    plt.scatter(train_data, train_labels, c="blue", label='Training data')
    plt.scatter(test_data, test_labels, c="green", label="Testing data")
    plt.scatter(test_data, predictions, c="red", label="Predictions")
    plt.legend();


TfC 01 regression
ing of a model predictions ## The 3 sets (or actually 2 sets: training and test set) tf.random.set_seed(999) X_train, X_test = tf.split(tf.random.shuffle(X, seed=42), num_or_size_splits=[40, 10]) def plot_predictions(train_data = X_train, train_labels = y_train, test_data = X_test, test_labels = y_test, predictions = y_pred): """ Plots training data, testing_data """ plt.figure(figsize=(10, 7)) plt.scatter(train_data, train_labels, c="blue", label='Training data') plt.scatter(test_data, test_labels, c="green", label="Testing data") plt.scatter(test_data, predictions, c="red", label="Predictions") plt.legend(); Common regression evaluation metrics Introduction For regression problems: MAE tf.keras.losses.MAE() tf.metrics.mean_absolute_error() great starter metrics for any reg




#has-images #tensorflow #tensorflow-certificate

X_train, X_test = tf.split(tf.random.shuffle(X, seed=42), num_or_size_splits=[40, 10])

def plot_predictions(train_data=X_train, train_labels=y_train, test_data=X_test, test_labels=y_test, predictions=y_pred):
    """ Plots training data, testing_data """
    plt.figure(figsize=(10, 7))
    plt.scatter(train_data, train_labels, c="blue", label='Training data')
    plt.scatter(test_data, test_labels, c="green", label="Testing data")
    plt.scatter(test_data, predictions, c="red", label="Predictions")
    plt.legend();

Common regression evaluation metrics


TfC 01 regression
iment Evaluation model: visualize What can visualize? the data model itself the training of a model predictions ## The 3 sets (or actually 2 sets: training and test set) tf.random.set_seed(999) X_train, X_test = tf.split(tf.random.shuffle(X, seed=42), num_or_size_splits=[40, 10]) def plot_predictions(train_data = X_train, train_labels = y_train, test_data = X_test, test_labels = y_test, predictions = y_pred): """ Plots training data, testing_data """ plt.figure(figsize=(10, 7)) plt.scatter(train_data, train_labels, c="blue", label='Training data') plt.scatter(test_data, test_labels, c="green", label="Testing data") plt.scatter(test_data, predictions, c="red", label="Predictions") plt.legend(); Common regression evaluation metrics Introduction For regression problems: MAE tf.keras.losses.MAE() tf.metrics.mean_absolute_error() great starter metrics for any regression problem MSE tf.keras.losses
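A short sketch of the regression metrics named above, evaluated on made-up values:

import tensorflow as tf

y_true = tf.constant([3.0, 5.0, 2.5])
y_pred = tf.constant([2.5, 5.0, 4.0])
print(tf.metrics.mean_absolute_error(y_true, y_pred).numpy())  # MAE
print(tf.metrics.mean_squared_error(y_true, y_pred).numpy())   # MSE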