Edited, memorised or added to reading queue on 03-Apr-2025 (Thu)


Flashcard 7693374197004

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
the RNN has [...] hidden states, which means that each input generally results in changes across all the hidden units of the RNN (Ming et al., 2017).
Answer
distributed


Parent (intermediate) annotation

the RNN has distributed hidden states, which means that each input generally results in changes across all the hidden units of the RNN (Ming et al., 2017).
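To make "distributed" concrete, here is a minimal sketch of a single vanilla-RNN step (NumPy, with hypothetical sizes; not the model from the cited paper): because the input is multiplied by a dense weight matrix, every hidden unit is updated by every input.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5                         # hypothetical sizes
W_xh = rng.normal(size=(n_hidden, n_in))      # input-to-hidden weights
W_hh = rng.normal(size=(n_hidden, n_hidden))  # hidden-to-hidden weights
h = np.zeros(n_hidden)                        # initial hidden state
x = rng.normal(size=n_in)                     # one input vector

h_new = np.tanh(W_xh @ x + W_hh @ h)          # vanilla RNN state update
print(h_new)                                  # all 5 hidden units change: the state is distributed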


Flashcard 7693376294156

Tags
#deep-learning #keras #lstm #python #sequence
Question

The choice of activation function depends on the problem:

Regression: Linear activation function and the number of neurons matching the number of [...]. This is the default activation function used for neurons in the Dense layer.

Answer
outputs


Parent (intermediate) annotation

The choice of activation function depends on the problem. Regression: linear activation function and the number of neurons matching the number of outputs. This is the default activation function used for neurons in the Dense layer.
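As a hedged illustration (layer sizes and the input shape are made up for this sketch), a minimal Keras regression model: the final Dense layer has one neuron per output and relies on Dense's default linear activation.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),             # hypothetical number of input features
    tf.keras.layers.Dense(16, activation="relu"),  # hypothetical hidden layer
    tf.keras.layers.Dense(1)                       # one neuron per output; default activation is linear
])
model.compile(optimizer="adam", loss="mse")        # mean squared error suits regression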


Flashcard 7693378129164

Tags
#tensorflow #tensorflow-certificate
Question
Normalization

# Start from scratch
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

# Borrow a few classes from scikit-learn
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split

# Create column transformer
ct = make_column_transformer(
    ([...], ['age', 'bmi', 'children']),  # turn all values in these columns between 0 and 1
    (OneHotEncoder(handle_unknown='ignore', dtype="int32"), ['sex', 'smoker', 'region']))

# Create X and y
X = insurance.drop('charges', axis=1)
y = insurance['charges']

# Split datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit the column transformer on training data and apply to both datasets (train and test)
ct.fit(X_train)

# Transform data
X_train_normalize = ct.transform(X_train)
X_test_normalize = ct.transform(X_test)


Answer
MinMaxScaler()


Parent (intermediate) annotation

…make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split

# Create column transformer
ct = make_column_transformer(
    (MinMaxScaler(), ['age', 'bmi', 'children']),  # turn all values in these columns between 0 and 1
    (OneHotEncoder(handle_unknown='ignore', dtype="int32"), ['sex', 'smoker', 'region']))

# Create X and y
X…

Original toplevel document

TfC_01_FINAL_EXAMPLE.ipynb
…the right shape. Scale features (normalize or standardize). Neural networks tend to prefer normalization. Normalization: adjusting values measured on different scales to a notionally common scale.

Normalization

# Start from scratch
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

# Borrow a few classes from scikit-learn
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split

# Create column transformer
ct = make_column_transformer(
    (MinMaxScaler(), ['age', 'bmi', 'children']),  # turn all values in these columns between 0 and 1
    (OneHotEncoder(handle_unknown='ignore', dtype="int32"), ['sex', 'smoker', 'region']))

# Create X and y
X = insurance.drop('charges', axis=1)
y = insurance['charges']

# Split datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit the column transformer on training data and apply to both datasets (train and test)
ct.fit(X_train)

# Transform data
X_train_normalize = ct.transform(X_train)
X_test_normalize = ct.transform(X_test)
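For intuition, a standalone toy sketch (made-up values, not from the insurance dataset) of what MinMaxScaler does to a single numeric column; as in the code above, fitting on training data only avoids leaking test-set statistics.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

ages = np.array([[19.0], [33.0], [62.0]])  # toy 'age' column
scaler = MinMaxScaler().fit(ages)          # learns the column's min and max
print(scaler.transform(ages).ravel())      # [0.        0.3255814 1.       ] -- every value rescaled to [0, 1]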