Flashcard 7761968106764

Tags
#conv2D #convolution #tensorflow #tensorflow-certificate
Question

Step 1 is to gather the data. You'll notice a change here: the training data needs to be reshaped. That's because the first convolution expects a single tensor containing everything, so instead of 60,000 28x28x1 items in a list, we have a single 4D tensor of shape 60,000x28x28x1, and the same for the test images. If you don't do this, you'll get an error when training, because the convolution layers do not recognize the shape.

import tensorflow as tf

# Load Fashion-MNIST and reshape into a single 4D tensor (samples, height, width, channels)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images = training_images.reshape(60000, 28, 28, 1)
training_images = training_images / 255.0
# The same reshape applies to the 10,000 test images, as noted above
test_images = test_images.reshape(10000, 28, 28, 1)
test_images = test_images / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=([...])),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')
])
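
For completeness, a minimal sketch of how training might proceed once the cloze above is filled in. The optimizer choice and epoch count here are illustrative assumptions, not part of the original card:

# Assumed settings: 'adam' and 5 epochs are illustrative, not from the card.
# sparse_categorical_crossentropy matches the integer labels load_data() returns.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)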

Answer
28, 28, 1

#recurrent-neural-networks #rnn
In non-contractual business settings (i.e., when the time at which a customer becomes inactive is unobserved by the firm), we show how the proposed deep learning model improves on established models in terms of both individual-level accuracy and overall cohort-level bias. It also helps managers capture seasonal trends and other forms of purchase dynamics that are important to detect in a timely manner for proactive customer-base management.




Flashcard 7761972038924

Tags
#ML-engineering #ML_in_Action #learning #machine #software-engineering
Question
Generally, these failures happen because the DS team is either inexperienced with solving a problem of the scale required (a technological or process-driven failure) or hasn't fully understood the desired outcome from the business (a [...] failure).
Answer
communication-driven








Flashcard 7761973873932

Tags
#deep-learning #keras #lstm #python #sequence
Question

The choice of activation function is most important for the output layer as it will define the format that predictions will take...

Multiclass Classification (> 2 classes): [...] activation function, with one output neuron per class value, assuming a one-hot encoded output pattern.

Answer
Softmax
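
To make this concrete, a minimal sketch of such an output layer in Keras; the input shape, hidden width, and 10-class problem size are illustrative assumptions, not from the original text:

import tensorflow as tf

# Hypothetical 10-class problem; input shape and hidden width are assumptions.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    # One output neuron per class; softmax yields a probability distribution.
    tf.keras.layers.Dense(10, activation='softmax')
])
# categorical_crossentropy pairs with one-hot encoded target vectors.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])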








#deep-learning #keras #lstm #python #sequence

3 common examples for managing state:

  • (1) A prediction is made at the end of each sequence and sequences are independent. State should be reset after each sequence by setting the batch size to 1.
  • (2) A long sequence was split into multiple subsequences (many samples each with many time steps). State should be reset after the network has been exposed to the entire sequence by making the LSTM stateful, turning off the shuffling of subsequences, and resetting the state after each epoch (see the sketch after this list).
  • (3) A very long sequence was split into multiple subsequences (many samples each with many time steps). Training efficiency is more important than the influence of long-term internal state, so a batch size of 128 samples is used, after which network weights are updated and state reset.
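
A minimal sketch of example (2) in tf.keras 2.x style (the API this text uses); the data shapes, layer width, and epoch count are assumptions for illustration, not from the original text:

import numpy as np
import tensorflow as tf

# Assumed toy data: 100 subsequences of 10 time steps, 1 feature each (illustrative only).
X = np.random.rand(100, 10, 1)
y = np.random.rand(100, 1)

# stateful=True carries the LSTM's internal state across batches,
# so the fixed batch size must be declared in batch_input_shape.
model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(32, stateful=True, batch_input_shape=(10, 10, 1)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')

# Train one epoch at a time with shuffling off, then reset state manually,
# so the network sees the entire sequence before its memory is cleared.
for epoch in range(5):
    model.fit(X, y, batch_size=10, epochs=1, shuffle=False, verbose=0)
    model.reset_states()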