Edited, memorised or added to reading queue on 24-Oct-2024 (Thu)


#deep-learning #keras #lstm #python #sequence

The choice of activation function depends on the problem

Regression: Linear activation function, with the number of neurons matching the number of outputs. This is the default activation function used for neurons in the Dense layer.
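A quick way to see what "linear activation" means: it is the identity function, so a Dense output layer for regression is just an affine map, one neuron per target value. A minimal plain-Python sketch (the weights, bias, and inputs are invented illustration values, not anything taken from Keras):

```python
# A Dense layer with linear activation is just an affine map: y = W @ x + b.
# For regression, use one output neuron per target value.

def dense_linear(x, weights, bias):
    """One Dense layer with the (default) linear activation: identity(W @ x + b)."""
    outputs = []
    for w_row, b in zip(weights, bias):
        z = sum(wi * xi for wi, xi in zip(w_row, x)) + b
        outputs.append(z)  # linear activation: no squashing applied
    return outputs

# Two-output regression head: 2 neurons, 3 input features (illustrative values).
W = [[0.5, -1.0, 2.0],
     [1.0,  0.0, 0.5]]
b = [0.1, -0.2]
print(dense_linear([1.0, 2.0, 3.0], W, b))  # approximately [4.6, 2.3]
```

In Keras this corresponds to Dense(units=n_outputs) with no activation argument, since activation=None means linear.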



Parent (intermediate) annotation

format that predictions will take. For example, below are some common predictive modeling problem types and the standard output-layer structure and activation function for each:
Regression: Linear activation function, or linear, and the number of neurons matching the number of outputs. This is the default activation function used for neurons in the Dense layer.
Binary Classification (2 class): Logistic activation function, or sigmoid, and one neuron in the output layer.
Multiclass Classification (> 2 class): Softmax activation function, or …

Original toplevel document (pdf)





Flashcard 7656685047052

Tags
#recurrent-neural-networks #rnn
Question
The simple behavioral story which sits at the core of BTYD models – while ”alive”, customers make purchases until they drop out – gives these models robust predictive power, especially on the aggregate cohort level, and over a [...] time horizon.
Answer
long


Parent (intermediate) annotation

sits at the core of BTYD models – while "alive", customers make purchases until they drop out – gives these models robust predictive power, especially on the aggregate cohort level, and over a long time horizon.
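The behavioural story can be made concrete with a toy simulation (all rates, counts, and the seed below are invented for illustration): while "alive", each customer buys with some probability per period and drops out for good with some probability, so aggregate cohort sales decay smoothly, which is what long-horizon cohort-level forecasts exploit.

```python
import random

def simulate_cohort(n_customers, n_periods, p_buy=0.3, p_dropout=0.05, seed=42):
    """Toy BTYD-style simulation: while 'alive', a customer buys with prob
    p_buy each period; each period they drop out for good with prob p_dropout."""
    rng = random.Random(seed)
    purchases_per_period = [0] * n_periods
    for _ in range(n_customers):
        for t in range(n_periods):
            if rng.random() < p_dropout:   # customer 'dies' and never returns
                break
            if rng.random() < p_buy:
                purchases_per_period[t] += 1
    return purchases_per_period

sales = simulate_cohort(1000, 20)
# Cohort-level sales shrink over time as customers silently drop out.
```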

Original toplevel document (pdf)








Flashcard 7656897645836

Tags
#has-images #tensorflow #tensorflow-certificate
[unknown IMAGE 7626420784396]
Question

X_train, X_test = tf.[...](tf.random.shuffle(X, seed=42), num_or_size_splits=[40, 10])

def plot_predictions(train_data=X_train, train_labels=y_train,
                     test_data=X_test, test_labels=y_test,
                     predictions=y_pred):
    """ Plots training data, testing_data """
    plt.figure(figsize=(10, 7))
    plt.scatter(train_data, train_labels, c="blue", label='Training data')
    plt.scatter(test_data, test_labels, c="green", label="Testing data")
    plt.scatter(test_data, predictions, c="red", label="Predictions")
    plt.legend()

Common regression evaluation metrics

Answer
split


Parent (intermediate) annotation

X_train, X_test = tf.split(tf.random.shuffle(X, seed=42), num_or_size_splits=[40, 10]) def plot_predictions(train_data = X_train, train_labels = y_train, test_data = X_test, test_labels = y_test, predictions = y_

Original toplevel document

TfC 01 regression
iment Evaluation model: visualize. What can we visualize? the data, the model itself, the training of a model, predictions
## The 3 sets (or actually 2 sets: training and test set)
tf.random.set_seed(999)
X_train, X_test = tf.split(tf.random.shuffle(X, seed=42), num_or_size_splits=[40, 10])

def plot_predictions(train_data=X_train, train_labels=y_train,
                     test_data=X_test, test_labels=y_test,
                     predictions=y_pred):
    """ Plots training data, testing_data """
    plt.figure(figsize=(10, 7))
    plt.scatter(train_data, train_labels, c="blue", label='Training data')
    plt.scatter(test_data, test_labels, c="green", label="Testing data")
    plt.scatter(test_data, predictions, c="red", label="Predictions")
    plt.legend()

Common regression evaluation metrics
Introduction. For regression problems:
MAE: tf.keras.losses.MAE(), tf.metrics.mean_absolute_error(), great starter metrics for any regression problem
MSE: tf.keras.losses…
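The snippet above stops mid-list at MAE and MSE; both metrics are simple averages, sketched here in plain Python (tf.metrics.mean_absolute_error and the MSE loss compute essentially the same quantities on tensors; the sample values are invented):

```python
def mae(y_true, y_pred):
    """Mean absolute error: average |error|, in the same units as the target."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error: average error^2, penalizing large errors more."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 5.0, 7.0]
y_pred = [2.5, 5.0, 8.0]
print(mae(y_true, y_pred))  # 0.5
print(mse(y_true, y_pred))  # approximately 0.4167
```

MAE is easy to interpret; MSE's squaring makes a few large errors dominate the score, which is why the two can rank models differently.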







Flashcard 7662704659724

Tags
#deep-learning #keras #lstm #python #sequence
Question

The choice of activation function depends on the problem

Regression: [...] activation function and the number of neurons matching the number of outputs. This is the default activation function used for neurons in the Dense layer.

Answer
Linear


Parent (intermediate) annotation

The choice of activation function depends on the problem. Regression: Linear activation function, with the number of neurons matching the number of outputs. This is the default activation function used for neurons in the Dense layer.

Original toplevel document (pdf)








Flashcard 7662706494732

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
As the number of hyperparameters and their ranges grow, the search space becomes exponentially complex, and tuning the models manually or by grid search becomes impractical. [...] optimization for hyperparameter tuning provides hyperparameters (step 1) iteratively, based on previous performance
Answer
Bayesian


Parent (intermediate) annotation

As the number of hyperparameters and their ranges grow, the search space becomes exponentially complex, and tuning the models manually or by grid search becomes impractical. Bayesian optimization for hyperparameter tuning provides hyperparameters (step 1) iteratively, based on previous performance
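The iterate described here, "propose the next hyperparameters based on previous performance", can be sketched with a deliberately crude acquisition rule. This is not a real Gaussian-process surrogate: the toy objective, the candidate grid, and the nearest-neighbour "optimism" score are all invented for illustration.

```python
import math

def expensive_objective(lr):
    # Stand-in for "train a model with learning rate lr, return validation
    # loss"; this toy objective is minimized at lr = 1e-2.
    return (math.log10(lr) + 2.0) ** 2

def suggest_next(history, candidates, kappa=1.0):
    """Pick the candidate with the lowest optimistic estimate: the loss at the
    nearest evaluated point minus an exploration bonus growing with distance
    (a crude stand-in for a surrogate model plus acquisition function)."""
    best, best_score = None, float("inf")
    for c in candidates:
        dist, loss = min((abs(math.log10(c) - math.log10(x)), y)
                         for x, y in history)
        score = loss - kappa * dist          # optimism under uncertainty
        if score < best_score:
            best, best_score = c, score
    return best

candidates = [10 ** (-5 + 0.1 * i) for i in range(41)]   # 1e-5 ... 1e-1
history = [(1e-5, expensive_objective(1e-5)),
           (1e-1, expensive_objective(1e-1))]
for _ in range(10):                       # step 1, repeated with feedback
    nxt = suggest_next(history, candidates)
    history.append((nxt, expensive_objective(nxt)))

best_lr = min(history, key=lambda pair: pair[1])[0]
```

After a handful of guided evaluations the loop homes in near lr = 1e-2, far fewer trials than exhaustively evaluating the whole grid.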

Original toplevel document (pdf)








Flashcard 7662708854028

Tags
#causality #statistics
Question
Association still flows in exactly the same way in Bayesian networks as it does in [...] graphs, though. In both, association flows along chains and forks, unless a node is conditioned on. And in both, a collider blocks the flow of association, unless it is conditioned on.
Answer
causal


Parent (intermediate) annotation

Association still flows in exactly the same way in Bayesian networks as it does in causal graphs, though. In both, association flows along chains and forks, unless a node is conditioned on. And in both, a collider blocks the flow of association, unless it is conditioned on.
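The collider case is easy to check numerically. Below, A → C ← B: marginally, A and B are uncorrelated, but selecting only samples with large C (a crude way of conditioning on the collider) induces a strong negative association. Sample sizes, the seed, and the cutoff are arbitrary illustration choices.

```python
import math
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

rng = random.Random(0)
A = [rng.gauss(0, 1) for _ in range(20000)]
B = [rng.gauss(0, 1) for _ in range(20000)]
C = [a + b for a, b in zip(A, B)]            # collider: A -> C <- B

marginal = corr(A, B)                        # near 0: association blocked
sel = [(a, b) for a, b, c in zip(A, B, C) if c > 1.0]  # condition on C
conditional = corr([a for a, _ in sel],      # strongly negative:
                   [b for _, b in sel])      # conditioning opened the collider
```

Among samples with a large sum, a high A "explains away" B, which is exactly the explaining-away effect the text describes.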

Original toplevel document (pdf)








Flashcard 7662710951180

Tags
#causality #statistics
Question
[...] - we can make this assumption realistic by running randomized experiments, which force the treatment to not be caused by anything but a coin toss, so then we have the causal structure shown in Figure 2.2.
Answer
ignorability


Parent (intermediate) annotation

ignorability - we can make this assumption realistic by running randomized experiments, which force the treatment to not be caused by anything but a coin toss, so then we have the causal structure s
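A small simulation shows why randomization makes the ignorability assumption realistic (the outcome model, effect size, and probabilities below are all invented): when treatment depends on a confounder, the naive difference in means is badly biased; when treatment is a coin toss, it recovers the true effect.

```python
import random

rng = random.Random(1)

def naive_ate(treatment_rule, n=20000, true_effect=1.0):
    """Difference in mean outcomes between treated and untreated groups."""
    treated, untreated = [], []
    for _ in range(n):
        health = rng.gauss(0, 1)                   # confounder
        t = treatment_rule(health)
        y = true_effect * t + 2.0 * health + rng.gauss(0, 0.1)
        (treated if t else untreated).append(y)
    return sum(treated) / len(treated) - sum(untreated) / len(untreated)

# Confounded: healthier people are more likely to get treated.
biased = naive_ate(lambda h: rng.random() < (0.8 if h > 0 else 0.2))
# Randomized: treatment is a coin toss, independent of health.
unbiased = naive_ate(lambda h: rng.random() < 0.5)
```

The confounded estimate overshoots the true effect of 1.0 by a wide margin, while the randomized one lands close to it, because the coin toss cuts the arrow from the confounder into treatment.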

Original toplevel document (pdf)








Flashcard 7662712786188

Tags
#English #vocabulary
Question

[...]

noun [ C ]

UK /ˈækrəʊnɪm/ US

a word made from the first letters of other words

Answer
acronym


Parent (intermediate) annotation

acronym, noun [C], UK /ˈækrəʊnɪm/ US: a word made from the first letters of other words

Original toplevel document

acronym, noun [C], UK /ˈækrəʊnɪm/ US: a word made from the first letters of other words (Polish: akronim, skrótowiec). AIDS is the acronym for 'acquired immune deficiency syndrome'.







Flashcard 7662714621196

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
Since customer transactions occur sequentially, they can be modeled as a [...] task using an RNN as well, where all firm actions and customer responses are represented by elements in a vector.
Answer
sequence prediction


Parent (intermediate) annotation

Since customer transactions occur sequentially, they can be modeled as a sequence prediction task using an RNN as well, where all firm actions and customer responses are represented by elements in a vector.
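A minimal sketch of that representation (the event encoding, weight values, and sizes are invented for illustration): each timestep's firm actions and customer responses form a vector, and a vanilla RNN folds the sequence into a hidden state via h_t = tanh(Wh·h_{t-1} + Wx·x_t).

```python
import math

def rnn_forward(xs, Wx, Wh, h0):
    """Vanilla RNN: h_t = tanh(Wh @ h_{t-1} + Wx @ x_t); returns final state."""
    h = h0[:]
    for x in xs:
        h_new = []
        for i in range(len(h)):
            z = sum(Wh[i][j] * h[j] for j in range(len(h)))
            z += sum(Wx[i][k] * x[k] for k in range(len(x)))
            h_new.append(math.tanh(z))
        h = h_new
    return h

# Each timestep: [made_purchase, got_discount_email, returned_item]
transactions = [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
Wx = [[0.5, -0.3, 0.2], [0.1, 0.4, -0.5]]   # 2 hidden units, 3 input features
Wh = [[0.6, 0.1], [-0.2, 0.7]]
h = rnn_forward(transactions, Wx, Wh, [0.0, 0.0])
# h now summarizes the whole interaction history in a fixed-size vector.
```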

Original toplevel document (pdf)








Flashcard 7662718291212

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #retail #simulation #synthetic-data
Question
In this paper, we present our study of consumer purchase behaviour, wherein we establish a data-driven framework to predict whether a consumer is going to purchase an item within a certain time frame using e-commerce retail data. To model this relationship, we create a sequential [...] data for all relevant consumer-item combinations. We then build generalized non-linear models by generating features at the intersection of consumer, item, and time.
Answer
time-series


Parent (intermediate) annotation

h a data-driven framework to predict whether a consumer is going to purchase an item within a certain time frame using e-commerce retail data. To model this relationship, we create a sequential time-series data for all relevant consumer-item combinations. We then build generalized non-linear models by generating features at the intersection of consumer, item, and time.
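A minimal, hypothetical version of that data construction (the helper name and example transactions are invented): turn raw (consumer, item, period) transactions into a binary purchase-indicator time series for every consumer-item combination, which sequence models can then consume.

```python
from collections import defaultdict

def build_series(transactions, n_periods):
    """For every consumer-item pair seen in the data, build a binary
    purchase-indicator time series over n_periods: a hypothetical, minimal
    version of 'sequential time-series data for consumer-item combinations'."""
    series = defaultdict(lambda: [0] * n_periods)
    for consumer, item, period in transactions:
        series[(consumer, item)][period] = 1
    return dict(series)

tx = [("c1", "socks", 0), ("c1", "socks", 3), ("c2", "mug", 1)]
s = build_series(tx, 5)
# s[("c1", "socks")] == [1, 0, 0, 1, 0]
```

Features at the consumer-item-time intersection (recency, rolling counts, and so on) can then be derived from each series.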

Original toplevel document (pdf)








#feature-engineering #lstm #recurrent-neural-networks #rnn
the RNN has distributed hidden states, which means that each input generally results in changes across all the hidden units of the RNN (Ming et al., 2017).


Parent (intermediate) annotation

screte hidden states (where N is typically small) and, therefore, has only log2(N) bits of information available to capture the sequence history (Brown & Hinton, 2001). On the other hand, the RNN has distributed hidden states, which means that each input generally results in changes across all the hidden units of the RNN (Ming et al., 2017). RNNs combine a large number of distributed hidden states with nonlinear dynamics to update these hidden states, thereby allowing it to have a more substantial representational capacity
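The "distributed" part is easy to see in code: with a dense weight matrix, a single input perturbs every hidden unit, so the state is not one of N discrete symbols but a point in a continuous vector space. A toy check (all weights and sizes below are invented):

```python
import math

def rnn_step(x, h, Wx, Wh):
    """One vanilla-RNN update: every hidden unit mixes the whole previous
    state with the new input, so each input perturbs all units."""
    return [math.tanh(sum(Wh[i][j] * h[j] for j in range(len(h))) + Wx[i] * x)
            for i in range(len(h))]

Wh = [[0.5, 0.1, -0.2], [0.3, 0.4, 0.1], [-0.1, 0.2, 0.6]]
Wx = [0.7, -0.5, 0.3]        # dense input weights: the input reaches every unit
h0 = [0.1, -0.2, 0.3]
h1 = rnn_step(1.0, h0, Wx, Wh)
changed = [abs(a - b) > 1e-6 for a, b in zip(h0, h1)]  # every unit moved
```

Contrast this with an HMM, where an observation merely selects among N discrete states rather than shifting a whole real-valued vector.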

Original toplevel document (pdf)





Flashcard 7662721961228

Tags
#tensorflow #tensorflow-certificate
Question

import tensorflow as tf

# stop training after reaching accuracy of 0.99
class MyCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, [...], logs={}):
    if logs.get('accuracy')>=0.99:
      print('\nAccuracy 0.99 achieved')
      self.model.stop_training = True

Answer
epoch


Tensorflow - callbacks
import tensorflow as tf

# stop training after reaching accuracy of 0.99
class MyCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('accuracy') >= 0.99:
            print('\nAccuracy 0.99 achieved')
            self.model.stop_training = True
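How Keras drives this callback can be mimicked with a stand-in training loop. The MiniModel class and its improving-accuracy schedule below are invented for illustration; only the on_epoch_end hook and the stop_training flag mirror the real Keras API.

```python
class MyCallback:
    # Same shape as tf.keras.callbacks.Callback: Keras invokes on_epoch_end
    # after every epoch with that epoch's logged metrics.
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('accuracy', 0) >= 0.99:
            print('\nAccuracy 0.99 achieved')
            self.model.stop_training = True

class MiniModel:
    """Stand-in for a Keras model's fit loop (invented for illustration)."""
    def __init__(self):
        self.stop_training = False

    def fit(self, epochs, callbacks):
        for cb in callbacks:
            cb.model = self                      # Keras wires this up too
        accuracies = []
        for epoch in range(epochs):
            acc = min(0.90 + 0.02 * epoch, 1.0)  # pretend training improves
            accuracies.append(acc)
            for cb in callbacks:
                cb.on_epoch_end(epoch, logs={'accuracy': acc})
            if self.stop_training:               # honoured between epochs
                break
        return accuracies

history = MiniModel().fit(epochs=50, callbacks=[MyCallback()])
# Training halts once accuracy first reaches 0.99, well before 50 epochs.
```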







#data-science #infrastructure
incidental complexity = complexity that is not necessitated by the problem itself but is an unwanted artifact of a chosen approach


Parent (intermediate) annotation

we must avoid introducing incidental complexity, or complexity that is not necessitated by the problem itself but is an unwanted artifact of a chosen approach. Incidental complexity is a huge problem for real-world data science because we have to deal with such a high level of inherent complexity that distinguishing between real problems and

Original toplevel document (pdf)
