# on 22-Apr-2024 (Mon)

#### Annotation 7103894916364

 #feature-engineering #lstm #recurrent-neural-networks #rnn All four customers in the figure have the same seniority (date of first purchase), recency (date of last purchase), and frequency (number of purchases). However, each of them has a visibly different transaction pattern. A response model relying exclusively on seniority, recency, and frequency would not be able to distinguish between customers who have similar features but different behavioral sequences.

#### pdf

cannot see any pdfs
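The point can be made concrete with a small sketch (hypothetical purchase dates, plain Python): two customers share identical seniority, recency, and frequency, yet their raw purchase sequences differ.

```python
from datetime import date

# Two hypothetical customers with identical summary features
# but very different purchase timing within the same window.
steady = [date(2023, 1, 1), date(2023, 4, 1), date(2023, 7, 1), date(2023, 10, 1)]
burst = [date(2023, 1, 1), date(2023, 9, 25), date(2023, 9, 28), date(2023, 10, 1)]

def rfm(purchases):
    # Seniority (first purchase), recency (last purchase), frequency (count).
    return (min(purchases), max(purchases), len(purchases))

# The summary features cannot tell the two customers apart ...
assert rfm(steady) == rfm(burst)
# ... but the raw sequences, which an RNN/LSTM would consume, clearly differ.
assert steady != burst
```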

#### Flashcard 7624068041996

Tags
#tensorflow #tensorflow-certificate
Question

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
import numpy as np

model = Sequential([Dense(1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([1, 5, 12, -1, 10], dtype=float)
ys = np.array([5, 13, 27, 1, 23], dtype=float)
model.fit(xs, ys, epochs=500)
model.[...](x=[15])
```

predict

status measured difficulty not learned 37% [default] 0

Tensorflow basics - typical flow of model building

#### Flashcard 7624070139148

Tags
#tensorflow #tensorflow-certificate
Question

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
import numpy as np

model = Sequential([Dense(1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([1, 5, 12, -1, 10], dtype=float)
ys = np.array([5, 13, 27, 1, 23], dtype=float)
model.fit(xs, ys, [...]=500)
model.predict(x=[15])
```

epochs

status measured difficulty not learned 37% [default] 0

Tensorflow basics - typical flow of model building

#### Flashcard 7624091372812

Tags
#tensorflow #tensorflow-certificate
Question

```python
import tensorflow as tf

# Stop training after reaching accuracy of 0.99
class MyCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('accuracy') >= 0.99:
            print('\nAccuracy 0.99 achieved')
            [...].model.stop_training = True
```


self

status measured difficulty not learned 37% [default] 0

Tensorflow - callbacks
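A sketch of how such a callback would be wired into training via the `callbacks` argument of `fit()`. The model and data here are illustrative, not part of the card; `logs` is handled defensively with a `None` default:

```python
import numpy as np
import tensorflow as tf

class MyCallback(tf.keras.callbacks.Callback):
    """Stops training once training accuracy reaches 0.99."""
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if logs.get('accuracy', 0.0) >= 0.99:
            print('\nAccuracy 0.99 achieved')
            self.model.stop_training = True

# Illustrative binary-classification model and toy data.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(1, activation='sigmoid', input_shape=[1])])
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])
xs = np.array([-2., -1., 1., 2.])
ys = np.array([0., 0., 1., 1.])

# The callback is passed to fit(); training stops early once the
# condition is met, otherwise it runs for all epochs.
model.fit(xs, ys, epochs=100, callbacks=[MyCallback()], verbose=0)
```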

#### Flashcard 7625074937100

Tags
#deep-learning #keras #lstm #python #sequence
Question
By default, the samples within an epoch are shuffled. This is a good practice when working with [...] neural networks. If you are trying to preserve state across samples, then the order of samples in the training dataset may be important and must be preserved. This can be done by setting the shuffle argument in the fit() function to False.
Multilayer Perceptron

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
By default, the samples within an epoch are shuffled. This is a good practice when working with Multilayer Perceptron neural networks. If you are trying to preserve state across samples, then the order of samples in the training dataset may be important and must be preserved. This can be done by setting the shuffle argument in the fit() function to False.

#### Original toplevel document (pdf)

cannot see any pdfs
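A minimal sketch of where the argument goes (toy data and an illustrative model, not taken from the card):

```python
import numpy as np
import tensorflow as tf

# Toy ordered data: y = 2x + 1.
xs = np.arange(10, dtype=float)
ys = 2 * xs + 1

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

# shuffle=False keeps the within-epoch sample order fixed, which matters
# when state should carry across samples (e.g. stateful LSTMs).
history = model.fit(xs, ys, epochs=1, shuffle=False, verbose=0)
```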

#### Flashcard 7625078082828

Tags
#bayes #programming #r #statistics
Question
The posterior distribution also shows the uncertainty in that estimated slope, because the distribution shows the relative [...] of values across the continuum.
credibility

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The posterior distribution also shows the uncertainty in that estimated slope, because the distribution shows the relative credibility of values across the continuum.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7625093025036

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
All four customers in the figure have the same seniority (date of first purchase), recency (date of last purchase), and frequency (number of purchases). However, each of them has a visibly different [...]. A response model relying exclusively on seniority, recency, and frequency would not be able to distinguish between customers who have similar features but different behavioral sequences.
transaction pattern

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
All four customers in the figure have the same seniority (date of first purchase), recency (date of last purchase), and frequency (number of purchases). However, each of them has a visibly different transaction pattern. A response model relying exclusively on seniority, recency, and frequency would not be able to distinguish between customers who have similar features but different behavioral sequences.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Annotation 7625095122188

 #feature-engineering #lstm #recurrent-neural-networks #rnn Feature engineering has been used broadly to refer to multiple aspects of feature creation, extraction, and transformation

#### Parent (intermediate) annotation

Open it
In machine learning, a feature refers to a variable that describes some aspect of individual data objects (Dong & Liu, 2018). Feature engineering has been used broadly to refer to multiple aspects of feature creation, extraction, and transformation. Essentially, it refers to the process of using domain knowledge to create useful features that can be fed as predictors into a model.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Annotation 7625096957196

 #deep-learning #keras #lstm #python #sequence A time window based MLP outperformed the LSTM pure-[autoregression] approach on certain time series prediction benchmarks solvable by looking at a few recent inputs only.

#### Parent (intermediate) annotation

Open it
A time window based MLP outperformed the LSTM pure-[autoregression] approach on certain time series prediction benchmarks solvable by looking at a few recent inputs only. Thus LSTM’s special strength, namely, to learn to remember single events for very long, unknown time periods, was not necessary

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7625099054348

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question
While preprocessing is an important tool to improve model performance, it artificially increases the [...] of the input vector. Also, the resulting binary features can be strongly correlated. Both outcomes make it difficult to tell which action patterns in the underlying consumer histories have a strong impact on the prediction outcome
dimensionality

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
While preprocessing is an important tool to improve model performance, it artificially increases the dimensionality of the input vector. Also, the resulting binary features can be strongly correlated. Both outcomes make it difficult to tell which action patterns in the underlying consumer histories have a strong impact on the prediction outcome.

#### Original toplevel document (pdf)

cannot see any pdfs
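A small numpy sketch (with a hypothetical action vocabulary) of how this kind of preprocessing inflates dimensionality and produces correlated binary columns:

```python
import numpy as np

# Hypothetical consumer actions and a small categorical vocabulary.
actions = ['view', 'cart', 'view', 'buy']
vocab = ['view', 'cart', 'buy']

# One-hot preprocessing: one categorical column becomes len(vocab) binary columns.
one_hot = np.array([[1 if a == v else 0 for v in vocab] for a in actions])
print(one_hot.shape)  # (4, 3): dimensionality grew from 1 column to 3

# The binary columns are correlated by construction: exactly one is set
# per row, so knowing two of them fully determines the third.
assert (one_hot.sum(axis=1) == 1).all()
```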

#### Flashcard 7625101413644

Tags
#R #debugger #shiny
Question
Unlike breakpoints, [...]() works everywhere, so it’s suitable for use in any code invoked by your Shiny app.
browser

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Unlike breakpoints, browser() works everywhere, so it’s suitable for use in any code invoked by your Shiny app.

#### Original toplevel document

Debuging shiny applications
The browser() statement is another useful debugging tool. It acts like a breakpoint: when evaluated, it halts execution and enters the debugger. You can add it anywhere an R expression is valid. Unlike breakpoints, browser() works everywhere, so it's suitable for use in any code invoked by your Shiny app. You can also invoke browser() conditionally to create conditional breakpoints; for instance: if (input$bins > 50) browser() The downside of browser() is that you need to re-run your

#### Flashcard 7625108229388

Tags
#tensorflow #tensorflow-certificate
Question
```python
changeable_tensor = tf.Variable([10, 7])

changeable_tensor[0] = 77
# Output:
# TypeError: 'ResourceVariable' object does not support item assignment

changeable_tensor[0].assign(77)
# Output:
# <tf.Variable 'UnreadVariable' shape=([...]) dtype=int32, numpy=array([77,  7], dtype=int32)>
```

2,

status measured difficulty not learned 37% [default] 0

Tensorflow basics

#### Annotation 7625109802252

 #tensorflow #tensorflow-certificate tf.ones([10, 7]) 

#### Flashcard 7625111899404

Tags
#tensorflow #tensorflow-certificate
Question
```python
tf.[...]([10, 7])
```

```
<tf.Tensor: shape=(10, 7), dtype=float32, numpy=
array([[1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.]], dtype=float32)>
```

ones

status measured difficulty not learned 37% [default] 0


#### Annotation 7625113472268

 #tensorflow #tensorflow-certificate

```python
# Create 4-rank tensor (the same as 4 dimensions)
A = tf.constant(np.arange(0, 120), shape=(2, 3, 4, 5))
A
```

#### Flashcard 7625115307276

Tags
#tensorflow #tensorflow-certificate
Question

```python
# Create 4-rank tensor (the same as 4 [...])
A = tf.constant(np.arange(0, 120), shape=(2, 3, 4, 5))
A
```

dimensions

status measured difficulty not learned 37% [default] 0


#### Flashcard 7625116880140

Tags
#tensorflow #tensorflow-certificate
Question

```python
# Create 4-[...] tensor (the same as 4 dimensions)
A = tf.constant(np.arange(0, 120), shape=(2, 3, 4, 5))
A
```