Edited, memorised or added to reading queue on 22-Apr-2024 (Mon)


#feature-engineering #lstm #recurrent-neural-networks #rnn
All four customers in the figure have the same seniority (date of first purchase), recency (date of last purchase), and frequency (number of purchases). However, each of them has a visibly different transaction pattern. A response model relying exclusively on seniority, recency, and frequency would not be able to distinguish between customers who have similar features but different behavioral sequences.
status: not read

pdf





Flashcard 7624068041996

Tags
#tensorflow #tensorflow-certificate
Question

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
import numpy as np

model = Sequential([Dense(1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([1,5,12,-1,10], dtype=float)
ys = np.array([5,13,27,1,23], dtype=float)
model.fit(xs, ys, epochs=500)
model.[...](x=[15])

Answer
predict

status: not learned | measured difficulty: 37% [default]

Tensorflow basics - typical flow of model building
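The card above walks through the typical Keras flow: build, compile, fit, predict. The data follow y = 2x + 3 exactly, so what the single Dense unit learns can be checked with a plain-NumPy sketch of the same model trained by full-batch gradient descent on mean squared error. The loop below is an illustrative stand-in, not Keras itself; the learning rate 0.01 matches the Keras SGD default, and with only 5 samples Keras's default batch size makes its updates full-batch too.

```python
import numpy as np

# Training data from the flashcard: y = 2x + 3 exactly
xs = np.array([1, 5, 12, -1, 10], dtype=float)
ys = np.array([5, 13, 27, 1, 23], dtype=float)

# One Dense(1) unit: prediction = w*x + b
w, b = 0.0, 0.0
lr = 0.01  # Keras SGD default learning rate

for _ in range(500):  # epochs=500, all 5 samples per step
    err = w * xs + b - ys
    # Gradients of mean squared error with respect to w and b
    w -= lr * 2 * np.mean(err * xs)
    b -= lr * 2 * np.mean(err)

print(w, b)        # close to 2 and 3
print(w * 15 + b)  # prediction at x=15, close to 33
```

After 500 epochs the parameters sit near the true slope and intercept, which is why the Keras snippet's `model.predict(x=[15])` comes out close to 33.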







Flashcard 7624070139148

Tags
#tensorflow #tensorflow-certificate
Question

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
import numpy as np

model = Sequential([Dense(1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([1,5,12,-1,10], dtype=float)
ys = np.array([5,13,27,1,23], dtype=float)
model.fit(xs, ys, [...]=500)
model.predict(x=[15])

Answer
epochs


Tensorflow basics - typical flow of model building







Flashcard 7624091372812

Tags
#tensorflow #tensorflow-certificate
Question

import tensorflow as tf

# stop training after reaching accuracy of 0.99
class MyCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs={}):
    if logs.get('accuracy')>=0.99:
      print('\nAccuracy 0.99 achieved')
      [...].model.stop_training = True

Answer
self


Tensorflow - callbacks
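The Callback mechanism in the card above can be illustrated without TensorFlow: a minimal training loop hands the epoch's metrics to a callback object, which may raise a stop flag — the same contract Keras honours via `self.model.stop_training`. The `Model` stub, the toy `fit` loop, and the fake accuracy numbers below are all hypothetical scaffolding for illustration only.

```python
class Model:
    """Hypothetical stand-in for a Keras model: just carries the stop flag."""
    def __init__(self):
        self.stop_training = False

class AccuracyStop:
    """Mimics the tf.keras.callbacks.Callback on_epoch_end hook."""
    def __init__(self, model, threshold=0.99):
        self.model = model
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        if logs.get('accuracy', 0.0) >= self.threshold:
            print(f'\nAccuracy {self.threshold} achieved')
            self.model.stop_training = True  # the training loop checks this flag

def fit(model, callback, accuracies):
    """Toy loop: 'trains' one epoch per accuracy value, then consults the callback."""
    for epoch, acc in enumerate(accuracies):
        callback.on_epoch_end(epoch, logs={'accuracy': acc})
        if model.stop_training:
            return epoch  # stopped early
    return len(accuracies) - 1

model = Model()
stopped_at = fit(model, AccuracyStop(model), [0.90, 0.95, 0.992, 0.995])
print(stopped_at)  # stops at epoch 2, the first with accuracy >= 0.99
```

The key point the card tests is that the callback reaches the model through `self.model`, which Keras wires up before training starts.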







Flashcard 7625074937100

Tags
#deep-learning #keras #lstm #python #sequence
Question
By default, the samples within an epoch are shuffled. This is a good practice when working with [...] neural networks. If you are trying to preserve state across samples, then the order of samples in the training dataset may be important and must be preserved. This can be done by setting the shuffle argument in the fit() function to False.
Answer
Multilayer Perceptron


Parent (intermediate) annotation


Original toplevel document (pdf)

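The `shuffle=False` behaviour described in the card above can be pictured with NumPy index handling: shuffling permutes the sample order each epoch, while `shuffle=False` visits samples in their stored order, which is what state-preserving sequence models need. The helper below is a hypothetical illustration of that ordering, not Keras internals.

```python
import numpy as np

def epoch_order(n_samples, shuffle, rng):
    """Return the index order one epoch would visit (sketch of fit()'s shuffling)."""
    if shuffle:
        return rng.permutation(n_samples)  # order differs per epoch
    return np.arange(n_samples)            # stored order preserved every epoch

rng = np.random.default_rng(0)
print(epoch_order(5, shuffle=False, rng=rng))  # [0 1 2 3 4] every epoch
print(epoch_order(5, shuffle=True, rng=rng))   # some permutation of 0..4
```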







Flashcard 7625078082828

Tags
#bayes #programming #r #statistics
Question
The posterior distribution also shows the uncertainty in that estimated slope, because the distribution shows the relative [...] of values across the continuum.
Answer
credibility


Parent (intermediate) annotation


Original toplevel document (pdf)








Flashcard 7625093025036

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
All four customers in the figure have the same seniority (date of first purchase), recency (date of last purchase), and frequency (number of purchases). However, each of them has a visibly different [...]. A response model relying exclusively on seniority, recency, and frequency would not be able to distinguish between customers who have similar features but different behavioral sequences.
Answer
transaction pattern


Parent (intermediate) annotation


Original toplevel document (pdf)








#feature-engineering #lstm #recurrent-neural-networks #rnn
Feature engineering has been used broadly to refer to multiple aspects of feature creation, extraction, and transformation.


Parent (intermediate) annotation

In machine learning, a feature refers to a variable that describes some aspect of individual data objects (Dong & Liu, 2018). Feature engineering has been used broadly to refer to multiple aspects of feature creation, extraction, and transformation. Essentially, it refers to the process of using domain knowledge to create useful features that can be fed as predictors into a model.

Original toplevel document (pdf)





#deep-learning #keras #lstm #python #sequence
A time window based MLP outperformed the LSTM pure-[autoregression] approach on certain time series prediction benchmarks solvable by looking at a few recent inputs only.


Parent (intermediate) annotation

A time window based MLP outperformed the LSTM pure-[autoregression] approach on certain time series prediction benchmarks solvable by looking at a few recent inputs only. Thus LSTM’s special strength, namely, to learn to remember single events for very long, unknown time periods, was not necessary

Original toplevel document (pdf)





Flashcard 7625099054348

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question
While preprocessing is an important tool to improve model performance, it artificially increases the [...] of the input vector. Also, the resulting binary features can be strongly correlated. Both outcomes make it difficult to tell which action patterns in the underlying consumer histories have a strong impact on the prediction outcome
Answer
dimensionality


Parent (intermediate) annotation


Original toplevel document (pdf)








Flashcard 7625101413644

Tags
#R #debugger #shiny
Question
Unlike breakpoints, [...]() works everywhere, so it’s suitable for use in any code invoked by your Shiny app.
Answer
browser


Parent (intermediate) annotation


Original toplevel document

Debugging shiny applications
The browser() statement is another useful debugging tool. It acts like a breakpoint: when evaluated, it halts execution and enters the debugger. You can add it anywhere an R expression is valid. Unlike breakpoints, browser() works everywhere, so it's suitable for use in any code invoked by your Shiny app. You can also invoke browser() conditionally to create conditional breakpoints; for instance: if (input$bins > 50) browser(). The downside of browser() is that you need to re-run your…







Flashcard 7625108229388

Tags
#tensorflow #tensorflow-certificate
Question
changeable_tensor = tf.Variable([10, 7])

changeable_tensor[0] = 77

Output:
TypeError: 'ResourceVariable' object does not support item assignment


changeable_tensor[0].assign(77)

Output:
<tf.Variable 'UnreadVariable' shape=([...]) dtype=int32, numpy=array([77,  7], dtype=int32)>

Answer
2,


Tensorflow basics
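A NumPy analogue of the card above: a tf.Variable rejects plain item assignment and requires `.assign()`, whereas a NumPy array accepts item assignment directly — either way the end state is `[77, 7]` with shape `(2,)`. NumPy stands in for TensorFlow in this sketch.

```python
import numpy as np

changeable = np.array([10, 7], dtype=np.int32)
changeable[0] = 77  # plain item assignment works on ndarrays (unlike tf.Variable)
print(changeable, changeable.shape)  # [77  7] (2,)
```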







#tensorflow #tensorflow-certificate
tf.ones([10, 7])


<tf.Tensor: shape=(10, 7), dtype=float32, numpy=
array([[1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.]], dtype=float32)>
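`tf.ones` has a direct NumPy counterpart; the sketch below reproduces the shape and dtype of the tensor above (NumPy stands in for TensorFlow here).

```python
import numpy as np

ones = np.ones((10, 7), dtype=np.float32)  # counterpart of tf.ones([10, 7])
print(ones.shape, ones.dtype)              # (10, 7) float32
```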





Flashcard 7625111899404

Tags
#tensorflow #tensorflow-certificate
Question
tf.[...]([10, 7])


<tf.Tensor: shape=(10, 7), dtype=float32, numpy=
array([[1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.],
       [1., 1., 1., 1., 1., 1., 1.]], dtype=float32)>

Answer
ones









#tensorflow #tensorflow-certificate

# Create 4-rank tensor (the same as 4 dimensions)
import tensorflow as tf
import numpy as np

A = tf.constant(np.arange(0, 120), shape=(2, 3, 4, 5))

A

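Rank here means the number of dimensions (`ndim`), and the NumPy analogue of the constant above makes that countable directly; NumPy stands in for TensorFlow in this sketch.

```python
import numpy as np

# Counterpart of tf.constant(np.arange(0, 120), shape=(2, 3, 4, 5))
A = np.arange(0, 120).reshape(2, 3, 4, 5)
print(A.ndim)   # 4 -- the tensor's rank
print(A.shape)  # (2, 3, 4, 5); 2*3*4*5 = 120 elements
```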




Flashcard 7625115307276

Tags
#tensorflow #tensorflow-certificate
Question

# Create 4-rank tensor (the same as 4 [...])

A = tf.constant(np.arange(0, 120), shape=(2, 3, 4, 5))

A

Answer
dimensions









Flashcard 7625116880140

Tags
#tensorflow #tensorflow-certificate
Question

# Create 4-[...] tensor (the same as 4 dimensions)

A = tf.constant(np.arange(0, 120), shape=(2, 3, 4, 5))

A

Answer
rank

