Edited, memorised or added to reading queue on 17-Dec-2025 (Wed)


#recurrent-neural-networks #rnn
We show how the proposed deep learning model improves on established models both in terms of individual-level accuracy and overall cohort-level bias.


Parent (intermediate) annotation

We show how the proposed deep learning model improves on established models both in terms of individual-level accuracy and overall cohort-level bias. It also helps managers in capturing seasonal trends and other forms of purchase dynamics that are important to detect in a timely manner for the purpose of proactive customer-base management.





Flashcard 7779611184396

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question

RNN details

We use a simple RNN architecture with a single LSTM layer and ten-dimensional cell states. The hidden state at the last time-step is combined with binary non-history features to make the final prediction in a logistic layer. Thus, the final prediction of the RNN is linear in the learned and non-history features. The non-history features describe time, weekday, and behavioral gender and are also provided to the baseline methods. Instead of absolute timestamps, the time differences Δ(x_{t−1}, x_t) to the previous inputs x_{t−1} are fed to the RNN at each time-step t. Furthermore, the difference between the last event x_T and the [...] (the session start) is provided to the final prediction layer

Answer
prediction time
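The time-difference encoding and the linear logistic head described in the question can be sketched in plain NumPy. This is a minimal illustration under assumed names (`time_deltas`, `logistic_head`, and the weight layout are made up here); the paper's actual preprocessing and layer code are not shown in the card.

```python
import numpy as np

def time_deltas(timestamps, prediction_time):
    """Turn absolute event timestamps into the inter-event differences
    fed to the RNN at each time-step, plus the gap between the last
    event and the prediction time (the session start)."""
    t = np.asarray(timestamps, dtype=float)
    deltas = np.diff(t, prepend=t[0])   # Δ(x_{t-1}, x_t); first event gets 0
    last_gap = prediction_time - t[-1]  # extra input to the final layer
    return deltas, last_gap

def logistic_head(h_T, non_history, last_gap, w_h, w_f, w_g, b):
    """Final prediction: linear in the last hidden state h_T and the
    binary non-history features, squashed by a sigmoid."""
    z = h_T @ w_h + non_history @ w_f + w_g * last_gap + b
    return 1.0 / (1.0 + np.exp(-z))
```

For example, events at times `[0.0, 2.0, 5.0]` with prediction time `7.0` yield per-step inputs `[0, 2, 3]` and a final-layer gap of `2.0`.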









Flashcard 7779612757260

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question

RNN details

We use a simple RNN architecture with a single LSTM layer and ten-dimensional cell states. The hidden state at the last time-step is combined with binary non-history features to make the final prediction in a logistic layer. Thus, the final prediction of the RNN is linear in the learned and non-history features. The non-history features describe time, weekday, and behavioral gender and are also provided to the baseline methods. Instead of absolute timestamps, the time differences Δ(x_{t−1}, x_t) to the previous inputs x_{t−1} are fed to the RNN at each time-step t. Furthermore, the difference between the last event x_T and the prediction time (the [...]) is provided to the final prediction layer

Answer
session start









Flashcard 7779613805836

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question

RNN details

We use a simple RNN architecture with a single LSTM layer and ten-dimensional cell states. The hidden state at the last time-step is combined with binary non-history features to make the final prediction in a logistic layer. Thus, the final prediction of the RNN is linear in the learned and non-history features. The non-history features describe time, weekday, and behavioral gender and are also provided to the baseline methods. Instead of absolute timestamps, the time differences Δ(x_{t−1}, x_t) to the previous inputs x_{t−1} are fed to the RNN at each time-step t. Furthermore, the [...] between the last event x_T and the prediction time (the session start) is provided to the final prediction layer

Answer
difference









Flashcard 7779614854412

Tags
#has-images #tensorflow #tensorflow-certificate
[unknown IMAGE 7626420784396]
Question

How can we improve the model (at a particular stage of the process)?

# 1. Creating the model: add more layers, increase the number of [...] neurons, change activation functions

Answer
hidden
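The creation-stage knobs in the question (more layers, more hidden neurons, different activation) can be made concrete with a tiny NumPy stand-in for a dense network; the names `mlp_forward` and `n_params` are made up for illustration and are not from the course material.

```python
import numpy as np

def mlp_forward(x, layer_sizes, activation=np.tanh, seed=0):
    """Forward pass through a fully connected net. 'Improving the model
    at creation time' means changing layer_sizes (more layers, wider
    hidden layers) or swapping `activation`."""
    rng = np.random.default_rng(seed)
    h = x
    sizes = [x.shape[-1]] + list(layer_sizes)
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(scale=0.1, size=(n_in, n_out))
        b = np.zeros(n_out)
        h = activation(h @ W + b)
    return h

def n_params(layer_sizes, n_in):
    """Parameter count: widening hidden layers grows capacity fast."""
    sizes = [n_in] + list(layer_sizes)
    return sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))
```

With 3 input features, hidden widths `[10, 1]` give 51 parameters while `[100, 1]` give 501 — the "from smaller model to larger model" progression the workflow recommends.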



Original toplevel document

TfC 01 regression
#### How we can improve model
# 1. Creating model: add more layers, increase numbers of hidden neurons, change activation functions
# 2. Compiling: change optimizer or its parameters (eg. learning rate)
# 3. Fitting: more epochs, more data
### How? # from smaller model to larger model
Evaluating models
Typical workflow: build a model -> fit it -> evaluate -> tweak -> fit -> evaluate -> ...
Building model: experiment
Evaluating model: visualize
What







Flashcard 7779616427276

Tags
#has-images #recurrent-neural-networks #rnn
[unknown IMAGE 7101511240972]
Question
To forecast future customer behavior, our model is trained using individual sequences of past transaction events, i.e., chronological accounts of a customer’s lifetime. The example in Table 2 describes one such customer’s transaction history over [...] consecutive discrete time periods
Answer
seven









#abm #agent-based #machine-learning #model #priority #synergistic-integration
As demonstrated by Torrens et al. [9], the individual behavior in an agent-based model can be machine-learned from samples collected at the individual-agent level.
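The idea of machine-learning an agent's behavior from individual-level samples can be sketched with a toy 1-nearest-neighbour policy in plain Python. This is an illustrative stand-in, not the method of Torrens et al.; `learn_behavior` and `act` are hypothetical names.

```python
import math

def learn_behavior(samples):
    """Memorise observed (state, action) samples from one agent —
    a minimal stand-in for learning that agent's behavioral rule."""
    return list(samples)

def act(model, state):
    """1-nearest-neighbour policy: reproduce the action that was
    observed in the most similar recorded state."""
    best = min(model, key=lambda sa: math.dist(sa[0], state))
    return best[1]
```

Given samples like `[((0, 0), "stay"), ((5, 5), "move")]`, a query state near `(1, 1)` reproduces `"stay"` — the agent rule is inferred from its own recorded behavior rather than hand-coded.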


Parent (intermediate) annotation

As demonstrated by Torrens et al. [9], the individual behavior in an agent-based model can be machine-learned from samples collected at the individual-agent level. In addition, modern ABM techniques can help in analysis through their ability to have adaptive agents in different changing environments [10]. The machine-learning-based inference mode





Flashcard 7779621670156

Tags
#deep-learning #keras #lstm #python #sequence
Question
Normalization requires that you know or are able to accurately estimate the [...] observable values.
Answer
minimum and maximum
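The point of the card — that min-max normalization needs known or well-estimated bounds — can be shown in a short NumPy sketch (the function name is illustrative):

```python
import numpy as np

def minmax_normalize(values, lo=None, hi=None):
    """Scale values to [0, 1] using the (estimated) minimum and maximum
    observable values. If future data can fall outside [lo, hi], the
    normalized output will leave [0, 1] — hence the need for accurate
    estimates of the bounds."""
    v = np.asarray(values, dtype=float)
    lo = v.min() if lo is None else lo
    hi = v.max() if hi is None else hi
    return (v - lo) / (hi - lo)
```

For example, `minmax_normalize([10, 20, 30])` gives `[0.0, 0.5, 1.0]`, while passing explicit domain bounds (`lo=0, hi=50`) keeps the scaling stable across batches.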

