Edited, memorised or added to reading queue on 26-Jan-2026 (Mon)


#abm #agent-based #has-images #machine-learning #model #priority #synergistic-integration

We have a total of four scenarios to which ML can contribute.

1) Scenario 1: Microagent situational-awareness learning.
2) Scenario 2: Microagent behavior interventions.
3) Scenario 3: Macro-level emergence emulator.
4) Scenario 4: Macro-ABM decision making, as shown in Fig. 4.
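As a rough illustration of Scenario 3 (the macro-level emergence emulator), the sketch below trains a regression model as a fast surrogate for a toy agent-based diffusion model. The toy ABM, its single parameter, and every name in the code are hypothetical illustrations, not taken from the source paper.

# Hypothetical sketch of Scenario 3: an ML emulator of a macro-level ABM outcome.
# The toy ABM and all names are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def run_toy_abm(adoption_prob, n_agents=200, n_steps=50, seed=0):
    # Toy diffusion ABM: agents adopt when a random draw falls below
    # adoption_prob scaled by the share of the population already adopted.
    rng = np.random.default_rng(seed)
    adopted = np.zeros(n_agents, dtype=bool)
    adopted[rng.integers(n_agents)] = True            # seed one adopter
    for _ in range(n_steps):
        pressure = adopted.mean()
        adopted |= rng.random(n_agents) < adoption_prob * pressure
    return adopted.mean()                             # macro outcome: final adoption share

# Build a training set by sweeping the ABM's input parameter ...
params = np.linspace(0.05, 0.95, 40)
macro = np.array([run_toy_abm(p, seed=i) for i, p in enumerate(params)])

# ... then fit a cheap emulator that stands in for the expensive ABM.
emulator = RandomForestRegressor(n_estimators=100, random_state=0)
emulator.fit(params.reshape(-1, 1), macro)
print(emulator.predict([[0.5]]))                      # predicted adoption share at p = 0.5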


#deep-learning #keras #lstm #python #sequence

Some examples of sequence classification problems include:

DNA Sequence Classification
Given a DNA sequence of A, C, G, and T values, predict whether the sequence is for a coding or non-coding region.

Anomaly Detection
Given a sequence of observations, predict whether the sequence is anomalous or not.

Sentiment Analysis
Given a sequence of text such as a review or a tweet, predict whether the sentiment of the text is positive or negative.
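A minimal Keras sketch of the sentiment-analysis case above: an LSTM that classifies integer-encoded, padded token sequences as positive or negative. The vocabulary size, sequence length, layer sizes, and dummy data are assumptions for illustration only.

# Minimal sketch: LSTM binary sequence classifier (e.g. positive/negative sentiment).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size, seq_len = 5000, 100                       # assumed vocabulary and padded length

model = Sequential([
    Embedding(vocab_size, 32, input_length=seq_len),  # token ids -> 32-d vectors
    LSTM(64),                                         # summarise the whole sequence
    Dense(1, activation="sigmoid"),                   # probability of the positive class
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# Dummy data standing in for integer-encoded, padded reviews and 0/1 labels.
X = np.random.randint(0, vocab_size, size=(256, seq_len))
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]))                           # class probability for one sequence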


#RNN #ariadne #behaviour #consumer #deep-learning #patterns #priority #recurrent-neural-networks #retail #simulation #synthetic-data
It is interesting to explore deeper structures of the model at the auto-encoder and recursion levels. Clumpiness is another variable which can be studied as an addition to the R, F, and M variables (i.e. RFMC).
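For context on the R, F, M, and clumpiness variables, here is a hypothetical pandas sketch that derives recency, frequency, and monetary value per customer from a transaction log. The column names and the clumpiness proxy (variance of inter-purchase gaps) are assumptions for illustration, not the paper's definitions.

# Hypothetical sketch: deriving RFM (+ a clumpiness proxy) features per customer.
import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "date": pd.to_datetime(["2021-01-02", "2021-01-20", "2021-03-01",
                            "2021-01-10", "2021-02-28"]),
    "amount": [20.0, 35.0, 15.0, 120.0, 60.0],
})
now = tx["date"].max()

def rfmc(g):
    gaps = g["date"].sort_values().diff().dt.days.dropna()
    return pd.Series({
        "recency_days": (now - g["date"].max()).days,        # R
        "frequency": len(g),                                  # F
        "monetary": g["amount"].sum(),                        # M
        "clumpiness": gaps.var() if len(gaps) > 1 else 0.0,   # C (one possible proxy)
    })

features = tx.groupby("customer_id").apply(rfmc)
print(features)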

Flashcard 7792137211148

Tags
#deep-learning #keras #lstm #python #sequence
Question

1.4.1 LSTM Weights

A memory cell has weight parameters for the input and output, as well as an internal state that is built up through exposure to input time steps.

...

Internal State.

The internal state is used in the calculation of the output for [...] step.

Answer
this time
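To make the weight structure concrete, here is a small sketch (assuming the tf.keras API, with illustrative sizes) that builds one LSTM layer and prints its three trainable weight arrays: the input kernel, the recurrent kernel applied to the previous hidden state, and the bias. The internal state itself is not a trainable weight; it is built up across input time steps and used when computing the output for this time step.

# Sketch: inspecting the weight arrays of a Keras LSTM layer (sizes are illustrative).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

units, features, timesteps = 8, 3, 5
model = Sequential([LSTM(units, input_shape=(timesteps, features))])

kernel, recurrent_kernel, bias = model.layers[0].get_weights()
print(kernel.shape)            # (features, 4 * units): input weights for the i, f, c, o gates
print(recurrent_kernel.shape)  # (units, 4 * units): weights applied to the previous hidden state
print(bias.shape)              # (4 * units,): gate biases
# The internal state (cell and hidden state) is not returned by get_weights();
# it accumulates over the input time steps and feeds the output at this time step.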


Flashcard 7792139046156

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #retail #simulation #synthetic-data
Question
Given that every demand planner works on a narrow segment of the item portfolio, there is high variability in the choices that different planners recommend. Additionally, demand planners might not get enough opportunities to discuss their views and insights on their recommendations. Hence, subtle effects like [...] [21] and item affinity remain unaccounted for. Such inefficiencies lead to a gap between consumer needs and item availability, resulting in lost business opportunities in the form of consumer churn, out-of-stock situations, and excess inventory.
Answer
cannibalization


#ML-engineering #ML_in_Action #learning #machine #software-engineering
By embracing the concepts of ML engineering and following the road of effective project work, the path to a useful modeling solution can be shorter, far cheaper, and far more likely to succeed than if you just wing it and hope for the best.

Flashcard 7792148745484

Tags
#deep-learning #has-images #keras #lstm #python #sequence
Question
When stateful LSTM layers are used, you must also define the batch size as part of the input shape in the network definition by setting the batch input shape argument, and the batch size must be a [...] of the number of samples in the training dataset.
Answer
factor
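A minimal sketch of the constraint above, assuming the tf.keras 2.x-style API in which stateful layers take a batch_input_shape argument. All sizes are illustrative; the batch size of 8 is chosen as a factor of the 64 training samples.

# Sketch: stateful LSTM where the batch size must be a factor of the sample count.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

n_samples, timesteps, features, batch_size = 64, 10, 1, 8   # 8 divides 64 exactly

model = Sequential([
    # Stateful layers need the batch size fixed up front via batch_input_shape.
    LSTM(16, stateful=True, batch_input_shape=(batch_size, timesteps, features)),
    Dense(1),
])
model.compile(loss="mse", optimizer="adam")

X = np.random.rand(n_samples, timesteps, features)
y = np.random.rand(n_samples, 1)

for _ in range(3):                                           # manual epochs
    # shuffle=False keeps sample order so the carried state stays meaningful
    model.fit(X, y, batch_size=batch_size, epochs=1, shuffle=False, verbose=0)
    model.reset_states()                                     # clear state between epochs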
