Edited, memorised or added to reading queue

on 13-Jul-2022 (Wed)


#data #synthetic
Traditional methods of synthetic data generation use techniques that do not intend to replicate important statistical properties of the original data. Properties such as the distribution, the patterns, or the correlations between variables are often omitted. Moreover, most existing tools and approaches require a great deal of user-defined rules and do not make use of advanced techniques like Machine Learning or Deep Learning. While Machine Learning is an innovative area of Artificial Intelligence and Computer Science that uses statistical techniques to give computers the ability to learn from data, Deep Learning is a closely related field based on learning data representations, which may prove useful for the task of synthetic data generation.

#abm #agent-based #machine-learning #model #priority #synergistic-integration
Abstract— Agent-based modeling (ABM) involves developing models in which agents make adaptive decisions in a changing environment. Machine-learning (ML) based inference models can improve sequential decision-making by learning agents' behavioral patterns. With the aid of ML, this emerging area can extend traditional agent-based schemes that hardcode agents' behavioral rules into an adaptive model. Even though there are plenty of studies that apply ML in ABMs, the generalized applicable scenarios, frameworks, and procedures for implementations are not well addressed. In this article, we provide a comprehensive review of applying ML in ABM based on four major scenarios, i.e., microagent-level situational awareness learning, microagent-level behavior intervention, macro-ABM-level emulator, and sequential decision-making. For these four scenarios, the related algorithms, frameworks, procedures of implementations, and multidisciplinary applications are thoroughly investigated. We also discuss how ML can improve prediction in ABMs by trading off the variance and bias and how ML can improve the sequential decision-making of microagents and macrolevel policymakers via a mechanism of reinforced behavioral intervention. At the end of this article, future perspectives of applying ML in ABMs are discussed with respect to data acquisition and quality issues, the possible solution of solving the convergence problem of reinforcement learning, interpretable ML applications, and bounded rationality of ABM.

Index Terms— Agent-based modeling (ABM), behavioral intervention, machine learning (ML), reinforcement learning (RL).

#abm #agent-based-model #data #simudyne #synthetic

Simulation models

This article outlines mechanisms to generate synthetic market prices. Agents that trade with varying behaviors are used to simulate alternative price paths of assets to create a variety of ‘what-if’ scenarios. The synthetic market prices are then compared to the real market prices using statistical techniques.


#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
While preprocessing is an important tool to improve model performance, it artificially increases the dimensionality of the input vector. Also, the resulting binary features can be strongly correlated. Both outcomes make it difficult to tell which action patterns in the underlying consumer histories have a strong impact on the prediction outcome.

#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data

RNN details

We use a simple RNN architecture with a single LSTM layer and ten-dimensional cell states. The hidden state at the last time-step is combined with binary non-history features to make the final prediction in a logistic layer. Thus, the final prediction of the RNN is linear in the learned and non-history features. The non-history features describe time, weekday, and behavioural gender and are also provided to the baseline methods. Instead of absolute timestamps, the time differences Δ(x_{t−1}, x_t) to the previous inputs x_{t−1} are fed to the RNN at each time-step t. Furthermore, the difference between the last event x_T and the prediction time (the session start) is provided to the final prediction layer.
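
A minimal Keras sketch of this architecture; the feature dimensions, variable names, and layer sizes other than the 10-dimensional LSTM are assumptions, not from the paper:

import numpy as np
from keras.models import Model
from keras.layers import Input, LSTM, Dense, Concatenate

d, k = 8, 5  # assumed: d event features (incl. time delta), k non-history features

events = Input(shape=(None, d))         # variable-length sequence of consumer actions
non_hist = Input(shape=(k,))            # time, weekday, behavioural gender
h = LSTM(10)(events)                    # single LSTM layer, 10-dim cell state
z = Concatenate()([h, non_hist])        # combine last hidden state with non-history features
p = Dense(1, activation='sigmoid')(z)   # logistic prediction layer, linear in its inputs
model = Model([events, non_hist], p)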


#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
All models are trained to minimize negative log-likelihood (NLL).
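
For a binary target, minimizing NLL corresponds to the binary cross-entropy loss in Keras; a one-line sketch, assuming the model above (the optimizer choice is an assumption):

model.compile(loss='binary_crossentropy', optimizer='adam')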

#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
As machine learning models become ubiquitous in our everyday lives, demand for explaining their predictions is growing [5, 16, 14]. In the context of behaviour prediction, we want to understand how previous consumer actions influence model predictions: How does order probability change when products are put into the cart? Does it decrease significantly if a consumer does not return to a webshop for two days? Answers to these questions are consumer-specific; they depend on the complete consumer history.

Flashcard 7104009997580

Tags
#abm #agent-based #machine-learning #model #priority #synergistic-integration
Question
Proper modeling of individual behaviors and the interactions among individuals are essential for the [...] approach, which leads to one of the most important topics in this approach to modeling methodology—the agent-based modeling (ABM).
Answer
bottom-up


Parent (intermediate) annotation

Proper modeling of individual behaviors and the interactions among individuals are essential for the bottom-up approach, which leads to one of the most important topics in bottom-up modeling methodology—the agent-based modeling (ABM).

#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Consumer behaviour in e-commerce can be described by sequences of interactions with a webshop. We show that recurrent neural networks (RNNs) are a natural fit for modelling and predicting consumer behaviour.


Parent (intermediate) annotation

Abstract Consumer behaviour in e-commerce can be described by sequences of interactions with a webshop. We show that recurrent neural networks (RNNs) are a natural fit for modelling and predicting consumer behaviour. In multiple aspects, RNNs offer advantages over existing methods that are relevant for real-world production systems. Applying RNNs directly to sequences of consumer actions yields the

Flashcard 7104018648332

Tags
#abm #agent-based #machine-learning #model #priority
Question

Sugarscape model:

In order to find the rules for agent behaviour, we proceed as follows. In the Initialization phase, agents are positioned randomly on the grid. Their input, i.e. the [...] they have access to is defined: The agents can observe the amount of sugar on their patch and the amount of sugar on each of the 4 neighbouring patches. In addition, they can also see the number of agents on their current patch and on each neighbouring patch. The score, which is used to determine if a decision was good or not, is the amount of sugar they gathered in their turn. The decision the agent faces is which of the 5 possible actions it should perform: remaining stationary, moving north, moving south, moving east or moving west

Answer
information


Parent (intermediate) annotation

Sugarscape model: In order to find the rules for agent behaviour, we proceed as follows. In the Initialization phase, agents are positioned randomly on the grid. Their input, i.e. the information they have access to is defined: The agents can observe the amount of sugar on their patch and the amount of sugar on each of the 4 neighbouring patches. In addition, they can also see t

#abm #agent-based #machine-learning #model #priority #synergistic-integration
With the aid of ML, this emerging area can extend traditional agent-based schemes that hardcode agents’ behavioral rules into an adaptive model.


Parent (intermediate) annotation

in which agents make adaptive decisions in a changing environment. Machine-learning (ML) based inference models can improve sequential decision-making by learning agents’ behavioural patterns. With the aid of ML, this emerging area can extend traditional agent-based schemes that hardcode agents’ behavioral rules into an adaptive model. Even though there are plenty of studies that apply ML in ABMs, the generalized applicable scenarios, frameworks, and procedures for implementations are not well addressed. In this artic

Flashcard 7104023366924

Tags
#causality #statistics
Question
Causal graphs are special in that we additionally assume that the edges have [...] (causal edges assumption, Assumption 3.3)
Answer
causal meaning


Parent (intermediate) annotation

Causal graphs are special in that we additionally assume that the edges have causal meaning (causal edges assumption, Assumption 3.3)

Flashcard 7104024939788

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question
We apply RNNs [...(how? in what way?)] to series of captured consumer actions. RNNs maintain a latent state that is updated with each action. RNNs are trained to detect and preserve the predictive signals in the consumer histories.
Answer
directly


Parent (intermediate) annotation

We apply RNNs directly to series of captured consumer actions. RNNs maintain a latent state that is updated with each action. RNNs are trained to detect and preserve the predictive signals in the consumer his

Flashcard 7104028347660

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question
All models are trained to minimize [...] (NLL).
Answer
negative log-likelihood


Parent (intermediate) annotation

All models are trained to minimize negative log-likelihood (NLL).

Flashcard 7104030182668

Tags
#causality #statistics
Question
More generally, the potential outcome [...] denotes what your outcome would be, if you were to take treatment 𝑡
Answer
𝑌(𝑡)



Flashcard 7104033328396

Tags
#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Question
While [...] is an important tool to improve model performance, it artificially increases the dimensionality of the input vector. Also, the resulting binary features can be strongly correlated. Both outcomes make it difficult to tell which action patterns in the underlying consumer histories have a strong impact on the prediction outcome
Answer
preprocessing (or feature engineering)


Parent (intermediate) annotation

While preprocessing is an important tool to improve model performance, it artificially increases the dimensionality of the input vector. Also, the resulting binary features can be strongly correlated. Both

#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Vector-based machine learning methods like logistic regression take vectors f = (f_1, …, f_n) of fixed length n as inputs. Applying these methods to consumer histories of arbitrary length requires feature engineering.


Parent (intermediate) annotation

Vector-based machine learning methods like logistic regression take vectors f = (f_1, …, f_n) of fixed length n as inputs. Applying these methods to consumer histories of arbitrary length requires feature engineering: a fixed set of identifiers f_i has to be designed to capture the essence of an individual consumer history. Only signals that are encoded in the feature vector can be picked up by the

#deep-learning #keras #lstm #python #sequence
Technically, in time series forecasting terminology the current time (t) and future times (t+1, ..., t+n) are forecast times and past observations (t-1, ..., t-n) are used to make forecasts. We can see how positive and negative shifts can be used to create a new DataFrame from a time series with sequences of input and output patterns for a supervised learning problem. This permits not only classical X -> y prediction, but also X -> Y where both input and output can be sequences. Further, the shift function also works on so-called multivariate time series problems. That is where instead of having one set of observations for a time series, we have multiple (e.g. temperature and pressure). All variates in the time series can be shifted forward or backward to create multivariate input and output sequences.
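
A minimal pandas sketch of this framing (the toy values and column names are assumed for illustration):

import pandas as pd

df = pd.DataFrame({'t': [1, 2, 3, 4, 5]})
df['t-1'] = df['t'].shift(1)    # positive shift: past observation becomes input X
df['t+1'] = df['t'].shift(-1)   # negative shift: future value becomes target y
print(df)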

#deep-learning #keras #lstm #python #sequence

Chapter 4 How to Develop LSTMs in Keras

4.0.1 Lesson Goal

The goal of this lesson is to understand how to define, fit, and evaluate LSTM models using the Keras deep learning library in Python (a runnable preview follows the list below).

After completing this lesson, you will know:
- How to define an LSTM model, including how to reshape your data for the required 3D input.
- How to fit and evaluate your LSTM model and use it to make predictions on new data.
- How to take fine-grained control over the internal state in the model and when it is reset.
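
Putting those pieces together, a minimal end-to-end sketch; the data and hyperparameters are assumed toy values, not from the book:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

X = np.random.rand(100, 5, 1)   # 100 samples, 5 time steps, 1 feature
y = np.random.rand(100, 1)

model = Sequential()                                   # define
model.add(LSTM(10, input_shape=(5, 1)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=2, batch_size=32, verbose=0)    # fit
loss = model.evaluate(X, y, verbose=0)                 # evaluate
yhat = model.predict(X[:1])                            # predict on new data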


#deep-learning #keras #lstm #python #sequence

For example, we can define an LSTM hidden layer with 2 memory cells followed by a Dense output layer with 1 neuron as follows:

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(2))
model.add(Dense(1))


#deep-learning #keras #lstm #python #sequence

The first hidden layer in the network must define the number of inputs to expect, e.g. the shape of the input layer.

Input must be three-dimensional, comprised of samples, time steps, and features in that order.

Samples. These are the rows in your data. One sample may be one sequence.

Time steps. These are the past observations for a feature, such as lag variables.

Features. These are columns in your data.
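
For instance, a minimal sketch (toy data assumed) of reshaping 2D rows into this 3D form:

import numpy as np

data = np.random.rand(10, 2)      # 10 rows, each with 2 lag observations of one feature
data = data.reshape((10, 2, 1))   # -> (samples, time steps, features)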


#deep-learning #has-images #keras #lstm #python #sequence
You can specify the input_shape argument that expects a tuple containing the number of time steps and the number of features. For example, if we had two time steps and one feature for a univariate sequence with two lag observations per row, it would be specified as in Listing 4.5.
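
Listing 4.5 itself is not visible in this extract; presumably the specification looks like this minimal sketch (the number of memory cells is an assumption):

from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(5, input_shape=(2, 1)))   # 2 time steps, 1 feature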

#deep-learning #keras #lstm #python #sequence

The choice of activation function is most important for the output layer as it will define the format that predictions will take. For example, below are some common predictive modeling problem types and the structure and standard activation function that you can use in the output layer (a short code sketch follows the list):

Regression: Linear activation function, or linear, and the number of neurons matching the number of outputs. This is the default activation function used for neurons in the Dense layer.

Binary Classification (2 class): Logistic activation function, or sigmoid, and one neuron in the output layer.

Multiclass Classification (> 2 class): Softmax activation function, or softmax, and one output neuron per class value, assuming a one hot encoded output pattern.
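
A minimal sketch of the three output-layer configurations (the unit counts are illustrative assumptions):

from keras.layers import Dense

Dense(1, activation='linear')     # regression: one linear neuron per output value
Dense(1, activation='sigmoid')    # binary classification: one sigmoid neuron
Dense(10, activation='softmax')   # multiclass: one neuron per one-hot class (10 assumed)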


#deep-learning #keras #lstm #python #sequence
The backpropagation algorithm requires that the network be trained for a specified number of epochs or exposures to all sequences in the training dataset. Each epoch can be partitioned into groups of input-output pattern pairs called batches. This defines the number of patterns that the network is exposed to before the weights are updated within an epoch. It is also an efficiency optimization, ensuring that not too many input patterns are loaded into memory at a time.
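
In Keras these correspond to the epochs and batch_size arguments of fit(); the values below are illustrative only, assuming the model and data sketched earlier:

model.fit(X, y, epochs=100, batch_size=32)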

#deep-learning #keras #lstm #python #sequence
Epoch: One pass through all samples in the training dataset and updating the network weights. LSTMs may be trained for tens, hundreds, or thousands of epochs.

#deep-learning #keras #lstm #python #sequence
Batch: A pass through a subset of samples in the training dataset after which the network weights are updated. One epoch is comprised of one or more batches.

#deep-learning #keras #lstm #python #sequence
Below are some common configurations for the batch size:

  • batch size=1: Weights are updated after each sample; the procedure is called stochastic gradient descent.
  • batch size=32: Weights are updated after a specified number of samples; the procedure is called mini-batch gradient descent. Common values are 32, 64, and 128, tailored to the desired efficiency and rate of model updates. If the batch size is not a factor of the number of samples in one epoch, then an additional batch of the leftover samples is run at the end of the epoch.
  • batch size=n: Where n is the number of samples in the training dataset. Weights are updated at the end of each epoch; the procedure is called batch gradient descent.

#deep-learning #keras #lstm #python #sequence
Mini-batch gradient descent with a batch size of 32 is a common configuration for LSTMs.

#deep-learning #keras #lstm #python #sequence
The predictions will be returned in the format provided by the output layer of the network. In the case of a regression problem, these predictions may be in the format of the problem directly, provided by a linear activation function. For a binary classification problem, the predictions may be an array of probabilities for the first class that can be converted to a 1 or 0 by rounding. For a multiclass classification problem, the results may be in the form of an array of probabilities (assuming a one hot encoded output variable) that may need to be converted to a single class output prediction using the argmax() NumPy function. Alternately, for classification problems, we can use the predict_classes() function that will automatically convert uncrisp predictions to crisp integer class values.
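
A sketch of converting raw predictions, assuming a fitted model and new inputs Xnew (a hypothetical array shaped like the training data):

import numpy as np

probs = model.predict(Xnew)            # probabilities, shaped by the output layer
classes = np.argmax(probs, axis=1)     # multiclass: probabilities -> integer class index
binary = (probs > 0.5).astype(int)     # binary case: round probabilities to 0/1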

#deep-learning #has-images #keras #lstm #python #sequence
Keras provides flexibility to decouple the resetting of internal state from updates to network weights by defining an LSTM layer as stateful. This can be done by setting the stateful argument on the LSTM layer to True. When stateful LSTM layers are used, you must also define the batch size as part of the input shape in the definition of the network by setting the batch_input_shape argument, and the batch size must be a factor of the number of samples in the training dataset. The batch_input_shape argument requires a 3-dimensional tuple defined as batch size, time steps, and features. For example, we can define a stateful LSTM to be trained on a training dataset with 100 samples, a batch size of 10, and 5 time steps for 1 feature, as follows.
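
The original listing is not visible in this extract; the following is a hedged reconstruction, with the number of memory cells assumed:

from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
# 100 training samples, batch size 10 (a factor of 100), 5 time steps, 1 feature
model.add(LSTM(10, batch_input_shape=(10, 5, 1), stateful=True))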

#deep-learning #keras #lstm #python #sequence
The internal state in LSTM layers is also accumulated when evaluating a network and when making predictions. Therefore, if you are using a stateful LSTM, you must reset state after evaluating the network on a validation dataset or after making predictions.
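
For example, assuming the stateful model above and hypothetical held-out arrays X_val, y_val, X_test:

model.evaluate(X_val, y_val, batch_size=10)
model.reset_states()    # clear state accumulated during evaluation

yhat = model.predict(X_test, batch_size=10)
model.reset_states()    # clear state accumulated during prediction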

#deep-learning #keras #lstm #python #sequence
By default, the samples within an epoch are shuffled. This is a good practice when working with Multilayer Perceptron neural networks. If you are trying to preserve state across samples, then the order of samples in the training dataset may be important and must be preserved. This can be done by setting the shuffle argument in the fit() function to False.
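
For example, with the model and data as assumed above:

model.fit(X, y, epochs=1, batch_size=10, shuffle=False)   # preserve sample order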

#deep-learning #keras #lstm #python #sequence

To make this more concrete, below are a 3 common examples for managing state:

  • A prediction is made at the end of each sequence and sequences are independent. State should be reset after each sequence by setting the batch size to 1.
  • A long sequence was split into multiple subsequences (many samples each with many time steps). State should be reset after the network has been exposed to the entire sequence by making the LSTM stateful, turning off the shuffling of subsequences, and resetting the state after each epoch.
  • A very long sequence was split into multiple subsequences (many samples each with many time steps). Training efficiency is more important than the influence of long-term internal state and a batch size of 128 samples was used, after which network weights are updated and state reset.

I would encourage you to brainstorm many different framings of your sequence prediction problem and network configurations, test and select those models that appear most promising with regard to prediction error.
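
A minimal sketch of the second case, a manual epoch loop for a stateful LSTM; it assumes a model built with stateful=True and data X, y shaped to match its batch_input_shape:

for epoch in range(100):
    model.fit(X, y, epochs=1, batch_size=10, shuffle=False)
    model.reset_states()   # reset only after the whole sequence has been seen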


#deep-learning #keras #lstm #python #sequence
4.7.3 Tips for LSTM Input

This section lists some final tips to help you when preparing your input data for LSTMs.

  • The LSTM input layer must be 3D. The meaning of the 3 input dimensions is: samples, time steps, and features.
  • The LSTM input layer is defined by the input_shape argument on the first hidden layer.
  • The input_shape argument takes a tuple of two values that define the number of time steps and features.
  • The number of samples is assumed to be 1 or more.
  • The reshape() function on NumPy arrays can be used to reshape your 1D or 2D data to be 3D.
  • The reshape() function takes a tuple as an argument that defines the new shape.
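
For example, with an assumed toy series:

import numpy as np

series = np.array([1, 2, 3, 4, 5, 6])   # 1D data
X = series.reshape((1, 6, 1))           # 1 sample, 6 time steps, 1 feature
X2 = series.reshape((3, 2, 1))          # or: 3 samples, 2 time steps, 1 feature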

#deep-learning #has-images #keras #lstm #python #sequence
LSTMs work by learning a function f(...) that maps input sequence values (X) onto output sequence values (y).

#deep-learning #keras #lstm #python #sequence
The learned mapping function is static and may be thought of as a program that takes input variables and uses internal variables. Internal variables are represented by an internal state maintained by the network and built up or accumulated over each value in the input sequence. The static mapping function may be defined with a different number of inputs or outputs. Understanding this important detail is the focus of this lesson.

#deep-learning #keras #lstm #python #sequence
If the number of input and output time steps vary, then an Encoder-Decoder architecture can be used. The input time steps are mapped to a fixed-size internal representation of the sequence, then this vector is used as input to produce each time step in the output sequence.
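
A minimal sketch of this pattern in Keras; the layer sizes and sequence lengths are assumptions:

from keras.models import Sequential
from keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

n_in, n_out = 5, 3   # assumed input/output sequence lengths

model = Sequential()
model.add(LSTM(32, input_shape=(n_in, 1)))    # encoder: input -> fixed-size vector
model.add(RepeatVector(n_out))                # repeat that vector for each output step
model.add(LSTM(32, return_sequences=True))    # decoder: produce an output sequence
model.add(TimeDistributed(Dense(1)))          # one prediction per output time step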

___ are incredibly important as biological catalysts.