Edited, memorised or added to reading queue on 27-Jul-2022 (Wed)


#recurrent-neural-networks #rnn
softmax, and the output of the softmax layer at any given time step t is a k-tuple giving the probability distribution across the k neurons of the output layer. We set the number of neurons k in the softmax layer to reflect the transaction counts observed across all individuals in the training data: as with any "forward-looking" approach, the model can only learn from events that are observed at some point during estimation. If, for example, individuals in the calibration period only make between zero and three transactions during any of the discrete time periods, then a softmax layer with four neurons is sufficient: the neurons' respective outputs represent the inferred probability of zero, one, two and three transactions.

With each vector read as input, the model's training objective is to predict the target variable, which in this self-supervised training setup is just the input variable shifted by a single time step. Using the example from Table 2, given the sequence of input vectors starting with the first week of January, i.e. [1,January,1,F,0], [0,January,2,F,0], [1,January,3,F,1], ..., we train the model to output the target sequence 0, 1, 1, ..., equal to the rightmost column in Table 2. With each input vector processed by the network, the internal memory component is trained to update a real-valued cell state vector that reflects the sequence of events thus far. We estimate the model parameters by minimizing the stochastic mini-batch error between the predicted output and the actual target values.

At prediction time, we fix the model parameters (the weights and biases between the individual neurons in the deep neural network), but the cell state vector built into the structure of the LSTM "memory" component is nonetheless updated at each step with parts of the latest input, which helps the model capture very long-term transaction patterns. Each prediction is generated by drawing a sample from the multinomial output distribution calculated by the bottom network layer. Our model therefore does not produce point or interval estimates; each output is a simulated draw. Each time a draw from this multinomial distribution is made, the observation is fed back into the model as the new transaction-variable input to generate the prediction for the following time step, and so on, until we have created a sequence of predicted time steps of the desired length. This so-called autoregressive mechanism, in which an output value always becomes the new input, is illustrated in Fig. 2 by the dotted arrow bending from the output layer back to the input.

Fig. 2 also shows that we first feed each input into a dedicated embedding layer (Mikolov, Sutskever, Chen, Corrado, & Dean, 2013). Using embeddings is not critical to our approach, but by creating efficient, dense (real-valued) vector representations of all variables it already serves to separate useful signals from noise and to condense the information even before it reaches the memory component (see also Chamberlain, Cardoso, et al. (2017) for a similar approach). It should be highlighted that this setup of inputs with associated embeddings is completely flexible and allows for the inclusion of any time-varying context or customer-specific static variables by simply adding more inputs together with their respective embedding layers.
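
To make the mechanics concrete, here is a minimal sketch of such a model. PyTorch is assumed purely for illustration; the covariate set [count, month, week, gender, holiday], the layer sizes, and all names below are assumptions mirroring the Table 2 example, not the authors' exact specification.

import torch
import torch.nn as nn

K = 4  # counts 0..3 observed during calibration, hence four softmax neurons

class TransactionLSTM(nn.Module):
    def __init__(self, emb_dim=8, hidden=64):
        super().__init__()
        # one dedicated embedding layer per input, as in Fig. 2
        self.embs = nn.ModuleDict({
            "count":   nn.Embedding(K, emb_dim),
            "month":   nn.Embedding(12, emb_dim),
            "week":    nn.Embedding(5, emb_dim),
            "gender":  nn.Embedding(2, emb_dim),
            "holiday": nn.Embedding(2, emb_dim),
        })
        self.lstm = nn.LSTM(len(self.embs) * emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, K)  # the softmax layer with k neurons

    def forward(self, inputs, state=None):
        # concatenate the dense embedded representations of all inputs
        x = torch.cat([self.embs[n](inputs[n]) for n in self.embs], dim=-1)
        out, state = self.lstm(x, state)  # the cell state is the "memory"
        return self.head(out), state      # logits over the K transaction counts

# self-supervised training: the target is the count sequence shifted by one step
model = TransactionLSTM()
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

B, T = 32, 52  # dummy mini-batch: 32 customers, 52 weekly time steps
card = {"count": K, "month": 12, "week": 5, "gender": 2, "holiday": 2}
seqs = {name: torch.randint(0, n, (B, T)) for name, n in card.items()}
inputs = {name: s[:, :-1] for name, s in seqs.items()}
targets = seqs["count"][:, 1:]

logits, _ = model(inputs)
loss = loss_fn(logits.reshape(-1, K), targets.reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()

# autoregressive prediction: each multinomial draw becomes the next count input
@torch.no_grad()
def simulate(model, seed_count, future_cov, steps):
    state, path, count = None, [], seed_count  # seed_count has shape (1, 1)
    for t in range(steps):
        step = {"count": count,
                **{name: v[:, t:t + 1] for name, v in future_cov.items()}}
        logits, state = model(step, state)
        probs = torch.softmax(logits[:, -1], dim=-1)
        count = torch.multinomial(probs, 1)  # a simulated draw, not a point estimate
        path.append(int(count))
    return path

future = {name: torch.randint(0, n, (1, 8))
          for name, n in card.items() if name != "count"}
print(simulate(model, torch.zeros(1, 1, dtype=torch.long), future, steps=8))

Note how the four-neuron head matches the zero-to-three counts observed during calibration; a data set with richer transaction activity would simply get a wider softmax layer.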

#abm #agent-based #priority #rooftop-solar #simulation #synthetic-data
agent-based models are often not developed explicitly for prediction, and are generally not validated as such. We therefore present a novel data-driven agent-based modeling framework, in which an individual behavior model is learned by machine learning techniques, deployed in multi-agent systems, and validated using a holdout sequence of collective adoption decisions.
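
As a minimal sketch of that pipeline (all data, features, and model choices below are illustrative assumptions, not the authors' implementation): fit an individual-level adoption model with a standard ML classifier, deploy it agent by agent in a simple simulation, and validate the simulated aggregate adoption path against the holdout sequence.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_agents, train_steps, test_steps = 500, 24, 12
income = rng.normal(size=n_agents)  # illustrative static agent attribute

# learn the individual behavior model from observed adoption decisions
X_rows, y_rows = [], []
adopted = np.zeros(n_agents, dtype=bool)
for t in range(train_steps):
    feats = np.column_stack([income, np.full(n_agents, adopted.mean())])
    p_true = 1.0 / (1.0 + np.exp(-(-4.0 + income + 6.0 * adopted.mean())))  # toy ground truth
    new = (~adopted) & (rng.random(n_agents) < p_true)
    X_rows.append(feats[~adopted]); y_rows.append(new[~adopted])
    adopted |= new
clf = LogisticRegression().fit(np.vstack(X_rows), np.concatenate(y_rows))

# deploy the learned behavior in a multi-agent simulation ...
sim_adopted = adopted.copy()
forecast = []
for t in range(test_steps):
    feats = np.column_stack([income, np.full(n_agents, sim_adopted.mean())])
    p_hat = clf.predict_proba(feats)[:, 1]
    sim_adopted |= (~sim_adopted) & (rng.random(n_agents) < p_hat)
    forecast.append(int(sim_adopted.sum()))

# ... and validate against the held-out sequence of collective adoption counts
print(forecast)

Because the peer-adoption share feeds back into each agent's decision, the validation target is genuinely collective: errors in the learned individual behavior compound through the interaction structure.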


#recurrent-neural-networks #rnn
The proposed approach is agnostic about time-varying or time-invariant covariates: instead of adapting the data to a model, our model adapts to the data and can simply be left to leverage useful signals automatically, without the need to change the model architecture or training procedure. While the incorporation of covariates is in principle possible with so-called "scoring" or regression-like models and, to a certain extent, with advanced probability models as well, our approach comes with another advantage.
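
A short sketch of what that flexibility looks like in code (names and sizes are illustrative, and this assumes the embedding-per-input design sketched earlier): because the set of inputs is data rather than architecture, adding a covariate amounts to adding one more named embedding.

import torch
import torch.nn as nn

class FlexibleInputModel(nn.Module):
    def __init__(self, cardinalities, emb_dim=8, hidden=64, k=4):
        super().__init__()
        # one embedding per named input; the dict of cardinalities is the only spec
        self.embs = nn.ModuleDict(
            {name: nn.Embedding(n, emb_dim) for name, n in cardinalities.items()})
        self.lstm = nn.LSTM(len(self.embs) * emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, k)

    def forward(self, inputs, state=None):
        x = torch.cat([self.embs[n](inputs[n]) for n in self.embs], dim=-1)
        out, state = self.lstm(x, state)
        return self.head(out), state

base  = FlexibleInputModel({"count": 4, "month": 12})
wider = FlexibleInputModel({"count": 4, "month": 12, "promo": 2})  # hypothetical new covariate

batch = {"count": torch.randint(0, 4, (2, 10)), "month": torch.randint(0, 12, (2, 10))}
logits, _ = base(batch)  # the training loop is identical for base and wider

The training procedure never refers to individual covariates by name, which is exactly the property claimed above.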




Flashcard 7107873475852

Tags
#GAN #data #sequential #synthetic
Question
In [...] data, information can be spread through many rows, as with credit card transactions, and preservation of the correlations between rows (the events) and columns (the variables) is key
Answer
sequential








Flashcard 7107875835148

Tags
#causality #statistics
Question

Definition 3.3 (blocked path) A path between nodes 𝑋 and 𝑌 is blocked by a ([...] empty) conditioning set 𝑍 if either of the following is true:

1. Along the path, there is a chain ··· → 𝑊 → ··· or a fork ··· ← 𝑊 → ···, where 𝑊 is conditioned on (𝑊 ∈ 𝑍).

2. There is a collider 𝑊 on the path that is not conditioned on (𝑊 ∉ 𝑍) and none of its descendants are conditioned on (de(𝑊) ⊈ 𝑍).

Answer
potentially
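
A small helper makes the two rules mechanical (an illustrative sketch, not code from the source; rule 2's "none of its descendants are conditioned on" is implemented as a disjointness check):

def path_is_blocked(nodes, arrows, Z, de):
    """nodes = [X, W1, ..., Y]; arrows[i] in {'->', '<-'} is the direction of
    the edge between nodes[i] and nodes[i+1]; Z is the conditioning set;
    de maps each node to the set of its descendants."""
    for i in range(1, len(nodes) - 1):
        w = nodes[i]
        is_collider = arrows[i - 1] == '->' and arrows[i] == '<-'
        if not is_collider and w in Z:
            return True  # rule 1: conditioned-on chain or fork node
        if is_collider and w not in Z and de.get(w, set()).isdisjoint(Z):
            return True  # rule 2: unconditioned collider, no conditioned descendant
    return False

# X -> W <- Y is blocked by the empty set (unconditioned collider) ...
print(path_is_blocked(['X', 'W', 'Y'], ['->', '<-'], set(), {'W': set()}))   # True
# ... but unblocked once the collider itself is conditioned on.
print(path_is_blocked(['X', 'W', 'Y'], ['->', '<-'], {'W'}, {'W': set()}))  # False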








Flashcard 7107879505164

Tags
#abm #agent-based #priority #rooftop-solar #simulation #synthetic-data
Question
agent-based models are often not developed explicitly for prediction, and are generally not validated as such. We therefore present a novel data-driven agent-based modeling framework, in which an [...] behavior model is learned by machine learning techniques, deployed in multi-agent systems, and validated using a holdout sequence of collective adoption decisions.
Answer
individual








Flashcard 7107881340172

Tags
#DAG #causal #inference
Question
we focus on the identification and estimation of causal effects in populations, that is, numerical quantities that measure changes in the distribution of an outcome under different [...].
Answer
interventions
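
A canonical example of such a quantity (a standard formula in this literature, not quoted from the source) is the average treatment effect, which contrasts the expected outcome under two interventions on a treatment T: ATE = E[Y | do(T = 1)] − E[Y | do(T = 0)].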
