Edited, memorised or added to reading queue on 14-Jul-2022 (Thu)


#causality #statistics
The minimal building blocks of DAGs consist of chains, forks, immoralities, two unconnected nodes, and two connected nodes.
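A minimal sketch of these five building blocks, assuming Python with the networkx library (the library choice and the node names are illustrative additions, not from the text):

    import networkx as nx

    chain = nx.DiGraph([("X", "Y"), ("Y", "Z")])       # X -> Y -> Z
    fork = nx.DiGraph([("Y", "X"), ("Y", "Z")])        # X <- Y -> Z
    immorality = nx.DiGraph([("X", "Y"), ("Z", "Y")])  # X -> Y <- Z (Y is a collider)

    unconnected = nx.DiGraph()                         # two nodes, no edge
    unconnected.add_nodes_from(["X", "Y"])

    connected = nx.DiGraph([("X", "Y")])               # two nodes, one edge

    # every building block is itself a DAG
    for name, g in [("chain", chain), ("fork", fork), ("immorality", immorality)]:
        assert nx.is_directed_acyclic_graph(g), name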


Parent (intermediate) annotation

…the flow of association and causation in DAGs. We can understand this flow in general DAGs by understanding the flow in the minimal building blocks of graphs. The minimal building blocks of DAGs consist of chains (Figure 3.9a), forks (Figure 3.9b), immoralities (Figure 3.9c), two unconnected nodes (Figure 3.10), and two connected nodes (Figure 3.11).

Original toplevel document (pdf)





#abm #agent-based #machine-learning #model #priority #synergistic-integration
3) Semisupervised Learning: Semisupervised learning aims to learn a better predictor than labeled data alone would allow. For a supervised learning algorithm, labels are available for all observations in the dataset (i.e., the training data are completely labeled), whereas an unsupervised learning algorithm requires no labels for the observations. Semisupervised learning falls between supervised and unsupervised learning. It is an approach that combines a small amount of labeled data with a large amount of unlabeled data during training, useful when the cost of labeling would render large, fully labeled training sets infeasible while the acquisition of unlabeled data remains relatively inexpensive. Typical categories of semisupervised learning algorithms include the generative model method, the low-density separation method, the graph-based method, and the heuristic method. Some popular algorithms for semisupervised learning are summarized in Table S1 in the Supplementary Material.
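To make the graph-based category concrete, here is a hedged sketch using scikit-learn's LabelPropagation; the dataset, the 50-label budget, and the random seed are illustrative choices, not from the text:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.semi_supervised import LabelPropagation

    X, y = load_digits(return_X_y=True)

    # pretend only 50 observations are labeled; -1 marks unlabeled points
    rng = np.random.default_rng(0)
    y_train = np.full_like(y, -1)
    labeled = rng.choice(len(y), size=50, replace=False)
    y_train[labeled] = y[labeled]

    model = LabelPropagation()
    model.fit(X, y_train)  # learns from 50 labels plus the unlabeled structure
    print("accuracy on all points:", model.score(X, y))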

pdf





#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
RNNs can be applied to predict future consumer behavior in regression and classification settings, for example, to predict interest in fashion brands or consumer lifetime value. We focus on predicting the probability $P(o_u \mid x^u_1, \dots, x^u_T)$ that a consumer $u$ places an order $o_u$, which we model as a binary classification problem. For instance, we could be interested in orders in general or in orders of specific products. The resulting probability estimates can be used in a recommender system to deliver appropriate product recommendations and webshop contents.
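A minimal sketch of this binary classification setup, assuming Keras; the history length T, the per-event feature size F, and the layer sizes are assumptions for illustration, not values from the paper:

    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    T, F = 50, 32  # consumer history length and per-event feature size (assumed)

    model = Sequential()
    model.add(LSTM(64, input_shape=(T, F)))    # summarizes the event sequence x_1 .. x_T
    model.add(Dense(1, activation='sigmoid'))  # outputs the order probability P(o_u | x_1, ..., x_T)
    model.compile(loss='binary_crossentropy', optimizer='adam')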

pdf





#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
In principle, one could evaluate the logistic regression model at every single time-step in the consumer history to determine the influence of individual events. However, this would involve the inefficient process of re-calculating features for every time-step. Calculations at time-steps t and t-1 would be highly redundant: features at t represent the complete history until t, not only what happened between t-1 and t. Generally speaking, explaining the predictions of vector-based methods is more difficult than often assumed. This holds even for linear models like logistic regression. Features are often preprocessed, for example to binarize counts (Sec. 2). Furthermore, they are typically strongly correlated, making it troublesome to interpret individual coefficients [6]. Table 3 shows example feature weights in a logistic regression model used to predict order probabilities. If hundreds of features are utilized, correlated, and preprocessed, explaining the impact of consumer actions becomes a complex and confusing task.
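A toy illustration of the correlation problem (an assumption-laden sketch, not from the paper): two nearly identical features share one underlying effect, and how the fitted weights split between them is essentially arbitrary.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    x1 = rng.normal(size=1000)
    x2 = x1 + rng.normal(scale=0.01, size=1000)  # almost an exact copy of x1
    X = np.column_stack([x1, x2])
    y = (x1 + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    # the shared effect is split between the two coefficients in a way that
    # depends on noise and regularization, so neither weight is meaningful alone
    print(LogisticRegression().fit(X, y).coef_)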

pdf





#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
We have proposed an approach to apply RNNs to predict future consumer behavior in e-commerce. Consumer behavior is inherently sequential, which makes RNNs a perfect fit. We are now employing RNNs in production, which offers significant advantages over existing methods: reduced feature engineering, improved empirical performance, and better prediction explanations. In the future, our focus will be on predictions at the level of products and individual tastes, enabling sophisticated recommendation products. This will require richer input descriptions at individual time-steps. Likewise, more sophisticated RNN architectures are a promising direction for future research.

pdf





#abm #agent-based #machine-learning #model #priority #synergistic-integration
ABM is a bottom-up modeling approach in which every agent of the system, theoretically, can be simulated to any level of granularity.


Parent (intermediate) annotation

ABM is a bottom-up modeling approach in which every agent of the system, theoretically, can be simulated to any level of granularity. Each agent can have corresponding state variables that represent its internal states, and it can also have its unique representation of interaction with other agents and the associated …
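A minimal sketch of this structure (an illustrative construction, not from the text): each agent carries a state variable and a pairwise interaction rule, here a simple wealth exchange.

    import random

    class Agent:
        def __init__(self, wealth=1):
            self.wealth = wealth      # an internal state variable

        def interact(self, other):    # the agent's own interaction rule
            if self.wealth > 0:
                self.wealth -= 1
                other.wealth += 1

    agents = [Agent() for _ in range(100)]
    for _ in range(1000):             # each step, a random pair of agents interacts
        a, b = random.sample(agents, 2)
        a.interact(b)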

Original toplevel document (pdf)





#abm #agent-based #machine-learning #model #priority #synergistic-integration
Semisupervised learning falls between supervised and unsupervised learning. It is an approach that combines a small amount of labeled data with a large amount of unlabeled data during training, useful when the cost of labeling would render large, fully labeled training sets infeasible while the acquisition of unlabeled data remains relatively inexpensive.


Parent (intermediate) annotation

…for all the observations in the dataset (i.e., with completely labeled training data), whereas for an unsupervised learning algorithm, labels are not required for the observations of the dataset. Semisupervised learning falls between supervised and unsupervised learning. It is an approach that combines a small amount of labeled data with a large amount of unlabeled data during training, useful when the cost of labeling would render large, fully labeled training sets infeasible while the acquisition of unlabeled data remains relatively inexpensive. Typical categories of semisupervised learning algorithms include the generative model method, the low-density separation method, the graph-based method, and the heuristic method…

Original toplevel document (pdf)





Flashcard 7104137399564

Tags
#abm #agent-based #machine-learning #model #priority #synergistic-integration
Question
The types of ML algorithms differ depending on the approaches they use, the type of input and output data, and the type of problem to be solved. A common way to classify them is based on their purpose for learning: supervised learning, unsupervised learning, semisupervised learning, and [...]
Answer
RL


Parent (intermediate) annotation

…output data, and the type of problem to be solved. A common way to classify them is based on their purpose for learning: supervised learning, unsupervised learning, semisupervised learning, and RL

Original toplevel document (pdf)








3. Ambiguous: Surveys often fail to reveal the root causes of customer sentiment. In fact, scores can vary based on many outside factors, including geographical bias and industry shocks, making it difficult to perform reliable root-cause analysis using surveys alone.


Parent (intermediate) annotation

3. Ambiguous: Surveys often fail to reveal the root causes of customer sentiment. In fact, scores can vary based on many outside factors, including geographical bias and industry shocks, making it difficult to perform reliable root-cause analysis using surveys alone. Only 16 percent of CX leaders said that surveys provide them with granular enough data to address the root causes of CX performance.

Original toplevel document (pdf)





#abm #agent-based #machine-learning #model #priority #synergistic-integration
In a bottom-up approach, the individual elements of the system are first specified in detail with predefined rules of behavior and agent interactions.


Parent (intermediate) annotation

In a bottom-up approach, the individual elements of the system are first specified in detail with predefined rules of behavior and agent interactions; these elements are then linked at different levels until a sound top-level structure is generated.

Original toplevel document (pdf)





#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Consumer behavior is inherently sequential, which makes RNNs a perfect fit.


Parent (intermediate) annotation

We have proposed an approach to apply RNNs to predict future consumer behavior in e-commerce. Consumer behavior is inherently sequential, which makes RNNs a perfect fit. We are now employing RNNs in production, which offers significant advantages over existing methods: reduced feature engineering, improved empirical performance, and better prediction explanations…

Original toplevel document (pdf)





#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data
Consumer behavior is inherently sequential, which makes RNNs a perfect fit. We are now employing RNNs in production, which offers significant advantages over existing methods: reduced feature engineering, improved empirical performance, and better prediction explanations.


Parent (intermediate) annotation

We have proposed an approach to apply RNNs to predict future consumer behavior in e-commerce. Consumer behavior is inherently sequential, which makes RNNs a perfect fit. We are now employing RNNs in production, which offers significant advantages over existing methods: reduced feature engineering, improved empirical performance, and better prediction explanations. In the future, our focus will be on predictions at the level of products and individual tastes, enabling sophisticated recommendation products. This will require richer input descriptions…

Original toplevel document (pdf)





Flashcard 7104148147468

Tags
#causality #statistics
Question
The minimal building blocks of DAGs consist of chains, forks, immoralities, two unconnected nodes, and [...] nodes.
Answer
two connected


Parent (intermediate) annotation

The minimal building blocks of DAGs consist of chains, forks, immoralities, two unconnected nodes, and two connected nodes.

Original toplevel document (pdf)








Flashcard 7104151293196

Tags
#abm #agent-based #machine-learning #model #priority
Question
we expanded the framework to an iterative process, thereby increasing its scope to systems that cannot be explored well using [...] agent decisions
Answer
random


Parent (intermediate) annotation

we expanded the framework to an iterative process, thereby increasing its scope to systems that cannot be explored well using random agent decisions

Original toplevel document (pdf)








#deep-learning #keras #lstm #python #sequence

6.4 Fit the Model

We can now fit the model on example sequences. The code we developed for the echo sequence prediction problem generates random sequences. We could generate a large number of example sequences and pass them to the model's fit() function. The dataset would be loaded into memory, training would be fast, and we could experiment with the number of epochs versus the dataset size and number of batches. A simpler approach is to manage the training process manually, where one training sample at a time is generated and used to update the model, after which any internal state is cleared. The number of epochs is then the number of iterations of generating samples, and the effective batch size is 1 sample. Below is an example of fitting the model for 10,000 epochs, a number found with a little trial and error.
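A hedged sketch of that loop; the generate_example() helper and the model definition stand in for the ones developed earlier in the chapter and are illustrative, not the book's exact code:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    length, n_features = 5, 10

    def generate_example():
        # one random one-hot encoded sequence; the target echoes time step 2
        seq = np.eye(n_features)[np.random.randint(0, n_features, length)]
        return seq.reshape(1, length, n_features), seq[2].reshape(1, n_features)

    model = Sequential()
    model.add(LSTM(25, input_shape=(length, n_features)))
    model.add(Dense(n_features, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam')

    # one sample per iteration: 10,000 "epochs" with an effective batch size of 1
    for _ in range(10000):
        X, y = generate_example()
        model.fit(X, y, epochs=1, verbose=0)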


pdf





#deep-learning #keras #lstm #python #sequence

7.1 The Stacked LSTM

The Stacked LSTM is a model that has multiple hidden LSTM layers where each layer contains multiple memory cells. We will refer to it as a Stacked LSTM here to differentiate it from the unstacked LSTM (Vanilla LSTM) and a variety of other extensions to the basic LSTM model.


pdf





#deep-learning #keras #lstm #python #sequence
[the success of deep neural networks] is commonly attributed to the hierarchy that is introduced due to the several layers. Each layer processes some part of the task we wish to solve, and passes it on to the next. In this sense, the DNN can be seen as a processing pipeline, in which each layer solves a part of the task before passing it on to the next, until finally the last layer provides the output.

pdf





#deep-learning #keras #lstm #python #sequence
Additional hidden layers can be added to a Multilayer Perceptron neural network to make it deeper. The additional hidden layers are understood to recombine the learned representation from prior layers and create new representations at higher levels of abstraction: for example, from lines to shapes to objects. A sufficiently large single-hidden-layer Multilayer Perceptron can be used to approximate most functions. Increasing the depth of the network provides an alternative solution that requires fewer neurons and trains faster. Ultimately, adding depth is a type of representational optimization. Deep learning is built around a hypothesis that a deep, hierarchical model can be exponentially more efficient at representing some functions than a shallow one. — How to Construct Deep Recurrent Neural Networks, 2013
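An illustrative parameter-count comparison, assuming Keras (the layer widths and the 100-dimensional input are arbitrary choices for this sketch, not from the text):

    from keras.models import Sequential
    from keras.layers import Dense

    wide = Sequential([Dense(512, activation='relu', input_shape=(100,)),
                       Dense(1)])
    deep = Sequential([Dense(64, activation='relu', input_shape=(100,)),
                       Dense(64, activation='relu'),
                       Dense(64, activation='relu'),
                       Dense(1)])

    # the deeper stack has far fewer weights than the wide single hidden layer
    print(wide.count_params(), deep.count_params())  # 52225 vs 14849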

pdf





#deep-learning #keras #lstm #python #sequence
Given that LSTMs operate on sequence data, the addition of layers adds levels of abstraction of input observations over time: in effect, chunking observations over time, or representing the problem at different time scales. ... building a deep RNN by stacking multiple recurrent hidden states on top of each other. This approach potentially allows the hidden state at each level to operate at a different timescale — How to Construct Deep Recurrent Neural Networks, 2013

pdf





#deep-learning #keras #lstm #python #sequence
RNNs are inherently deep in time, since their hidden state is a function of all previous hidden states. The question that inspired this paper was whether RNNs could also benefit from depth in space; that is, from stacking multiple recurrent hidden layers on top of each other, just as feedforward layers are stacked in conventional deep networks. — Speech Recognition With Deep Recurrent Neural Networks, 2013. In the same work, they found that the depth of the network was more important to model skill than the number of memory cells in a given layer. Stacked LSTMs are now a stable technique for challenging sequence prediction problems. A Stacked LSTM architecture can be defined as an LSTM model comprised of multiple LSTM layers. An LSTM layer below provides a sequence output, rather than a single value output, to the LSTM layer above: specifically, one output per input time step, rather than one output time step for all input time steps.
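A minimal sketch of a two-layer Stacked LSTM, assuming Keras; the layer sizes and the (10, 1) input shape are illustrative:

    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    model = Sequential()
    # return_sequences=True makes the lower layer emit one output per input
    # time step, giving the layer above the 3D sequence input it expects
    model.add(LSTM(50, return_sequences=True, input_shape=(10, 1)))
    model.add(LSTM(50))   # the top layer returns a single vector
    model.add(Dense(1))
    model.compile(loss='mse', optimizer='adam')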

pdf
