Edited, memorised or added to reading queue

on 28-Nov-2024 (Thu)


Flashcard 7642468191500

Tags
#causality #statistics
Question
Regular Bayesian networks are purely [...] models, so we can only talk about the flow of association in Bayesian networks.
Answer
statistical

status: not learned | measured difficulty: 37% [default] | repetition number in this series: 0
(last interval, memorised on, scheduled repetition, and last repetition/drill fields are empty)

Parent (intermediate) annotation

Regular Bayesian networks are purely statistical models, so we can only talk about the flow of association in Bayesian networks.
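As an illustrative sketch (not from the source) of what "flow of association" means in a purely statistical model, the following simulation uses a chain X → Z → Y: X and Y are marginally associated, and conditioning on Z blocks the flow. The coefficients and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Chain X -> Z -> Y: association flows from X to Y through Z.
n = 100_000
x = rng.normal(size=n)
z = 2.0 * x + rng.normal(size=n)
y = -1.5 * z + rng.normal(size=n)

# Marginally, X and Y are associated (correlation far from 0) ...
r_xy = np.corrcoef(x, y)[0, 1]

# ... but conditioning on Z blocks the flow: within a narrow slice of Z,
# the X-Y correlation collapses toward 0.
mask = np.abs(z) < 0.1
r_xy_given_z = np.corrcoef(x[mask], y[mask])[0, 1]

print(round(r_xy, 2), round(r_xy_given_z, 2))
```

Note that nothing here is causal: the same associations would arise from Y → Z → X, which is exactly why a purely statistical model only licenses talk of association flow.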

Original toplevel document (pdf)

cannot see any pdfs

Flashcard 7642474745100

Tags
#causality #statistics
Question
We will denote [...] of X by de(X)
Answer
descendants


Parent (intermediate) annotation

We will denote descendants of X by de(X).
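As an illustrative sketch (not from the source), de(X) can be computed on a DAG stored as an adjacency mapping with a depth-first traversal:

```python
def descendants(graph, node):
    """Return de(node): all nodes reachable from `node` via directed edges.

    Note: some texts include the node itself in de(node); this sketch
    follows the convention that excludes it.
    """
    seen = set()
    stack = [node]
    while stack:
        for child in graph.get(stack.pop(), ()):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# A small DAG: X -> A -> B, X -> C
dag = {"X": ["A", "C"], "A": ["B"]}
print(sorted(descendants(dag, "X")))  # ['A', 'B', 'C']
```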

Flashcard 7642481036556

Tags
#Data #GAN #reading #synthetic
Question
In generating synthesised data, normally we use the finest [...]. For instance, order_id would represent a store managing orders, or person_id could represent a population.
Answer
granularity


Parent (intermediate) annotation

In generating synthesised data, normally we use the finest granularity. For instance, order_id would represent a store managing orders, or person_id could represent a population.
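To make "finest granularity" concrete, here is a hedged sketch (the order_id/customer_id schema is hypothetical, following the card's example): synthetic rows are generated at order level, and coarser views such as revenue per customer are derived by aggregation rather than generated directly.

```python
import random

random.seed(0)

# Synthesise at the finest granularity: one row per order_id.
# Aggregates (e.g. revenue per customer) can always be derived by grouping,
# but the reverse -- recovering orders from aggregates -- is not possible.
orders = [
    {
        "order_id": i,
        "customer_id": random.randrange(100),
        "amount": round(random.uniform(5, 200), 2),
    }
    for i in range(1000)
]

# Coarser views are derived, not generated.
revenue_per_customer = {}
for o in orders:
    cid = o["customer_id"]
    revenue_per_customer[cid] = revenue_per_customer.get(cid, 0.0) + o["amount"]
```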

Flashcard 7642950536460

Tags
#bayesian #stan
Question
The Stan development crew has made it easy to interactively explore diagnostics via the shinystan package, and one should do so with each model. In addition, there are other diagnostics available in other packages like loo and [...].
Answer
posterior


Stan - diagnostic packages

The Stan development crew has made it easy to interactively explore diagnostics via the shinystan package, and one should do so with each model. In addition, there are other diagnostics available in other packages like loo and posterior.

#feature-engineering #lstm #recurrent-neural-networks #rnn
The LSTM neural network, which we introduce next, is a kind of RNN that has been modified to effectively capture long-term dependencies in the data.
status: not read | reprioritisations: (none)
(last reprioritisation, suggested re-reading day, and reading start/finish fields are empty)


Parent (intermediate) annotation

…Hence, the effects of marketing actions tend to carry over into numerous subsequent periods (Lilien, Rangaswamy, & De Bruyn, 2013; Schweidel & Knox, 2013; Van Diepen et al., 2009). The LSTM neural network, which we introduce next, is a kind of RNN that has been modified to effectively capture long-term dependencies in the data.

Flashcard 7668614696204

Question
The algorithms generate predictive scores for each customer based on journey features. These scores allow the company to predict individual customer satisfaction and value outcomes such as revenue, loyalty, and cost to serve. More broadly, they allow CX leaders to assess the [...] for particular CX investments and directly tie CX initiatives to business outcomes
Answer
ROI


Parent (intermediate) annotation

…These scores allow the company to predict individual customer satisfaction and value outcomes such as revenue, loyalty, and cost to serve. More broadly, they allow CX leaders to assess the ROI for particular CX investments and directly tie CX initiatives to business outcomes.


#deep-learning #keras #lstm #python #sequence
The LSTM input layer is defined by the input shape argument on the first hidden layer.


Parent (intermediate) annotation

This section lists some final tips to help you when preparing your input data for LSTMs. The LSTM input layer must be 3D. The meanings of the 3 input dimensions are: samples, time steps and features. The LSTM input layer is defined by the input shape argument on the first hidden layer. The input shape argument takes a tuple of two values that define the number of time steps and features. The number of samples is assumed to be 1 or more. The reshape() function on NumPy…
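A small sketch of preparing such 3D input with NumPy's reshape; the Keras layer call in the comment follows the pattern the text describes, with example values for time steps and features:

```python
import numpy as np

# Keras-style LSTM input is 3D: (samples, time steps, features).
# Suppose we have one univariate series of 10 observations and want
# 1 sample, 10 time steps, 1 feature:
series = np.arange(10, dtype=float)
X = series.reshape(1, 10, 1)

# In Keras the first hidden layer would then be declared with
# input_shape=(time_steps, features) -- the samples dimension is implicit:
#   model.add(LSTM(32, input_shape=(10, 1)))
print(X.shape)  # (1, 10, 1)
```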


#feature-engineering #lstm #recurrent-neural-networks #rnn
The RNN has a multidimensional hidden state, which summarizes task-relevant information from the entire history and is updated at each timestep as well.


Parent (intermediate) annotation

…The RNN processes a sequence of input vectors (x_1, x_2, x_3, …, x_T), with each vector being input into the RNN model at its corresponding timestep or position in the sequence. The RNN has a multidimensional hidden state, which summarizes task-relevant information from the entire history and is updated at each timestep as well.
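As a minimal illustration (a vanilla RNN sketch in NumPy; dimensions and initialisation are arbitrary assumptions), the hidden state h_t summarises the history and is updated at each timestep:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: input vectors x_t in R^3, hidden state h_t in R^4.
d_in, d_h = 3, 4
W_xh = rng.normal(scale=0.5, size=(d_h, d_in))  # input-to-hidden weights
W_hh = rng.normal(scale=0.5, size=(d_h, d_h))   # hidden-to-hidden weights
b = np.zeros(d_h)

def rnn_step(x_t, h_prev):
    # The hidden state is updated at each timestep:
    # h_t = tanh(W_xh x_t + W_hh h_{t-1} + b)
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

# Process a sequence (x_1, ..., x_T), carrying the hidden state forward.
T = 5
h = np.zeros(d_h)
for t in range(T):
    x_t = rng.normal(size=d_in)
    h = rnn_step(x_t, h)

print(h.shape)  # (4,)
```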


Flashcard 7668621774092

Tags
#deep-learning #keras #lstm #python #sequence
Question
Batch: A [...] through a subset of samples in the training dataset after which the network weights are updated. One epoch is comprised of one or more batches.
Answer
pass


Parent (intermediate) annotation

Batch: A pass through a subset of samples in the training dataset after which the network weights are updated. One epoch is comprised of one or more batches.
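A toy sketch (hypothetical numbers) of how batches and epochs relate to weight updates:

```python
# A batch is one pass through a subset of training samples, after which
# the weights are updated; an epoch is one pass through all samples,
# i.e. one or more batches.
def count_updates(n_samples, batch_size, n_epochs):
    # number of weight updates = batches per epoch * epochs
    batches_per_epoch = -(-n_samples // batch_size)  # ceiling division
    return batches_per_epoch * n_epochs

# 1000 samples with batch size 32 -> 32 batches per epoch (the last partial),
# so 10 epochs perform 320 weight updates.
print(count_updates(1000, 32, 10))  # 320
```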


Flashcard 7668623871244

Tags
#deep-learning #keras #lstm #python #sequence
Question
the 4 different types of sequence prediction problems: 1. Sequence Prediction. 2. Sequence Classification. 3. Sequence Generation. 4. [...] Prediction
Answer
Sequence-to-Sequence


Parent (intermediate) annotation

the 4 different types of sequence prediction problems: 1. Sequence Prediction. 2. Sequence Classification. 3. Sequence Generation. 4. Sequence-to-Sequence Prediction


[unknown IMAGE 7101511240972] #has-images #recurrent-neural-networks #rnn
This property makes our model extremely flexible in dealing with diverse customer behaviors observed across multiple contexts and platforms.


Parent (intermediate) annotation

…all individual-level, cohort-level, time-varying, or time-invariant covariates are simply encoded as categorical input variables, and are handled equally by the model. This property makes our model extremely flexible in dealing with diverse customer behaviors observed across multiple contexts and platforms.


#deep-learning #keras #lstm #python #sequence
Like RNNs, LSTMs have recurrent connections, so that the state from the neuron's activation at the previous time step is used as context for formulating an output. But unlike other RNNs, the LSTM has a unique formulation that allows it to avoid the problems that prevent the training and scaling of other RNNs.


Parent (intermediate) annotation

The LSTM network is different to a classical MLP. Like an MLP, the network is comprised of layers of neurons. Input data is propagated through the network in order to make a prediction. Like RNNs, LSTMs have recurrent connections, so that the state from the neuron's activation at the previous time step is used as context for formulating an output. But unlike other RNNs, the LSTM has a unique formulation that allows it to avoid the problems that prevent the training and scaling of other RNNs. This, and the impressive results that can be achieved, are the reason for the popularity of the technique.
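Since the passage stays at the conceptual level, here is a hedged NumPy sketch (dimensions and initialisation are arbitrary assumptions) of a single LSTM step, showing the gated cell state behind the "unique formulation":

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One LSTM step. The gates (forget f, input i, output o) control what the
# cell state c keeps, adds, and exposes -- the mechanism that lets LSTMs
# carry context across many timesteps where plain RNNs struggle to train.
def lstm_step(x, h_prev, c_prev, params):
    W_f, W_i, W_o, W_c, b_f, b_i, b_o, b_c = params
    xh = np.concatenate([x, h_prev])
    f = sigmoid(W_f @ xh + b_f)                   # forget gate
    i = sigmoid(W_i @ xh + b_i)                   # input gate
    o = sigmoid(W_o @ xh + b_o)                   # output gate
    c = f * c_prev + i * np.tanh(W_c @ xh + b_c)  # new cell state
    h = o * np.tanh(c)                            # new hidden state / output
    return h, c

d_in, d_h = 3, 4
params = tuple(rng.normal(scale=0.5, size=(d_h, d_in + d_h)) for _ in range(4)) \
       + tuple(np.zeros(d_h) for _ in range(4))

# Run a short sequence, carrying both the hidden and cell state forward.
h = np.zeros(d_h)
c = np.zeros(d_h)
for _ in range(5):
    h, c = lstm_step(rng.normal(size=d_in), h, c, params)
```

The additive update of c (rather than repeated squashing of a single hidden state) is what mitigates the vanishing-gradient problem of plain RNNs.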


Flashcard 7668629376268

Tags
#deep-learning #keras #lstm #python #sequence
Question
[...] gradient descent with a batch size of 32 is a common configuration for LSTMs.
Answer
Mini-batch


Parent (intermediate) annotation

Mini-batch gradient descent with a batch size of 32 is a common configuration for LSTMs.


#recurrent-neural-networks #rnn
Even advanced BTYD models can be too restrictive to adequately capture diverse customer behaviors in different contexts, and the derived forecasts present the customer's future in an oftentimes oversimplified way.


Parent (intermediate) annotation

…cumbersome, and an approach to account for time-varying covariates has only just recently been introduced by Bachmann, Meierer, and Näf (2021) at the cost of manual labeling and slower performance. Even advanced BTYD models can be too restrictive to adequately capture diverse customer behaviors in different contexts, and the derived forecasts present the customer's future in an oftentimes oversimplified way.
