Edited, memorised or added to reading queue on 20-Jan-2026 (Tue)


Conda tips:

Install all the packages that you need in the new environment at the same time. Installing packages one at a time can lead to dependency conflicts.



Parent (intermediate) annotation

Recommendations for Avoiding Dependency Conflicts with Conda There are two simple rules to follow: Always create a new environment for each project Install all the packages that you need in the new environment at the same time. Installing packages one at a time can lead to dependency conflicts. To create an environment with a specific version of Python and multiple packages including a package with a specific version: $ conda create -n <env_name> python=<version#>

Original toplevel document

How to Manage Python Dependencies with Conda - ActiveState
How to Determine the Current Environment with Conda: the current or active environment is shown in parentheses () or brackets [] at the beginning of the Anaconda Prompt or terminal: (<current_env>) $ Recommendations for Avoiding Dependency Conflicts with Conda. There are two simple rules to follow: always create a new environment for each project, and install all the packages that you need in the new environment at the same time. Installing packages one at a time can lead to dependency conflicts. To create an environment with a specific version of Python and multiple packages, including a package with a specific version: $ conda create -n <env_name> python=<version#> <packagename> <packagename> <packagename>=<version#> Alternatively, you can use conda to install all the packages in a requirements.txt file. You can save a requirements.txt file from an existing environment, or manually create a new requirements.txt for a different environment. To create a conda requirements.txt file from an existing environment: activate your project environment. See the section above entitled “How to Activate an Environment with Conda” for detai
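As a concrete instantiation of the command pattern above (the environment name, Python version, and package pins below are made-up placeholders, not from the source):

```shell
# Install everything in one transaction, so the solver can reconcile
# all version constraints at once (the rule stated above).
conda create -n myproject python=3.11 numpy pandas scipy=1.11.4

# Equivalent one-shot install driven by a requirements file:
conda create -n myproject python=3.11 --file requirements.txt
```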




#abm #agent-based #machine-learning #model #priority
We demonstrated the advantages of this approach by applying it to reproduce the results of the prominent Sugarscape model. To show the flexibility of the framework, we then made slight changes to the modelled system by removing the competition between the agents. While a traditional approach to agent-based modelling would require a reformulation of the rules for agent behaviour, here the Neural Network is automatically retrained to accommodate the changes in the system and we naturally end up with realistic agent behaviour.


Parent (intermediate) annotation

negative or neutral. Here, the Neural Network is not used as a form of optimization, but rather as a realistic depiction of a decision process, including the possibility of errors in judgement. We demonstrated the advantages of this approach by applying it to reproduce the results of the prominent Sugarscape model. To show the flexibility of the framework, we then made slight changes to the modelled system by removing the competition between the agents. While a traditional approach to agent-based modelling would require a reformulation of the rules for agent behaviour, here the Neural Network is automatically retrained to accommodate the changes in the system and we naturally end up with realistic agent behaviour. We also explored the limits of the framework and found that the original approach fails once system states that are relevant, if agents act to reach a goal, do not appear during an Ex

Original toplevel document (pdf)





#RNN #ariadne #behaviour #consumer #deep-learning #priority #recurrent-neural-networks #retail #simulation #synthetic-data

In the context of behaviour prediction, we want to understand how previous consumer actions influence model predictions:

- How does order probability change when products are put into the cart?

- Does it decrease significantly if a consumer does not return to a webshop for two days?


Answers to these questions are consumer-specific; they depend on the complete consumer history



Parent (intermediate) annotation

As machine learning models become ubiquitous in our everyday lives, demand for explaining their predictions is growing [5, 16, 14]. In the context of behaviour prediction, we want to understand how previous consumer actions influence model predictions: How does order probability change when products are put into the cart? Does it decrease significantly if a consumer does not return to a webshop for two days? Answers to these questions are consumer-specific; they depend on the complete consumer history

Original toplevel document (pdf)





#causal #inference
Inverse probability matching (a.k.a. propensity score matching) is not technically a G-method. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framework and estimation approach.

The G-methods family, as developed by James Robins and colleagues, specifically includes: Inverse probability weighting (IPW) G-computation G-estimation of structural nested models Inverse probability matching (propensity score matching) is not technically a G-method. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framework and estimation approach
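The contrast can be made concrete with a small synthetic sketch of inverse probability weighting (IPW), the first G-method the excerpt lists. All numbers and variable names below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.binomial(1, 0.5, n)                      # binary confounder
p_treat = np.where(x == 1, 0.8, 0.2)             # treatment probability depends on x
t = rng.binomial(1, p_treat)                     # confounded treatment assignment
y = 2.0 * t + 3.0 * x + rng.normal(0.0, 1.0, n)  # true treatment effect is 2

# Naive contrast is biased because x pushes up both treatment and outcome
naive = y[t == 1].mean() - y[t == 0].mean()

# IPW: weight each unit by the inverse probability of the arm it received
w = np.where(t == 1, 1.0 / p_treat, 1.0 / (1.0 - p_treat))
ipw = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
print(round(naive, 2), round(ipw, 2))  # naive is biased upward (≈3.8); IPW recovers ≈2.0
```

Propensity score matching would instead pair treated and control units with similar p_treat values, which is why it sits outside the G-method family despite sharing the goal.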




Flashcard 7788706008332

Tags
#recurrent-neural-networks #rnn
Question
customer defection in non-contractual business settings is by definition unobserved by the firm and thus needs to be [...] inferred from past transaction
Answer
indirectly


Parent (intermediate) annotation

customer defection in non-contractual business settings is by definition unobserved by the firm and thus needs to be indirectly inferred from past transaction

Original toplevel document (pdf)








#deep-learning #keras #lstm #python #sequence
Stacked LSTMs are now a stable technique for challenging sequence prediction problems


Parent (intermediate) annotation

they found that the depth of the network was more important than the number of memory cells in a given layer to model skill. Stacked LSTMs are now a stable technique for challenging sequence prediction problems. A Stacked LSTM architecture can be defined as an LSTM model comprised of multiple LSTM layers

Original toplevel document (pdf)

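The stacking idea — each recurrent layer passes its full output sequence to the next, which in Keras corresponds to LSTM layers with return_sequences=True — can be sketched with a plain-NumPy recurrence standing in for an LSTM; the shapes and weights below are arbitrary illustration:

```python
import numpy as np

def recurrent_layer(xs, Wx, Wh, b):
    """Simple tanh recurrence as a stand-in for an LSTM layer.

    Returns the full hidden-state sequence so it can feed the next
    layer, mirroring Keras's LSTM(..., return_sequences=True).
    """
    h = np.zeros(Wh.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        hs.append(h)
    return np.stack(hs)

rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 3))  # 5 time steps, 3 input features

# Layer 1 maps 3 features -> 8 hidden units; layer 2 consumes
# layer 1's whole hidden sequence and maps 8 -> 2.
h1 = recurrent_layer(xs, rng.normal(size=(8, 3)), rng.normal(size=(8, 8)), np.zeros(8))
h2 = recurrent_layer(h1, rng.normal(size=(2, 8)), rng.normal(size=(2, 2)), np.zeros(2))
print(h1.shape, h2.shape)  # (5, 8) (5, 2)
```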




Flashcard 7788710726924

Tags
#ggplot2
Question
The problem here is that by default scales::percent() multiplies its input value by 100. This can be controlled by the [...] parameter.
Answer
scale


Parent (intermediate) annotation

The problem here is that by default scales::percent() multiplies its input value by 100. This can be controlled by the scale parameter.

Original toplevel document

Something is not right here! 4000%!? That seems a bit excessive. The problem here is that by default scales::percent() multiplies its input value by 100. This can be controlled by the scale parameter: scales::percent(100, scale = 1) ## [1] "100%" However, scale_y_continuous() expects a function as input for its labels parameter, not the actual labels itself. Thus, using percent() is not an option anymore. Fortu







#recurrent-neural-networks #rnn
In subscription-based or contractual settings, customer “churn” events are directly observable,


Parent (intermediate) annotation

Contrary to subscription-based or contractual settings where customer “churn” events are directly observable, customer defection in non-contractual business settings is by definition unobserved by the firm and thus needs to be indirectly inferred from past transaction behavior (Reinartz & K

Original toplevel document (pdf)





Flashcard 7788717542668

Tags
#causal #inference
Question
Inverse probability matching (propensity score matching) is not technically a [...]. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framework and estimation approach
Answer
G-method


Parent (intermediate) annotation

Inverse probability matching (propensity score matching) is not technically a G-method. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framework and estimation approach








Flashcard 7788719115532

Tags
#causal #inference
Question
Inverse probability [...] is not technically a G-method. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framework and estimation approach
Answer
matching


Parent (intermediate) annotation

Inverse probability matching (propensity score matching) is not technically a G-method. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framewo








Flashcard 7788721736972

Tags
#causal #inference
Question
Inverse probability matching (a.k.a. [...] matching) is not technically a G-method. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framework and estimation approach
Answer
propensity score


Parent (intermediate) annotation

Inverse probability matching (a.k.a. propensity score matching) is not technically a G-method. While it shares the fundamental goal of addressing confounding and selection bias, it employs a different mathematical framework and estimation








Flashcard 7788724620556

Tags
#deep-learning #keras #lstm #python #sequence
Question
Unfortunately, the range of contextual information that standard [...]s can access is in practice quite limited. The problem is that the influence of a given input on the hidden layer, and therefore on the network output, either decays or blows up exponentially as it cycles around the network’s recurrent connections.
Answer
RNN


Parent (intermediate) annotation

Unfortunately, the range of contextual information that standard RNNs can access is in practice quite limited. The problem is that the influence of a given input on the hidden layer, and therefore on the network output, either decays or blows up exponent
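The decay-or-blow-up behaviour is easy to see in a linearized sketch: after t steps, an input's influence on the hidden state scales roughly with the t-th power of the recurrent weight matrix, so its magnitude is governed by the spectral radius. The matrix sizes and radii below are illustrative choices, not from the source:

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(16, 16)) / np.sqrt(16)   # random recurrent matrix
radius = max(abs(np.linalg.eigvals(W)))       # its spectral radius

def influence(W, scale, steps=50):
    """Norm of W'^steps @ v, where W' is W rescaled to spectral radius
    `scale`: a linear proxy for how strongly an early input still
    affects the hidden state after `steps` recurrent cycles."""
    Wp = W * (scale / radius)
    v = np.ones(W.shape[0])
    for _ in range(steps):
        v = Wp @ v
    return np.linalg.norm(v)

decayed = influence(W, scale=0.9)    # radius < 1: influence vanishes
exploded = influence(W, scale=1.1)   # radius > 1: influence blows up
print(decayed, exploded)
```

Because the two runs differ only by a scalar rescaling, their ratio is exactly (1.1/0.9)^50 ≈ 2.3e4, which is the exponential gap the excerpt describes.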

Original toplevel document (pdf)








Flashcard 7788726455564

Tags
#recurrent-neural-networks #rnn
Question
Toth, Tan, Di Fabbrizio, and Datta (2017) have shown that a mixture of RNNs can approximate several complex functions [...].
Answer
simultaneously


Parent (intermediate) annotation

Toth, Tan, Di Fabbrizio, and Datta (2017) have shown that a mixture of RNNs can approximate several complex functions simultaneously.

Original toplevel document (pdf)
