Edited, memorised or added to reading queue on 07-Jul-2022 (Thu)


#causality #statistics
The causal edges assumption is asymmetric: "X is a cause of Y" is not the same as saying "Y is a cause of X".

#abm #agent-based #machine-learning #model #priority
For every agent-based model, it is necessary to define the agents and the environment of the modelled system via certain qualitative or quantitative properties, and to find rules or equations that govern how the agents interact with each other and with their environment. The rules should specify what information the agents have access to and what action they then take, considering bounded rationality [21-23]. While the input the agents use for their decision is relatively easy to find and justify, specifying the rules that lead to a decision is much more difficult and often relies on assumptions from psychology or economics, which are hard to back up empirically or theoretically. This makes the search for valid rules for agent behaviour one of the biggest challenges in agent-based modelling.
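As a minimal sketch of what such a hand-crafted behavioural rule might look like (all names, thresholds and properties here are hypothetical, not taken from the cited work):

```python
import random

class Agent:
    """An agent defined by quantitative properties and a simple decision rule."""
    def __init__(self, budget):
        self.budget = budget      # a quantitative property of the agent
        self.adopted = False      # the state the rule acts on

    def decide(self, neighbor_adoption_rate, price):
        # Bounded rationality: the agent only sees local information
        # (its neighbours' behaviour and the current price), not the full system.
        if not self.adopted and price <= self.budget and neighbor_adoption_rate > 0.3:
            self.adopted = True
        return self.adopted

# One step of a toy simulation over a population of agents
agents = [Agent(budget=random.uniform(50, 150)) for _ in range(100)]
neighbor_rate = 0.5   # assumed fraction of adopting neighbours
decisions = [a.decide(neighbor_rate, price=100) for a in agents]
```

The difficulty the passage points to is exactly the body of `decide`: the inputs are easy to enumerate, but the threshold rule itself encodes a behavioural assumption that must be justified empirically or theoretically.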

Flashcard 7102039723276

Tags
#DAG #causal #edx #has-images
[unknown IMAGE 7092564790540]
Question
Let's start by considering two extreme examples. In the first causal graph here you see that A and Y have no [...]. And therefore, any association between them will be causation. This is the setting that we expect to find in a randomized experiment.
Answer
common causes


Flashcard 7102042344716

Tags
#causality #statistics
Question
Causal edges assumption is [...]; "X is a cause of Y" is not the same as saying "Y is a cause of X".
Answer
asymmetric


Flashcard 7102044441868

Tags
#causality #statistics
Question

the main assumptions that we need for our causal graphical models to tell us how association and causation flow between variables are the following two:

1. [...] Assumption (Assumption 3.1)

2. Causal Edges Assumption (Assumption 3.3)

Answer
Local Markov


Flashcard 7102046539020

Tags
#Data #GAN #reading #synthetic
Question
In generating synthesised data, normally we use the [...] granularity. For instance, order_id would represent a store managing orders, or person_id could represent a population.
Answer
finest


Flashcard 7102048374028

Tags
#causality #statistics
Question
Whenever do(t) appears [...] the conditioning bar, it means that everything in that expression is in the post-intervention world where the intervention do(t) occurs.
Answer
after


Flashcard 7102051781900

Tags
#abm #agent-based #machine-learning #model #priority
Question
Traditional agent-based modelling is mostly [...]-based. For many systems, this approach is extremely successful, since the rules are well understood. However, for a large class of systems it is difficult to find rules that adequately describe the behaviour of the agents. A simple example would be two agents playing chess: Here, it is impossible to find simple rules. To solve this problem, we introduce a framework for agent-based modelling that incorporates machine learning. In a process closely related to reinforcement learning, the agents learn rules.
Answer
rule


Flashcard 7102053616908

Tags
#DAG #causal #edx #inference
Question
There's no selection bias without selection. And selection is, of course, present in all studies. But for selection to cause bias under the null, it needs to be related to both [...] and outcome Y.
Answer
treatment A


Flashcard 7102055714060

Tags
#causality #statistics
Question

Assumption 3.1 (Local Markov Assumption)

Given its [...] in the DAG, a node X is independent of all its non-descendants.

Answer
parents
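This assumption is what licenses the standard Bayesian network factorization of the joint distribution, which may help make the statement concrete:

```latex
P(x_1, \dots, x_n) = \prod_{i=1}^{n} P\left(x_i \mid \mathrm{pa}_i\right)
```

where pa_i denotes the parents of x_i in the DAG.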


Flashcard 7102057811212

Tags
#causality #statistics
Question
Whenever do(t) appears after the conditioning bar, it means that everything in that expression is in the post-intervention world where the intervention do(t) occurs. For example, E[Y | do(t), Z = z] refers to the expected outcome in the subpopulation where [...] after the whole subpopulation has taken treatment t.
Answer
Z = z


#abm #agent-based #machine-learning #model #priority

Universal Framework for Agent-based Models - 4 phases

(1) Initialization (2) Experience (3) Training (4) Application
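A schematic of the four phases in code (a toy, self-contained sketch; a lookup table stands in for the neural network, and all names are illustrative, not from the paper):

```python
import random

class Agent:
    """Toy agent for the four-phase framework."""
    def __init__(self):
        self.state = 0
        self.policy = None

    # (1) Initialization: define the agent's starting state
    def initialize(self):
        self.state = 0

    # (2) Experience: try a random action, record (state, action, reward)
    def explore(self, rng):
        action = rng.choice([0, 1])
        reward = 1 if action == 1 else 0   # toy environment: action 1 pays off
        return (self.state, action, reward)

    # (4) Application: reset to initial conditions, then act via the trained policy
    def reset(self):
        self.state = 0

    def act(self):
        return self.policy[self.state]

rng = random.Random(0)
agents = [Agent() for _ in range(10)]
for a in agents:
    a.initialize()

experience = [a.explore(rng) for a in agents for _ in range(20)]

# (3) Training: keep, per state, the action with the best observed reward
# (a lookup table standing in for the neural network)
best = {}
for state, action, reward in experience:
    if reward >= best.get(state, (None, -1))[1]:
        best[state] = (action, reward)
policy = {s: act for s, (act, _) in best.items()}

for a in agents:
    a.reset()          # Experience phase must not influence Application
    a.policy = policy
```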


Flashcard 7102061743372

Tags
#abm #agent-based #has-images #machine-learning #model #priority
[unknown IMAGE 7096133094668]
Question

Universal Framework for Agent-based Models - 4 phases

(1) Initialization (2) Experience (3) Training (4) Application

In the last phase, Application, the trained Neural Network is used for decision making. Agents are [...] to their original initial conditions so that the actions performed during the Experience phase have no direct influence on the Application phase. In each time step, agents gather inputs and use the Artificial Neural Network for decision making. The current inputs are combined with every possible decision, and the Neural Network estimates whether such a decision would be good or bad. The agent then chooses the option with the highest confidence for a positive result. This process is depicted in the lower panel of Figure 1.

Answer
reset


Conda is a package, dependency, and environment management tool for Anaconda Python, which is widely used in the scientific community, especially on the Windows platform where the installation of binary extensions can be difficult.

Conda helps manage Python dependencies in two primary ways:

  • Allows the creation of environments that isolate each project, thereby preventing dependency conflicts between projects.
  • Provides identification of dependency conflicts at time of package installation, thereby preventing conflicts within projects/environments.
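As a concrete illustration of per-project isolation (environment and package names here are arbitrary examples, not from the original article):

```shell
# Create one isolated environment per project (names are illustrative)
conda create -n project-a python=3.9 numpy
conda create -n project-b python=3.10 pandas

# Work inside one environment; its packages do not affect the other
conda activate project-a
conda deactivate
```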

How to Manage Python Dependencies with Conda - ActiveState




How Does Conda Compare to Pip, Virtualenv, Venv & Pyenv

Conda provides many of the features found in pip, virtualenv, venv and pyenv. However, it is a completely separate tool that will manage Python dependencies differently, and it only works in Conda environments.

Conda analyzes each package for compatible dependencies, and how to install them without conflict. If there is a conflict, Conda will let you know that the installation cannot be completed. By comparison, Pip installs all package dependencies regardless of whether they conflict with other packages already installed. To avoid dependency conflicts, use tools such as virtualenv, venv or pyenv to create isolated Anaconda environments.

For information about the use of pip in conda environments, refer to this Quickread post: How to Add Packages in Anaconda Python: Conda vs. Pip.


Recommendations for Avoiding Dependency Conflicts with Conda

There are two simple rules to follow:

  1. Always create a new environment for each project
  2. Install all the packages that you need in the new environment at the same time. Installing packages one at a time can lead to dependency conflicts.

To create an environment with a specific version of Python and multiple packages including a package with a specific version:

$ conda create -n <env_name> python=<version#> <packagename> <packagename> <packagename>=<version#>

Alternatively, you can use conda to install all the packages in a requirements.txt file. You can save a requirements.txt file from an existing environment, or manually create a new requirements.txt for a different environment.
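For instance, a requirements.txt round trip with standard conda commands looks like this (a sketch; `<env_name>` is a placeholder as in the command above):

```shell
# Save the packages of an existing (activated) environment to a requirements file
conda list --export > requirements.txt

# Re-create that environment elsewhere from the file
conda create -n <env_name> --file requirements.txt
```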


#abm #agent-based #priority #rooftop-solar #simulation #synthetic-data
The common approach is to make use of relatively simple agent models (for example, based on qualitative knowledge of the domain, qualitative understanding of human behavior, etc.), so that complexity arises primarily from agent interactions among themselves and with the environment. For example, Thiele et al. [40] document that only 14% of articles published in the Journal of Artificial Societies and Social Simulation include parameter fitting. Our key methodological contribution is a departure from developing simple agent models based on relevant qualitative insights to learning such models entirely from data. Due to its reliance on data about individual agent behavior, our approach is not universally applicable.

#abm #agent-based #priority #rooftop-solar #simulation #synthetic-data
Our proposal of calibration at the agent level, in contrast, enables us to leverage state-of-the-art machine learning techniques, as well as to obtain more reliable, and interpretable, models at the individual agent level. Recently, in the fields of ecology and sociology, there has been rising interest in combining agent-based models with empirical methods [24]. Berger and Schreinemachers [4] use biophysical measurements (i.e., soil properties) and socioeconomic surveys to generate, via Monte Carlo techniques, a landscape and agent populations that are consistent with empirical observation. Notice that this is quite a different application from ours, since we do not need to generate an agent population; rather, we instantiate our multi-agent simulation with learned agents.

#abm #agent-based #priority #rooftop-solar #simulation #synthetic-data
We offer instead a framework for data-driven agent-based modeling (DDABM), where agent models are learned from data about individual (typically, human) behavior, and the agent-based model is thereby fully data-driven, with no additional parameters to govern its behavior.

#abm #agent-based #priority #rooftop-solar #simulation #synthetic-data
We now present our general framework for data-driven agent-based modeling (DDABM), which we subsequently apply to the problem of modeling residential rooftop solar diffusion in San Diego county, California. The key features of this framework are: a) explicit division of data into "calibration" and "validation" parts to ensure sound and reliable model validation, and b) automated agent model training and cross-validation. In this framework, we make three assumptions. The first is that time is discrete. While this assumption is not of fundamental importance, it will help in presenting the concepts, and it is the assumption made in our application. The second assumption is that agents are homogeneous. This may seem a strong assumption, but in fact it is without loss of generality. To see this, suppose that h(x) is our model of agent behaviour, where x is the state, i.e., all information that conditions the agent's decision. Heterogeneity can be embedded in h by considering individual characteristics in the state x, such as personality traits and socio-economic status, or, as in our application domain, housing characteristics.
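A toy illustration of this without-loss-of-generality point (the logistic form, feature names and weights below are purely illustrative, not the paper's model):

```python
import math

def h(x):
    """A single shared behavior model h(x): probability the agent adopts.

    Heterogeneity enters only through the state vector x, not through h itself.
    x = (roof_size, income, n_adopting_neighbors) -- illustrative features.
    """
    w = (0.02, 0.00001, 0.5)   # illustrative weights
    b = -3.0
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-z))

# Two different agents share the same h; they differ only in their state x
p_small = h((20, 40_000, 0))
p_large = h((120, 90_000, 3))
assert p_small < p_large   # different states, different behavior, one model
```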

#abm #agent-based #priority #rooftop-solar #simulation #synthetic-data
Our third assumption is that each individual makes independent decisions at each time t, conditional on state x. Again, if x includes all features relevant to an agent's decision, this assumption is relatively innocuous.

#abm #agent-based #priority #rooftop-solar #simulation #synthetic-data
Given these assumptions, DDABM proceeds as follows. We start with a data set of individual agent behavior over time, D = {(x_it, y_it)}_{i, t=0,...,T}, where i indexes agents, t indexes time through some horizon T, and y_it indicates agent i's decision, i.e., 1 for "adopted" and 0 for "did not adopt" at time t.

1. Split the data D into calibration D_c and validation D_v parts along the time dimension: D_c = {(x_it, y_it)}_{i, t <= T_c} and D_v = {(x_it, y_it)}_{i, t > T_c}, where T_c is a time threshold.
2. Learn a model of agent behavior h on D_c. Use cross-validation on D_c for model (e.g., feature) selection.
3. Instantiate agents in the ABM using h learned in step 2.
4. Initialize the ABM to state x_{jT_c} for all artificial agents j.
5. Validate the ABM by running it from x_{T_c} using D_v.

One may wonder how to choose the initial state x_{jT_c} for the artificial agents. This is direct if the artificial agents in the ABM correspond to actual agents in the data. For example, in rooftop solar adoption we know which agents have adopted solar at time T_c, and their actual housing characteristics, etc. Alternatively, one can run the ABM from the initial state, and start validation upon reaching time T_c +
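The steps above can be sketched in code (a toy, self-contained sketch with synthetic data; a trivial base-rate model stands in for the cross-validated classifier h):

```python
import random

rng = random.Random(0)

# D = {(x_it, y_it)}: per-agent, per-step records over horizon T (synthetic)
T, T_c, n_agents = 10, 6, 50
D = [(i, t, rng.random(), rng.random() < 0.2)   # (agent i, time t, state x, decision y)
     for i in range(n_agents) for t in range(T)]

# 1. Split along the time dimension into calibration and validation parts
D_cal = [r for r in D if r[1] <= T_c]
D_val = [r for r in D if r[1] > T_c]

# 2. Learn h on D_cal (here: a base rate stands in for a cross-validated model)
adopt_rate = sum(y for *_, y in D_cal) / len(D_cal)
def h(x):
    return adopt_rate   # predicted adoption probability, ignoring x in this toy

# 3-4. Instantiate artificial agents at their observed state x_{jT_c}
agents = {i: x for i, t, x, y in D if t == T_c}

# 5. Validate: run the ABM forward from T_c and compare against D_val
simulated = {i: rng.random() < h(x) for i, x in agents.items()}
```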

Test

Cognitive Neuroscience: The Biology of the Mind. Michael S. Gazzaniga, University of California, Santa Barbara; Richard B.