# on 07-Jul-2022 (Thu)

#### Annotation 7095736470796

 #causality #statistics The causal edges assumption is asymmetric; “𝑋 is a cause of 𝑌” is not the same as saying “𝑌 is a cause of 𝑋”.

Open it

#### Original toplevel document (pdf)

cannot see any pdfs

#### Annotation 7096103996684

 #abm #agent-based #machine-learning #model #priority For every agent-based model, it is necessary to define the agents and the environment of the modelled system via certain qualitative or quantitative properties, and to find rules or equations that govern how the agents interact with each other and with their environment. The rules should include what information the agents have access to and what action they then take, considering bounded rationality [21–23]. While the input the agents use for their decision is relatively easy to find and to justify, specifying the rules that lead to a decision is much more difficult and often relies on assumptions from psychology or economics, which are often hard to back up empirically or theoretically. This makes the search for valid rules for agent behaviour one of the biggest challenges in agent-based modelling.

#### pdf

cannot see any pdfs

#### Flashcard 7102039723276

Tags
#DAG #causal #edx #has-images
[unknown IMAGE 7092564790540]
Question
Let's start by considering two extreme examples. In the first causal graph here you see that A and Y have no [...]. And therefore, any association between them will be causation. This is the setting that we expect to find in a randomized experiment.
common causes

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Let's start by considering two extreme examples. In the first causal graph here you see that A and Y have no common causes. And therefore, any association between them will be causation. This is the setting that we expect to find in a randomized experiment.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7102042344716

Tags
#causality #statistics
Question
The causal edges assumption is [...]; “𝑋 is a cause of 𝑌” is not the same as saying “𝑌 is a cause of 𝑋”.
asymmetric

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The causal edges assumption is asymmetric; “𝑋 is a cause of 𝑌” is not the same as saying “𝑌 is a cause of 𝑋”.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7102044441868

Tags
#causality #statistics
Question

The main assumptions that we need for our causal graphical models to tell us how association and causation flow between variables are the following two:

1. [...] Assumption (Assumption 3.1)

2. Causal Edges Assumption (Assumption 3.3)

Local Markov

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The main assumptions that we need for our causal graphical models to tell us how association and causation flow between variables are the following two: 1. Local Markov Assumption (Assumption 3.1) 2. Causal Edges Assumption (Assumption 3.3)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7102046539020

Tags
Question
In generating synthesised data, normally we use the [...] granularity. For instance, order_id would represent a store managing orders, or person_id could represent a population.
finest

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In generating synthesised data, normally we use the finest granularity. For instance, order_id would represent a store managing orders, or person_id could represent a population.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7102048374028

Tags
#causality #statistics
Question
Whenever do(𝑡) appears [...] the conditioning bar, it means that everything in that expression is in the post-intervention world where the intervention do(𝑡) occurs.
after

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Whenever do(𝑡) appears after the conditioning bar, it means that everything in that expression is in the post-intervention world where the intervention do(𝑡) occurs.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7102051781900

Tags
#abm #agent-based #machine-learning #model #priority
Question
Traditional agent-based modelling is mostly [...]-based. For many systems, this approach is extremely successful, since the rules are well understood. However, for a large class of systems it is difficult to find rules that adequately describe the behaviour of the agents. A simple example would be two agents playing chess: here, it is impossible to find simple rules. To solve this problem, we introduce a framework for agent-based modelling that incorporates machine learning. In a process closely related to reinforcement learning, the agents learn rules.
rule

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Traditional agent-based modelling is mostly rule-based. For many systems, this approach is extremely successful, since the rules are well understood. However, for a large class of systems it is difficult to find rules that adequately

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7102053616908

Tags
#DAG #causal #edx #inference
Question
There's no selection bias without selection. And selection is, of course, present in all studies. But for selection to cause bias under the null, it needs to be related to both [...] and outcome Y.
treatment A

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
There's no selection bias without selection. And selection is, of course, present in all studies. But for selection to cause bias under the null, it needs to be related to both treatment A and outcome Y.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7102055714060

Tags
#causality #statistics
Question

Assumption 3.1 (Local Markov Assumption)

Given its [...] in the DAG, a node 𝑋 is independent of all its non-descendants

parents

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Assumption 3.1 (Local Markov Assumption): given its parents in the DAG, a node 𝑋 is independent of all its non-descendants.
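The assumption can be stated symbolically; the factorization consequence is standard Bayesian-network material, added here for context rather than quoted from the card:

```latex
% Local Markov: each node is independent of its non-descendants given its parents
X_i \,\perp\!\!\!\perp\, \mathrm{NonDesc}(X_i) \mid \mathrm{Pa}(X_i)
% Combined with the chain rule, this yields the Bayesian network factorization
P(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} P\!\left(x_i \mid \mathrm{pa}_i\right)
```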

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7102057811212

Tags
#causality #statistics
Question
Whenever do(𝑡) appears after the conditioning bar, it means that everything in that expression is in the post-intervention world where the intervention do(𝑡) occurs. For example, 𝔼[𝑌 | do(𝑡), 𝑍 = 𝑧] refers to the expected outcome in the subpopulation where [...] after the whole subpopulation has taken treatment 𝑡.
𝑍 = 𝑧

status measured difficulty not learned 37% [default] 0
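In standard potential-outcomes notation (an equivalence commonly stated alongside this definition for a pre-treatment covariate 𝑍; added here for context, not quoted from the card), the expression can be written:

```latex
\mathbb{E}[\,Y \mid \mathrm{do}(t),\, Z = z\,] \;=\; \mathbb{E}[\,Y(t) \mid Z = z\,]
```

i.e., the expected potential outcome 𝑌(𝑡) in the subpopulation where 𝑍 = 𝑧.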

Open it

#### Original toplevel document (pdf)

cannot see any pdfs

#### Annotation 7102059646220

 #abm #agent-based #machine-learning #model #priority Universal Framework for Agent-based Models - 4 phases: (1) Initialization (2) Experience (3) Training (4) Application

#### Parent (intermediate) annotation

Open it
Universal Framework for Agent-based Models - 4 phases: (1) Initialization (2) Experience (3) Training (4) Application. In the first phase, Initialization, the important features of the agents and their environment need to be defined. Agents need some kind of input that can be both qualitative or quantit

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 7102061743372

Tags
#abm #agent-based #has-images #machine-learning #model #priority
[unknown IMAGE 7096133094668]
Question

Universal Framework for Agent-based Models - 4 phases

(1) Initialization (2) Experience (3) Training (4) Application

In the last phase, Application, the trained Neural Network is used for decision making. Agents are [...] to their original initial conditions so that the actions performed during the Experience phase have no direct influence on the Application phase. In each time step, agents gather inputs and use the Artificial Neural Network for decision making. The current inputs are combined with every possible decision, and the Neural Network estimates whether such a decision would be good or bad. The agent then chooses the option with the highest confidence for a positive result. This process is depicted in the lower panel of Figure 1.

reset

status measured difficulty not learned 37% [default] 0
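The Application-phase decision rule described above (pair the current inputs with every possible decision, score each pair with the trained network, and take the highest-confidence option) can be sketched as follows; `choose_action` and the linear scorer standing in for the trained neural network are hypothetical names for illustration:

```python
def choose_action(inputs, actions, score_fn):
    """Combine the agent's current inputs with each possible decision,
    let the (trained) network score every combination, and return the
    action with the highest confidence of a positive result."""
    scored = [(score_fn(list(inputs) + [a]), a) for a in actions]
    return max(scored)[1]

# Illustrative stand-in for the trained network: a fixed linear scorer.
def linear_score(x, weights=(0.1, -0.2, 0.5)):
    return sum(w * v for w, v in zip(weights, x))

best = choose_action([1.0, 2.0], actions=[0, 1], score_fn=linear_score)
print(best)  # action 1 scores 0.2 vs. -0.3 for action 0
```

The same loop would run once per time step, with the scorer replaced by the network trained in the Training phase.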

#### Parent (intermediate) annotation

Open it
for Agent-based Models - 4 phases (1) Initialization (2) Experience (3) Training (4) Application In the last phase, Application, the trained Neural Network is used for decision making. Agents are reset to their original initial conditions so that the actions performed during the Experience phase have no direct influence on the Application phase. In each time step agents gather inputs

#### Original toplevel document (pdf)

cannot see any pdfs

#### Annotation 7102068559116

 Conda is a package, dependency, and environment management tool for Anaconda Python, widely used in the scientific community, especially on the Windows platform, where the installation of binary extensions can be difficult. Conda helps manage Python dependencies in two primary ways: it allows the creation of environments that isolate each project, preventing dependency conflicts between projects, and it identifies dependency conflicts at package-installation time, preventing conflicts within projects/environments.

How to Manage Python Dependencies with Conda - ActiveState

#### Annotation 7102070656268

 How Does Conda Compare to Pip, Virtualenv, Venv & Pyenv? Conda provides many of the features found in pip, virtualenv, venv, and pyenv. However, it is a completely separate tool that manages Python dependencies differently and only works in Conda environments. Conda analyzes each package for compatible dependencies and determines how to install them without conflict. If there is a conflict, Conda will let you know that the installation cannot be completed. By comparison, pip installs all package dependencies regardless of whether they conflict with other packages already installed. To avoid dependency conflicts, use tools such as virtualenv, venv, or pyenv to create isolated Anaconda environments. For information about the use of pip in conda environments, refer to the Quickread post How to Add Packages in Anaconda Python: Conda vs. Pip.

How to Manage Python Dependencies with Conda - ActiveState

#### Annotation 7102072229132

 Recommendations for Avoiding Dependency Conflicts with Conda: there are two simple rules to follow. Always create a new environment for each project, and install all the packages that you need in the new environment at the same time; installing packages one at a time can lead to dependency conflicts. To create an environment with a specific version of Python and multiple packages, including a package with a specific version: $ conda create -n <env_name> python=<version#> <packagename> <packagename> <packagename>=<version#>. Alternatively, you can use conda to install all the packages in a requirements.txt file. You can save a requirements.txt file from an existing environment, or manually create a new requirements.txt for a different environment.

How to Manage Python Dependencies with Conda - ActiveState

#### Annotation 7102073801996

 #abm #agent-based #priority #rooftop-solar #simulation #synthetic-data The common approach is to make use of relatively simple agent models (for example, based on qualitative knowledge of the domain, qualitative understanding of human behavior, etc.), so that complexity arises primarily from agent interactions among themselves and with the environment. For example, Thiele et al. [40] document that only 14% of articles published in the Journal of Artificial Societies and Social Simulation include parameter fitting. Our key methodological contribution is a departure from developing simple agent models based on relevant qualitative insights to learning such models entirely on data. Due to its reliance on data about individual agent behavior, our approach is not universally applicable.

#### pdf

cannot see any pdfs

#### Annotation 7102075374860

 #abm #agent-based #priority #rooftop-solar #simulation #synthetic-data Our proposal of calibration at the agent level, in contrast, enables us to leverage state-of-the-art machine learning techniques, as well as obtain more reliable and interpretable models at the individual agent level. Recently, in the fields of ecology and sociology, there has been rising interest in combining agent-based models with empirical methods [24]. Biophysical measurements (i.e., soil properties) and socioeconomic surveys are used by Berger and Schreinemachers [4], via Monte Carlo techniques, to generate a landscape and agent populations that are consistent with empirical observation. Notice that this is quite a different application from ours, since we do not need to generate an agent population; rather, we instantiate our multi-agent simulation with learned agents.

#### pdf

cannot see any pdfs

#### Annotation 7102076947724

 #abm #agent-based #priority #rooftop-solar #simulation #synthetic-data We offer instead a framework for data-driven agent-based modeling (DDABM), where agent models are learned from data about individual (typically, human) behavior, and the agent-based model is thereby fully data-driven, with no additional parameters to govern its behavior.

#### pdf

cannot see any pdfs

#### Annotation 7102079307020

 #abm #agent-based #priority #rooftop-solar #simulation #synthetic-data We now present our general framework for data-driven agent-based modeling (DDABM), which we subsequently apply to the problem of modeling residential rooftop solar diffusion in San Diego county, California. The key features of this framework are: a) explicit division of data into “calibration” and “validation” to ensure sound and reliable model validation and b) automated agent model training and cross-validation. In this framework, we make three assumptions. The first is that time is discrete. While this assumption is not of fundamental importance, it will help in presenting the concepts, and is the assumption made in our application. The second assumption is that agents are homogeneous. This may seem a strong assumption, but in fact it is without loss of generality. To see this, suppose that h(x) is our model of agent behaviour, where x is state, or all information that conditions the agent’s decision. Heterogeneity can be embedded in h by considering individual characteristics in state x, such as personality traits and socio-economic status, or, as in our application domain, housing characteristics.

#### pdf

cannot see any pdfs

#### Annotation 7102081666316

 #abm #agent-based #priority #rooftop-solar #simulation #synthetic-data Our third assumption is that each individual makes independent decisions at each time t, conditional on state x. Again, if x includes all features relevant to an agent’s decision, this assumption is relatively innocuous

#### pdf

cannot see any pdfs

#### Annotation 7102083239180

 #abm #agent-based #priority #rooftop-solar #simulation #synthetic-data Given these assumptions, DDABM proceeds as follows. We start with a data set of individual agent behavior over time, D = {(x_it, y_it)} for agents i and t = 0, ..., T, where t indexes time through some horizon T and y_it indicates agent i's decision, i.e., 1 for "adopted" and 0 for "did not adopt" at time t. 1. Split the data D into calibration D_c and validation D_v parts along the time dimension: D_c = {(x_it, y_it)} for t ≤ T_c and D_v = {(x_it, y_it)} for t > T_c, where T_c is a time threshold. 2. Learn a model of agent behavior h on D_c. Use cross-validation on D_c for model (e.g., feature) selection. 3. Instantiate agents in the ABM using h learned in step 2. 4. Initialize the ABM to state x_jT_c for all artificial agents j. 5. Validate the ABM by running it from x_T_c using D_v. One may wonder how to choose the initial state x_jT_c for the artificial agents. This is direct if the artificial agents in the ABM correspond to actual agents in the data. For example, in rooftop solar adoption we know which agents have adopted solar at time T_c, and their actual housing characteristics, etc. Alternatively, one can run the ABM from the initial state, and start validation upon reaching time T_c +
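The five DDABM steps can be sketched in Python. Everything below is illustrative: the record layout, the base-rate "model" standing in for a cross-validated learner, and the simulation loop are assumptions made for the sketch, not the paper's implementation:

```python
import random

def split_by_time(records, t_c):
    """Step 1: split D into calibration D_c (t <= T_c) and validation D_v (t > T_c)."""
    d_c = [r for r in records if r["t"] <= t_c]
    d_v = [r for r in records if r["t"] > t_c]
    return d_c, d_v

def learn_agent_model(d_c):
    """Step 2 (stand-in): a real pipeline would fit, e.g., a logistic model
    of P(adopt | x) with cross-validated feature selection; here we just
    return the calibration base rate, ignoring features entirely."""
    rate = sum(r["y"] for r in d_c) / len(d_c)
    return lambda x: rate

def run_abm(h, states_at_tc, horizon, seed=0):
    """Steps 3-5: instantiate homogeneous agents sharing the learned model h,
    start each from its observed state at T_c, and simulate independent
    per-step adoption decisions for `horizon` steps."""
    rng = random.Random(seed)
    adopted = {j: False for j in states_at_tc}
    for _ in range(horizon):
        for j, x in states_at_tc.items():
            if not adopted[j] and rng.random() < h(x):
                adopted[j] = True
    return adopted
```

Validation would then compare the simulated adoption trajectory for t > T_c against the held-out D_v.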

#### pdf

cannot see any pdfs

#### Annotation 7102432152844

 Cognitive Neuroscience: The Biology of the Mind. Michael S. Gazzaniga, University of California, Santa Barbara; Richard B.