Edited, memorised or added to reading queue on 21-Jun-2022 (Tue)


Flashcard 7093186596108

Tags
#DAG #causal #edx #has-images
[unknown IMAGE 7093187120396]
[unknown IMAGE 7093175586060]


Flashcard 7096076995852

Tags
#DAG #causal #edx #has-images #inference
[unknown IMAGE 7096077520140]
[unknown IMAGE 7096075685132]


Flashcard 7096080403724

Tags
#DAG #causal #edx #has-images #inference
[unknown IMAGE 7096080928012]
[unknown IMAGE 7096075685132]


Flashcard 7096084335884

Tags
#DAG #causal #edx #has-images #inference
[unknown IMAGE 7096084860172]
[unknown IMAGE 7096075685132]


#DAG #causal #edx #inference
There's no selection bias without selection. And selection is, of course, present in all studies. But for selection to cause bias under the null, it needs to be related to both treatment A and outcome Y.
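This mechanism can be checked with a small simulation (a sketch; the 0.9/0.1 selection probabilities and sample size are invented for illustration). Treatment A has no effect on outcome Y, yet restricting to selected individuals, where selection depends on both A and Y, manufactures an association:

```python
import random

random.seed(0)
n = 50_000

# Null scenario: treatment A has NO effect on outcome Y (independent coin flips).
# Selection S is related to BOTH A and Y: treated or diseased people are much
# more likely to end up in the study (0.9 vs 0.1, invented numbers).
rows = []
for _ in range(n):
    a = random.random() < 0.5
    y = random.random() < 0.5
    selected = random.random() < (0.9 if (a or y) else 0.1)
    rows.append((a, y, selected))

def risk_diff(pairs):
    """P(Y=1 | A=1) - P(Y=1 | A=0) within the given (a, y) pairs."""
    y1 = [y for a, y in pairs if a]
    y0 = [y for a, y in pairs if not a]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

rd_all = risk_diff([(a, y) for a, y, s in rows])        # whole population
rd_sel = risk_diff([(a, y) for a, y, s in rows if s])   # study sample only

print(round(rd_all, 3))  # ~0.0: no association under the null
print(round(rd_sel, 3))  # ~ -0.4: bias appears once we condition on selection
```

Among the selected, the risk difference is strongly negative even though A and Y are independent in the full population: selection related to both variables is exactly what creates the bias.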

Flashcard 7096090103052

Tags
#causality #statistics
Question
SUTVA is a combination of [...] and no interference (and also deterministic potential outcomes)
Answer
consistency


Flashcard 7096091938060

Tags
#causality #statistics
Question
Causal graphs are special in that we additionally assume that the edges have causal meaning (causal edges assumption, Assumption 3.3). This assumption is what introduces causality into our models, and it makes one type of path take on a whole new meaning: [...] paths.
Answer
directed


#DAG #causal #edx
The most important take-home message: we need expert knowledge to determine whether we should adjust for a variable. The statistical criteria are insufficient to characterize confounding and confounders. Of course, in many cases we don't have enough expert knowledge to draw the true causal DAG that represents the causal structure of treatment A, outcome Y, and potential confounder L.




#abm #agent-based #machine-learning #model #priority
Traditional agent-based modelling is mostly rule-based. For many systems, this approach is extremely successful, since the rules are well understood. However, for a large class of systems it is difficult to find rules that adequately describe the behaviour of the agents. A simple example would be two agents playing chess: Here, it is impossible to find simple rules. To solve this problem, we introduce a framework for agent-based modelling that incorporates machine learning. In a process closely related to reinforcement learning, the agents learn rules.

#abm #agent-based #machine-learning #model #priority
The main idea of agent-based modelling is to simulate a system not from the top down, i.e. starting from the whole system and rules and equations that govern it, but from the bottom up, i.e. with the individual components (agents) that comprise the system as a starting point. In many systems this approach has huge benefits. While a top-down approach needs complete understanding of all the processes that lead to the dynamic of the system, like feedback, synergies and nonlinear effects, a bottom-up approach only needs understanding of the rules or equations that govern the individual behaviour of each agent. Other effects can then emerge due to interactions between agents [14].
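As a minimal illustration of the bottom-up idea (an invented toy, not from the text): each agent below follows only a local rule, nudging its state toward the average of two random peers, yet global consensus emerges without any top-down equation governing the whole system.

```python
import random

random.seed(1)

# 100 agents with random initial "opinions" in [-1, 1].
opinions = [random.uniform(-1.0, 1.0) for _ in range(100)]

def step(opinions):
    """Local rule only: each agent mixes its state with two random peers."""
    new = list(opinions)
    for i in range(len(opinions)):
        j, k = random.sample(range(len(opinions)), 2)
        local_avg = (opinions[j] + opinions[k]) / 2
        new[i] = 0.8 * opinions[i] + 0.2 * local_avg
    return new

for _ in range(300):
    opinions = step(opinions)

spread = max(opinions) - min(opinions)
print(round(spread, 3))  # near 0: consensus emerged from interactions alone
```

No rule ever mentions the system-wide state; convergence is an emergent effect of the pairwise interactions, which is the kind of behaviour a top-down model would have to build in explicitly.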

#abm #agent-based #machine-learning #model #priority
Especially in areas where the agents represent human beings or entities influenced by human behaviour and decision making (corporations, governments), agent-based modelling is a promising technique that has recently been gaining popularity.

#abm #agent-based #machine-learning #model #priority
For every agent-based model, it is necessary to define the agents and the environment of the modelled system via certain qualitative or quantitative properties, and to find rules or equations that govern how the agents interact with each other and with their environment. The rules should include what information the agents have access to and what action they then take, considering bounded rationality [21–23]. While the input the agents use for their decision is relatively easy to find and to justify, specifying the rules that lead to a decision is much more difficult and often relies on assumptions from psychology or economics, which are hard to back up empirically or theoretically. This makes the search for valid rules for agent behaviour one of the biggest challenges in agent-based modelling.

#abm #agent-based #machine-learning #model #priority
To solve this problem of rules that are unknown to us, it is intuitive to use a form of machine learning to find them. This idea was explored in [24], where a framework for agent-based modelling was presented and used to replicate Schelling's prominent segregation model [25]. The main idea of the framework is closely related to reinforcement learning [26], in the sense that agents learn how to behave in order to optimize their score or utility function. However, the goal is completely different. While reinforcement learning tries to find optimal solutions and provides the Neural Network with as much information as possible, the presented framework limits the available information to things the agents can actually perceive and also allows for non-optimal decisions. The goal is to emulate a realistic decision process, not to find an optimal solution.

#abm #agent-based #machine-learning #model #priority
Neural networks can also be used for nonlinear adaptive control in multi-agent systems

#abm #agent-based #machine-learning #model #priority

The goal of the presented framework is to provide a universal technique for agent-based models, in which the decision making process of the agents is not determined by theory-driven or empirically found rules, but rather by an Artificial Neural Network. The process itself can be separated into four phases:

(1) Initialization

(2) Experience

(3) Training

(4) Application


#abm #agent-based #machine-learning #model #priority

Universal Framework for Agent-based Models - 4 phases

(1) Initialization (2) Experience (3) Training (4) Application

In the first phase, Initialization, the important features of the agents and their environment need to be defined. Agents need some kind of input, which can be either qualitative or quantitative. In the simplest case this is sensory input or general knowledge, but other inputs that influence decision making are also conceivable, such as memory or individual preferences. One also needs to define the decision an agent makes in each time step; most generally this is a choice between several possible actions. Each agent also needs a target or goal, mathematically expressed as a score or utility function that the agent wants to maximize. For simple economic systems, profit is a score that can easily be quantified and used as a target, but many other goals can be used, possibly including fairness [45,46] and social preferences [47,48]. Depending on the system, completely different properties can serve as goals (e.g. minimization of travel time for traffic systems), and they can differ for each agent.

Once the system, the agents, the input for the agents, and the decision and goal of each agent are defined, the Initialization phase is finished. Note that defining a utility function is in most cases easier than finding a rule set that leads to optimizing this function. Think of a game of chess: the utility function is easy to define (1 for a win, 0 otherwise), but finding a set of rules that takes the position of all pieces as input and outputs a realistic move (probably even related to player skill) is nearly impossible. In that sense, the Initialization phase is relatively simple compared to traditional agent-based models.
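The Initialization phase can be sketched in code. Everything here (the toy trading setting and the names `Agent`, `DECISIONS`, `observe`, `score`) is an illustrative assumption, not taken from the text; the point is that the utility function is a one-liner even when realistic decision rules would be hard to write down.

```python
from dataclasses import dataclass, field

# The decision each agent makes in every time step: a choice between actions.
DECISIONS = ["buy", "sell", "hold"]

@dataclass
class Agent:
    cash: float = 100.0
    stock: int = 0
    memory: list = field(default_factory=list)  # optional extra input

    def observe(self, price: float) -> list:
        """Inputs the agent can actually perceive this time step."""
        return [price, self.cash, float(self.stock)]

    def score(self, price: float) -> float:
        """Utility the agent wants to maximize: total wealth.
        Defining this is far easier than hand-writing trading rules."""
        return self.cash + self.stock * price

agent = Agent(cash=80.0, stock=2)
print(agent.score(10.0))  # 100.0
```

With inputs, decision set, and utility fixed, Initialization is done; no behavioural rules have been written at all.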


#abm #agent-based #machine-learning #model #priority
In the second phase, Experience, agents make random decisions to collect information in a database that can then be used to train the Artificial Neural Network. In every time step agents first observe their environment and store all the relevant data in the database. They also calculate their current score. Next, they make a random decision and recalculate their score once more after performing the action they chose. The result of this decision is then rated as positive if the score increased, negative if the score decreased, or neutral if there was no or minimal change in score. The complete data set of one experience thus includes the inputs, the decision that was chosen, and the result of this decision.

The result of the decision is stored only qualitatively because we do not assume that the agents have quantitative knowledge about their own utility: they only sense whether their situation improved, but could not give an accurate number to quantify it. The whole process is depicted in the upper panel of Figure 1. In order to generate a sufficient pool of information, many time steps are necessary, in which agents should encounter and rate a huge number of combinations of input, decision and result. Note that in this phase the input has no influence on the decision making of agents; it is only stored in the database to enable the training of the Artificial Neural Network in the next phase.
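A minimal sketch of the Experience loop, using an invented one-dimensional toy environment (the agent moves on a line, and its score is the negative distance to a target): decisions are random, and only the sign of the score change is stored.

```python
import random

random.seed(2)

DECISIONS = [-1, 0, +1]  # move left / stay / move right (invented toy actions)
TARGET = 10              # score = -|TARGET - position|, the utility to maximize

def score(position):
    return -abs(TARGET - position)

database = []
position = 0
for _ in range(2000):
    inputs = [position]                  # what the agent perceives this step
    before = score(position)
    decision = random.choice(DECISIONS)  # random, NOT informed by the inputs
    position += decision
    after = score(position)
    # Only the sign of the change is recorded: the agent senses improvement
    # or deterioration, not an exact utility value.
    if after > before:
        result = "good"
    elif after < before:
        result = "bad"
    else:
        result = "neutral"
    database.append((inputs, decision, result))

print(len(database))  # 2000 rated (input, decision, result) experiences
```

The inputs play no role in the choice yet; they are stored purely so that the next phase has something to learn from.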

#abm #agent-based #machine-learning #model #priority

In the Training phase, the Artificial Neural Network is trained to solve the following classification problem: the Neural Network is presented with the input of the agent and a decision, and should estimate whether this is a good, a neutral or a bad decision. Various methods could be used for this type of problem. In order to keep the framework versatile, a hidden-layer approach [49,50] is used here. After training the Artificial Neural Network, an unused part of the database gathered in phase two is used for cross validation [51]. The Artificial Neural Network is implemented using scikit-learn [52].
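A sketch of the Training phase on the same invented toy environment. The text specifies scikit-learn and a hidden-layer classifier, so `MLPClassifier` is used here; the environment, feature scaling, split size and network size are all assumptions. An unused slice of the experience database serves for validation.

```python
import random

from sklearn.neural_network import MLPClassifier

random.seed(3)
TARGET = 10  # invented toy environment: walk on a line toward TARGET

def rate(position, decision):
    """Qualitative result: did the score -|TARGET - position| improve?"""
    before = -abs(TARGET - position)
    after = -abs(TARGET - (position + decision))
    return "good" if after > before else "bad" if after < before else "neutral"

# Experience database: (scaled input, decision) -> qualitative result.
X, y, position = [], [], 0
for _ in range(3000):
    decision = random.choice([-1, 0, 1])
    X.append([(position - TARGET) / 10.0, float(decision)])
    y.append(rate(position, decision))
    position += decision

# Shuffle, train on most of the database, keep an unused slice for validation.
idx = list(range(len(X)))
random.shuffle(idx)
X, y = [X[i] for i in idx], [y[i] for i in idx]
split = 2400

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:split], y[:split])
accuracy = clf.score(X[split:], y[split:])
print(accuracy > 0.8)
```

The classifier's job is exactly the one described: given input plus a candidate decision, predict good, bad, or neutral.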


[unknown IMAGE 7096133094668] #abm #agent-based #has-images #machine-learning #model #priority

In the last phase, Application, the trained Neural Network is used for decision making. Agents are reset to their original initial conditions so that the actions performed during the Experience phase have no direct influence on the Application phase. In each time step agents gather inputs and use the Artificial Neural Network for decision making: the current inputs are combined with every possible decision, and the Neural Network estimates whether such a decision would be good or bad. The agent then chooses the option with the highest confidence for a positive result. This process is depicted in the lower panel of Figure 1.
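A sketch of the Application phase, compressing the earlier (invented) toy environment and training into a few lines: the current input is paired with every candidate decision, and the agent takes the one the network is most confident will be "good". The names `TARGET`, `DECISIONS` and `choose` are illustrative assumptions.

```python
import random

from sklearn.neural_network import MLPClassifier

random.seed(4)
TARGET = 10            # invented toy environment: walk on a line toward TARGET
DECISIONS = [-1, 0, 1]

def rate(position, decision):
    """Qualitative result: did the score -|TARGET - position| improve?"""
    before = -abs(TARGET - position)
    after = -abs(TARGET - (position + decision))
    return "good" if after > before else "bad" if after < before else "neutral"

# Compressed Experience + Training (see the earlier phase sketches).
X, y, position = [], [], 0
for _ in range(3000):
    d = random.choice(DECISIONS)
    X.append([(position - TARGET) / 10.0, float(d)])
    y.append(rate(position, d))
    position += d
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

def choose(position):
    """Pair the current input with every possible decision; pick the one
    with the highest estimated probability of being 'good'."""
    good = list(clf.classes_).index("good")
    return max(DECISIONS,
               key=lambda d: clf.predict_proba(
                   [[(position - TARGET) / 10.0, float(d)]])[0][good])

# Application: the agent is reset to its initial condition and now acts
# through the network instead of randomly.
position = 0
for _ in range(20):
    position += choose(position)
print(position)
```

Starting from the reset position, the greedy choice over predicted "good" probabilities walks the agent to the target and keeps it hovering there, with no hand-written movement rule anywhere.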


#abm #agent-based #machine-learning #model #priority
Compared to the conventional approach to agent-based modelling, this framework has various advantages. First and foremost, the most difficult task in developing an agent-based model, namely the definition of the rules and equations governing agent behaviour, is translated into the definition of the goals of each agent and of which parts of the system they can observe. The connection between input and decision is then handled objectively by an Artificial Neural Network. This also means that the model is highly adaptive: if the goals of the agents, their input or properties of the system change, retraining the Neural Network is the only adaptation necessary. Section 3 showcases this flexibility with different examples. In addition to its flexibility and objectivity, the framework also enables an intuitive way to include bounded rationality in a model: agents always decide on what they think has the highest chance of being a good decision.

[unknown IMAGE 7096137551116] #abm #agent-based #has-images #machine-learning #model #priority

#abm #agent-based #machine-learning #model #priority
If they (agents) have incomplete or wrong information, we do not need to find special rules that would rely heavily on assumptions, but are still able to use the same process, i.e. an Artificial Neural Network that is just trained differently or uses incomplete/wrong input. In that case, the Neural Network acts as more than just a method to classify possible decisions into good, bad and neutral decisions: it models the decision process of an agent realistically, in the sense that, depending on the gathered experience, the choice might not be optimal all the time.

#abm #agent-based #machine-learning #model #priority #synergistic-integration
Abstract— Agent-based modeling (ABM) involves developing models in which agents make adaptive decisions in a changing environment. Machine-learning (ML) based inference models can improve sequential decision-making by learning agents' behavioural patterns. With the aid of ML, this emerging area can extend traditional agent-based schemes that hardcode agents' behavioral rules into an adaptive model. Even though there are plenty of studies that apply ML in ABMs, the generalized applicable scenarios, frameworks, and procedures for implementations are not well addressed. In this article, we provide a comprehensive review of applying ML in ABM based on four major scenarios, i.e., microagent-level situational awareness learning, microagent-level behavior intervention, macro-ABM-level emulator, and sequential decision-making. For these four scenarios, the related algorithms, frameworks, procedures of implementations, and multidisciplinary applications are thoroughly investigated. We also discuss how ML can improve prediction in ABMs by trading off variance and bias, and how ML can improve the sequential decision-making of microagents and macrolevel policymakers via a mechanism of reinforced behavioural intervention. At the end of this article, future perspectives of applying ML in ABMs are discussed with respect to data acquisition and quality issues, possible solutions to the convergence problem of reinforcement learning, interpretable ML applications, and the bounded rationality of ABM.

Index Terms— Agent-based modeling (ABM), behavioral intervention, machine learning (ML), reinforcement learning (RL).

#abm #agent-based #machine-learning #model #priority #synergistic-integration
Modeling skills can be applied to systems of different complexities, ranging from a simple system, like a circuit with a resistor (Ohm's Law), to complex systems, such as the global economic system. Most real-life systems involve an extended range of spatial or temporal scales, as well as interactions between the built environment and natural systems, which can be very complex.