Edited, memorised or added to reading queue on 16-Sep-2025 (Tue)


#abm #agent-based #priority #rooftop-solar #simulation #synthetic-data
The common approach is to use relatively simple agent models (based, for example, on qualitative knowledge of the domain or a qualitative understanding of human behavior), so that complexity arises primarily from the agents' interactions with one another and with the environment. For example, Thiele et al. [40] document that only 14% of articles published in the Journal of Artificial Societies and Social Simulation include parameter fitting. Our key methodological contribution is a departure from developing simple agent models based on relevant qualitative insights to learning such models entirely from data. Because it relies on data about individual agent behavior, our approach is not universally applicable.
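
A hedged sketch of the learned variant (the features, the logistic-regression choice, and the synthetic data are illustrative assumptions, not the paper's method): fit an agent's adoption rule to individual-level data, then consult it inside the simulation loop so that complexity still arises from interaction.

```python
# Learn an agent behavior model from individual-level data rather than
# hand-coding it; all names and numbers here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: [income, fraction of adopting neighbors] -> adopted?
X = rng.random((500, 2))
y = (0.3 * X[:, 0] + 0.7 * X[:, 1] + 0.1 * rng.standard_normal(500) > 0.6).astype(int)

# The agent model is fit to data instead of specified qualitatively.
behavior = LogisticRegression().fit(X, y)

# Toy simulation: 100 agents, fully mixed "neighborhood" for simplicity.
income = rng.random(100)
adopted = np.zeros(100, dtype=bool)
for step in range(20):
    features = np.column_stack([income, np.full(100, adopted.mean())])
    p = behavior.predict_proba(features)[:, 1]   # learned adoption propensity
    adopted |= rng.random(100) < 0.1 * p         # small per-step adoption chance
print(f"adopters after 20 steps: {adopted.sum()}")
```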

#deep-learning #keras #lstm #python #sequence

Below are some common configurations for the batch size:

batch size=1: Weights are updated after each sample and the procedure is called stochastic gradient descent.

batch size=32: Weights are updated after a specified number of samples and the procedure is called mini-batch gradient descent. Common values are 32, 64, and 128.
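
In Keras these settings differ only in the batch_size argument to fit(); a minimal sketch (the model and data shapes are illustrative, not from the source):

```python
# The same model trained with batch_size=1 (stochastic gradient descent)
# and batch_size=32 (mini-batch gradient descent).
import numpy as np
from tensorflow import keras

X = np.random.rand(256, 10, 1)   # 256 samples, 10 timesteps, 1 feature
y = np.random.rand(256, 1)

model = keras.Sequential([
    keras.Input(shape=(10, 1)),
    keras.layers.LSTM(8),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# batch size = 1: weights are updated after every single sample.
model.fit(X, y, epochs=1, batch_size=1, verbose=0)

# batch size = 32: weights are updated after each block of 32 samples.
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```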


Flashcard 7757342838028

Tags
#deep-learning #keras #lstm #python #sequence
Question

Below are some common configurations for the batch size:

batch size=[...]: Weights are updated after each sample and the procedure is called stochastic gradient descent.

Answer
1


Flashcard 7757344935180

Tags
#deep-learning #keras #lstm #python #sequence
Question
Standardizing a dataset involves rescaling the distribution of values so that [...] of observed values is 0 and the standard deviation is 1
Answer
the mean
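
A minimal NumPy sketch of standardization (the sample values are made up):

```python
# Standardize: subtract the mean, divide by the standard deviation.
import numpy as np

values = np.array([2.0, 4.0, 6.0, 8.0])
standardized = (values - values.mean()) / values.std()

print(standardized.mean())  # ~0.0
print(standardized.std())   # ~1.0
```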


Flashcard 7757347818764

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
The HMM has N discrete hidden states (where N is typically [...]) and, therefore, has only log₂(N) bits of information available to capture the sequence history (Brown & Hinton, 2001)
Answer
small
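
A quick worked calculation of that capacity bound:

```python
# The hidden state is one of N discrete values, so it can carry at most
# log2(N) bits of information about the sequence history.
import math

for n in [2, 8, 16, 64]:
    print(f"N = {n:2d} hidden states -> {math.log2(n):.0f} bits")
```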


Flashcard 7757349915916

Tags
#deep-learning #keras #lstm #python #sequence
Question
Increasing the depth of the network provides an alternate solution that requires [...] and trains faster
Answer
fewer neurons
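
A hedged Keras sketch of trading width for depth (the layer sizes are illustrative, not from the source); the deeper stack uses far fewer parameters:

```python
# One wide LSTM layer versus a deeper stack of narrow ones.
from tensorflow import keras

wide = keras.Sequential([
    keras.Input(shape=(10, 1)),
    keras.layers.LSTM(256),
    keras.layers.Dense(1),
])

deep = keras.Sequential([
    keras.Input(shape=(10, 1)),
    # return_sequences=True passes the full sequence to the next LSTM layer.
    keras.layers.LSTM(32, return_sequences=True),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])

print(wide.count_params())  # ~264k parameters
print(deep.count_params())  # ~13k parameters
```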


Flashcard 7757351488780

Tags
#deep-learning #keras #lstm #python #sequence
Question
Long Short-Term Memory (LSTM) is an [...] architecture specifically designed to address the vanishing gradient problem
Answer
RNN
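
A minimal Keras sketch (shapes and sizes are assumptions): an LSTM layer is a drop-in replacement for a simple recurrent layer, and its gated cell state is what mitigates the vanishing gradient over long sequences.

```python
from tensorflow import keras

simple = keras.Sequential([
    keras.Input(shape=(100, 1)),
    keras.layers.SimpleRNN(32),   # gradients tend to vanish over 100 steps
    keras.layers.Dense(1),
])

lstm = keras.Sequential([
    keras.Input(shape=(100, 1)),
    keras.layers.LSTM(32),        # gates preserve long-range error signal
    keras.layers.Dense(1),
])
```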
