Edited, memorised or added to reading queue on 06-Jun-2024 (Thu)

#deep-learning #keras #lstm #python #sequence

Chapter 3 How to Prepare Data for LSTMs

3.0.1 Lesson Goal
The goal of this lesson is to teach you how to prepare sequence prediction data for use with LSTM models. After completing this lesson, you will know:

- How to scale numeric data and how to transform categorical data.
- How to pad and truncate input sequences with varied lengths.
- How to transform input sequences into a supervised learning problem.
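These preparation steps can be sketched without any libraries (in practice you would typically reach for Keras' `pad_sequences` and scikit-learn's `MinMaxScaler`; the plain-Python version below is only an illustrative sketch of the same ideas):

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    """Rescale numeric values linearly into [lo, hi]."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0  # avoid division by zero for constant input
    return [lo + (v - vmin) * (hi - lo) / span for v in values]

def pad_truncate(seq, maxlen, value=0):
    """Left-pad short sequences with `value`; keep the last `maxlen` items
    of long ones (mirrors Keras' default 'pre' padding/truncating)."""
    if len(seq) >= maxlen:
        return seq[-maxlen:]
    return [value] * (maxlen - len(seq)) + seq

def to_supervised(seq, n_in=1):
    """Frame a sequence as (input window, next value) pairs, turning it
    into a supervised learning problem."""
    return [(seq[i:i + n_in], seq[i + n_in]) for i in range(len(seq) - n_in)]

print(min_max_scale([10.0, 20.0, 30.0]))    # [0.0, 0.5, 1.0]
print(pad_truncate([1, 2, 3], 5))           # [0, 0, 1, 2, 3]
print(to_supervised([1, 2, 3, 4]))          # [([1], 2), ([2], 3), ([3], 4)]
```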


Flashcard 7629619727628

Tags
#deep-learning #keras #lstm #python #sequence
Question
Sequence classification involves predicting a [...] for a given input sequence.
Answer
class label


Parent (intermediate) annotation

Sequence classification involves predicting a class label for a given input sequence.

Original toplevel document (pdf)


Flashcard 7629639650572

Tags
#has-images
Question
library(ggcharts)
(p <- bar_chart(cyl, cyl, pct))

Next, let’s try to change the axis labels to include a percentage sign using the ...

p + scale_y_continuous(labels = scales::[...])

Answer
percent


library(ggcharts)
(p <- bar_chart(cyl, cyl, pct))

Next, let’s try to change the axis labels to include a percentage sign using the ...

p + scale_y_continuous(labels = scales::percent)







Flashcard 7629643058444

Tags
#abm #agent-based #machine-learning #model #priority
Question
Neural networks can also be used for nonlinear adaptive control in [...] systems
Answer
multi-agent


Parent (intermediate) annotation

Neural networks can also be used for nonlinear adaptive control in multi-agent systems

Original toplevel document (pdf)


#causality #statistics
ignorability - we can make this assumption realistic by running randomized experiments, which force the treatment not to be caused by anything but a coin toss, giving us the causal structure shown in Figure 2.2.


Parent (intermediate) annotation

how realistic of an assumption is it? In general, it is completely unrealistic because there is likely to be confounding in most data we observe (causal structure shown in Figure 2.1). However, we can make this assumption realistic by running randomized experiments, which force the treatment to not be caused by anything but a coin toss, so then we have the causal structure shown in Figure 2.2. We cover randomized experiments in greater depth in Chapter 5.
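A toy simulation (illustrative only; the data-generating process and numbers are made up, not from the book) shows why randomization makes ignorability realistic. With a confounder driving both treatment and outcome, the naive difference in group means is biased; with a coin-toss treatment, it recovers the true effect:

```python
import random

random.seed(0)

def sim(randomized, n=100_000):
    """Outcome y = 2*t + 3*c + noise, so the true treatment effect is 2.
    Confounded case: the confounder c also drives treatment t.
    Randomized case: t is an independent coin toss."""
    groups = {0: [], 1: []}
    for _ in range(n):
        c = random.random()                      # confounder
        if randomized:
            t = random.randint(0, 1)             # coin toss, independent of c
        else:
            t = 1 if random.random() < c else 0  # treatment caused by c
        y = 2 * t + 3 * c + random.gauss(0, 0.1)
        groups[t].append(y)
    return sum(groups[1]) / len(groups[1]) - sum(groups[0]) / len(groups[0])

print(round(sim(randomized=False), 2))  # biased well above 2 by confounding
print(round(sim(randomized=True), 2))   # close to the true effect of 2
```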

Original toplevel document (pdf)

#deep-learning #keras #lstm #python #sequence
Truncated Backpropagation Through Time, or TBPTT, is a modified version of the BPTT training algorithm


Parent (intermediate) annotation

Truncated Backpropagation Through Time, or TBPTT, is a modified version of the BPTT training algorithm for recurrent neural networks where the sequence is processed one time step at a time and periodically an update is performed back for a fixed number of time steps
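As a rough plain-Python sketch (not Keras code): the common TBPTT(k, k) variant amounts to chopping a long sequence into fixed-length subsequences and backpropagating only within each one. In Keras, that fixed length is simply the timesteps dimension you choose for the input:

```python
def tbptt_windows(sequence, k):
    """Split a long sequence into non-overlapping subsequences of k time
    steps; gradients are then backpropagated only within each subsequence.
    Any leftover tail shorter than k is dropped in this sketch."""
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, k)]

print(tbptt_windows(list(range(10)), 4))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```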

Original toplevel document (pdf)

#causality #statistics
Association still flows in exactly the same way in Bayesian networks as it does in causal graphs, though. In both, association flows along chains and forks, unless a node is conditioned on. And in both, a collider blocks the flow of association, unless it is conditioned on.


Parent (intermediate) annotation

Regular Bayesian networks are purely statistical models, so we can only talk about the flow of association in Bayesian networks. Association still flows in exactly the same way in Bayesian networks as it does in causal graphs, though. In both, association flows along chains and forks, unless a node is conditioned on. And in both, a collider blocks the flow of association, unless it is conditioned on. Combining these building blocks, we get how association flows in general DAGs. We can tell if two nodes are not associated (no association flows between them) by whether or not they are
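A quick simulation (illustrative only, standard library only) makes the collider rule concrete: two independent causes of a common effect are uncorrelated until we condition on that common effect, which opens the path:

```python
import random

random.seed(1)

# a and b are independent causes; coll is their common effect (a collider).
n = 50_000
a = [random.gauss(0, 1) for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]
coll = [x + y + random.gauss(0, 0.1) for x, y in zip(a, b)]

def corr(u, v):
    """Pearson correlation, computed from scratch."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((x - mu) * (y - mv) for x, y in zip(u, v))
    su = sum((x - mu) ** 2 for x in u) ** 0.5
    sv = sum((y - mv) ** 2 for y in v) ** 0.5
    return cov / (su * sv)

print(round(corr(a, b), 2))  # near 0: no association through an unconditioned collider

# "Condition on" the collider by selecting a narrow slice of its values:
sel = [(x, y) for x, y, z in zip(a, b, coll) if abs(z) < 0.3]
a_s, b_s = zip(*sel)
print(round(corr(list(a_s), list(b_s)), 2))  # strongly negative: the path is open
```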

Original toplevel document (pdf)

Flashcard 7629760761100

Tags
#causality #statistics
Question
When we say “estimation,” we are referring to the process of moving from a statistical [...] to an estimate
Answer
estimand


Parent (intermediate) annotation

When we say “estimation,” we are referring to the process of moving from a statistical estimand to an estimate

Original toplevel document (pdf)

Flashcard 7629767314700

Tags
#causality #statistics
Question
The graph with edges [...] is known as the manipulated graph
Answer
removed


Parent (intermediate) annotation

The graph with edges removed is known as the manipulated graph

Original toplevel document (pdf)

Flashcard 7629770984716

Tags
#causality #statistics
Question
[...] encompasses the assumption that is sometimes referred to as “no multiple versions of treatment.”
Answer
consistency


Parent (intermediate) annotation

consistency encompasses the assumption that is sometimes referred to as “no multiple versions of treatment.”

Original toplevel document (pdf)

#CURRENT_READING #deep #keras #learning #tensorflow #tfc-II
model amalgamation. In this approach, models are broken into composable units that share and adapt components to achieve different objectives with the same initial data. The components are interconnected in a variety of connectivity patterns, in which each component learns communication interfaces between the models through design, without the necessity of a backend application

#deep #keras #learning #tensorflow #tfc-II
In addition, model amalgamation can be used to train Internet of Things (IoT) devices for data enrichment, turning IoT sensors from static to dynamically learning devices — a technique called model fusion

#deep #keras #learning #tensorflow #tfc-II
enterprise production is moving toward automatic learning for model development

#deep #keras #learning #tensorflow #tfc-II
Model development for production continues to be a combination of automatic and hand-designed learning, which is often crucial for proprietary needs or advantages. But designing by hand does not mean starting from scratch; typically, you would start with a stock model and make tweaks and adjustments. To do this effectively, you need to know how the model works and why it works that way, the concepts that underlie its design, and the pros and cons of the alternative building blocks you will learn from other SOTA models

#deep #keras #learning #tensorflow #tfc-II
formerly state-of-the-art (SOTA) models

#deep #keras #learning #tensorflow #tfc-II
advancements in design patterns for structured data evolved with the introduction of the wide-and-deep model pattern, outlined in “Wide & Deep Learning for Recommender Systems” (https://arxiv.org/abs/1606.07792) by Heng-Tze Cheng et al., at the technology-agnostic research group Google Research, in 2016
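The pattern itself is simple to sketch. Below is a toy, dependency-free forward pass (all weights are arbitrary placeholders, not a trained recommender): a linear "wide" path over raw or crossed features handles memorisation, a small ReLU "deep" path handles generalisation, and the two logits are summed into a single sigmoid output:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def wide_deep_forward(wide_x, deep_x, w_wide, w_hidden, w_out, b=0.0):
    # Wide path: plain linear model over the wide features (memorisation).
    wide = sum(w * x for w, x in zip(w_wide, wide_x))
    # Deep path: one ReLU hidden layer over the deep features (generalisation).
    hidden = [max(0.0, sum(w * x for w, x in zip(row, deep_x))) for row in w_hidden]
    deep = sum(w * h for w, h in zip(w_out, hidden))
    # Joint logit through a single sigmoid, as in the wide-and-deep pattern.
    return sigmoid(wide + deep + b)

# Placeholder weights, purely for illustration:
w_wide = [0.5, -0.2]
w_hidden = [[0.1, 0.3], [-0.4, 0.2]]
w_out = [0.7, -0.1]
p = wide_deep_forward([1.0, 0.0], [0.5, 1.0], w_wide, w_hidden, w_out)
print(round(p, 3))  # 0.678
```

In the paper the two paths are trained jointly, with the wide part typically fed sparse cross-product feature transformations; this sketch only shows how the paths combine at inference.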
