Edited, memorised or added to reading queue

on 31-Oct-2024 (Thu)


Flashcard 7629895503116

Tags
#causality #statistics
Question
Whenever do(t) appears after the [...], it means that everything in that expression is in the post-intervention world where the intervention do(t) occurs.
Answer
conditioning bar


Parent (intermediate) annotation

Whenever do(t) appears after the conditioning bar, it means that everything in that expression is in the post-intervention world where the intervention do(t) occurs.
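
For instance (illustrative notation, not taken from the PDF):

P(Y | do(t))         -- the distribution of Y after intervening to set T = t
P(Y | X = x, do(t))  -- the distribution of Y given X = x, with both Y and X evaluated in the post-intervention world
P(Y | X = x)         -- no do(.) after the conditioning bar, so no intervention is involved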








LSTM Weights
#deep-learning #keras #lstm #python #sequence

Input Weights.

Used to weight input for the current time step.



Parent (intermediate) annotation

1.4.1 LSTM Weights. A memory cell has weight parameters for the input and output, as well as an internal state that is built up through exposure to input time steps. Input Weights: used to weight the input for the current time step. Output Weights: used to weight the output from the last time step. Internal State: the internal state used in the calculation of the output for this time step.
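
As a rough illustration of where these weights live in Keras (a sketch of mine, not from the book excerpt; Keras groups the parameters as an input kernel, a recurrent kernel and a bias rather than using the excerpt's exact terminology):

from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model

n_steps, n_features, n_units = 5, 3, 8
inputs = Input(shape=(n_steps, n_features))
outputs = LSTM(n_units)(inputs)
model = Model(inputs, outputs)

# Keras stores the LSTM parameters as three arrays
kernel, recurrent_kernel, bias = model.layers[1].get_weights()
print(kernel.shape)            # (3, 32): weights applied to the input of the current time step
print(recurrent_kernel.shape)  # (8, 32): weights applied to the previous hidden state / output
print(bias.shape)              # (32,): 4 gates x 8 units

Note that the internal (cell) state itself is not a weight: it is activation state accumulated across time steps, so it does not appear in get_weights().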





Flashcard 7649822379276

Tags
#deep-learning #keras #lstm #python #sequence
Question
If the number of input and output time steps vary, then an Encoder-Decoder architecture can be used. The input time steps are mapped to a [...] internal representation of the sequence, then this vector is used as input to produce each time step in the output sequence.
Answer
fixed-size


Parent (intermediate) annotation

If the number of input and output time steps vary, then an Encoder-Decoder architecture can be used. The input time steps are mapped to a fixed-size internal representation of the sequence, then this vector is used as input to produce each time step in the output sequence.
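
A minimal Keras sketch of that idea (my own illustration with arbitrary sizes, not taken from the book): the encoder LSTM compresses the input time steps into a fixed-size vector, RepeatVector presents that vector to the decoder once per output time step, and the decoder emits the output sequence.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

n_steps_in, n_steps_out, n_features = 6, 3, 1

model = Sequential([
    Input(shape=(n_steps_in, n_features)),
    LSTM(64),                         # encoder: input sequence -> fixed-size vector of length 64
    RepeatVector(n_steps_out),        # feed that vector to the decoder at every output step
    LSTM(64, return_sequences=True),  # decoder: one hidden state per output time step
    TimeDistributed(Dense(1)),        # one predicted value per output time step
])
model.compile(optimizer="adam", loss="mse")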








#data-science #infrastructure
data scientists and engineers are expected to build end-to-end solutions to business problems, of which models are a small but important part.


Parent (intermediate) annotation

data scientists and engineers are expected to build end-to-end solutions to business problems, of which models are a small but important part. Because this book focuses on end-to-end solutions, we say that the data scientistโ€™s job is to build data science applications.





Flashcard 7663768440076

Tags
#causality #has-images #statistics


Question
the causal effect estimate will be biased by the non-causal association that we induce when we condition on Z or any of [...]
Answer
its descendants


Parent (intermediate) annotation

the causal effect estimate will be biased by the non-causal association that we induce when we condition on Z or any of its descendants
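
A small simulation of the classic case (my own illustration, not from the PDF; here the conditioned-on variable Z is a collider, a common effect of the treatment and the outcome):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t = rng.normal(size=n)             # treatment
y = 1.0 * t + rng.normal(size=n)   # outcome; the true causal effect of t on y is 1.0
z = t + y + rng.normal(size=n)     # collider: caused by both t and y

# Regressing y on t alone recovers roughly the true effect of 1.0
X = np.column_stack([np.ones(n), t])
print(np.linalg.lstsq(X, y, rcond=None)[0][1])   # ~1.0

# Adding the collider z to the regression induces a non-causal association
# and biases the coefficient on t (with these parameters it collapses towards 0)
Xz = np.column_stack([np.ones(n), t, z])
print(np.linalg.lstsq(Xz, y, rcond=None)[0][1])  # far from 1.0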








Flashcard 7663770275084

Tags
#has-images #tensorflow #tensorflow-certificate
Question
Typical workflow: build a model -> fit it -> [...] -> tweak -> fit -> evaluate -> ...
Answer
evaluate


Parent (intermediate) annotation

Typical workflow: build a model -> fit it -> evaluate -> tweak -> fit -> evaluate -> ...

Original toplevel document

TfC 01 regression
activation functions # 2. Compiling: change optimizer or its parameters (eg. learning rate) # 3. Fitting: more epochs, more data ### How? # from smaller model to larger model Evaluating models Typical workflow: build a model -> fit it -> evaluate -> tweak -> fit -> evaluate -> ... Building model: experiment Evaluating model: visualize What can we visualize? the data, the model itself, the training of a model, predictions ## The 3 sets (or actually 2 sets: training and test
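
A minimal sketch of that loop in Keras (my own illustration with synthetic data and arbitrary hyperparameters, not the course notebook):

import numpy as np
import tensorflow as tf

X = np.arange(-50.0, 50.0).reshape(-1, 1)
y = 3.0 * X + 7.0                      # a simple regression target
X_train, y_train = X[:80], y[:80]
X_test, y_test = X[80:], y[80:]

# 1. Build a model
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(loss="mae", optimizer=tf.keras.optimizers.SGD(), metrics=["mae"])

# 2. Fit it
model.fit(X_train, y_train, epochs=100, verbose=0)

# 3. Evaluate it
print(model.evaluate(X_test, y_test, verbose=0))

# 4. Tweak (more layers, more epochs, a different optimizer or learning rate), then fit and evaluate again.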







Flashcard 7663772110092

Tags
#ggplot2
Question
The problem here is that by default scales::percent() [...] its input value by 100. This can be controlled by the scale parameter.
Answer
multiplies


Parent (intermediate) annotation

The problem here is that by default scales::percent() multiplies its input value by 100. This can be controlled by the scale parameter.

Original toplevel document

Something is not right here! 4000%!? That seems a bit excessive. The problem here is that by default scales::percent() multiplies its input value by 100. This can be controlled by the scale parameter. scales::percent(100, scale = 1) ## [1] "100%" However, scale_y_continuous() expects a function as input for its labels parameter, not the actual labels themselves. Thus, using percent() is not an option anymore. Fortu







Flashcard 7663773682956

Tags
#ggplot2
Question
The problem here is that by default scales::[...] multiplies its input value by 100. This can be controlled by the scale parameter.
Answer
percent()


Parent (intermediate) annotation

The problem here is that by default scales::percent() multiplies its input value by 100. This can be controlled by the scale parameter.

Original toplevel document

Something is not right here! 4000%!? That seems a bit excessive. The problem here is that by default scales::percent() multiplies its input value by 100. This can be controlled by the scale parameter. scales::percent(100, scale = 1) ## [1] "100%" However, scale_y_continuous() expects a function as input for its labels parameter, not the actual labels themselves. Thus, using percent() is not an option anymore. Fortu