Edited, memorised or added to reading queue on 04-Feb-2026 (Wed)


#deep-learning #keras #lstm #python #sequence
Positive and negative shifts can be used to create a new DataFrame from a time series with sequences of input and output patterns for a supervised learning problem. This permits not only classical X -> y prediction, but also X -> Y, where both input and output can be sequences.
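
A minimal pandas sketch of this framing (the toy series and the t-1/t+1 column labels are just for illustration):

```python
import pandas as pd

# A toy univariate series observed at times t = 0..4.
df = pd.DataFrame({"obs": [10, 20, 30, 40, 50]})

# A positive shift pushes values down the frame, so past observations
# line up as input features; a negative shift pulls future values up
# to serve as output targets.
df["t-1"] = df["obs"].shift(1)   # input: previous observation
df["t+1"] = df["obs"].shift(-1)  # output: next observation

# Rows at the edges pick up NaNs; drop them to keep complete pairs.
supervised = df.dropna()
print(supervised)
#     obs   t-1   t+1
# 1    20  10.0  30.0
# 2    30  20.0  40.0
# 3    40  30.0  50.0
```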






#deep-learning #keras #lstm #python #sequence
The shift function also works on so-called multivariate time series problems, that is, where instead of one set of observations for a time series we have multiple (e.g. temperature and pressure). All variates in the time series can be shifted forward or backward to create multivariate input and output sequences.
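
A hedged sketch of the multivariate case, assuming made-up temperature and pressure readings:

```python
import pandas as pd

# Hypothetical multivariate series: two variates observed together.
df = pd.DataFrame({
    "temperature": [20.1, 20.5, 21.0, 21.4],
    "pressure":    [1012, 1011, 1009, 1008],
})

# shift() on a DataFrame moves every variate at once, so one call
# produces a full multivariate input (t-1) and output (t) frame.
X = df.shift(1).add_suffix("(t-1)")
Y = df.add_suffix("(t)")
frame = pd.concat([X, Y], axis=1).dropna()
print(frame)
```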






Flashcard 7794963123468

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
Many alternative model specifications and network architectures offer the promise of improvements over vanilla LSTM models. They have already proven superior in some domains. Such alternative specifications include Gated Recurrent Units, BiLSTM (Siami-Namini, Tavakoli, & Namin, 2019), Multi-Dimensional LSTM (Graves & Schmidhuber, 2009), Neural Turing Machines (Graves, Wayne, & Danihelka, 2014), [...] RNN and its various implementations (e.g., Bahdanau, Cho, & Bengio, 2014; Luong, Pham, & Manning, 2015), or Transformers (Vaswani et al., 2017).
Answer
Attention-Based









Flashcard 7794966531340

Tags
#feature-engineering #lstm #recurrent-neural-networks #rnn
Question
For natural language processing, an RNN would encode the sentence “A black cat jumped on the table” as a sequence of [...] vectors (x₁, x₂, …, x₇), where each word would be represented as a single non-zero value in a sparse vector.
Answer
seven
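
A minimal NumPy sketch of that sparse one-hot encoding, assuming the vocabulary is restricted to the seven words of the sentence itself:

```python
import numpy as np

sentence = "A black cat jumped on the table".lower().split()
vocab = sorted(set(sentence))                 # 7 distinct words here
index = {word: i for i, word in enumerate(vocab)}

# One vector per time step; a single non-zero entry marks the word.
vectors = np.zeros((len(sentence), len(vocab)))
for t, word in enumerate(sentence):
    vectors[t, index[word]] = 1.0

print(vectors.shape)  # (7, 7): seven sparse vectors x1 ... x7
```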









Flashcard 7794968890636

Tags
#deep-learning #keras #lstm #python #sequence
Question
Time steps. These are the [...] for a feature, such as lag variables.
Answer
past observations
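
A small sketch of turning lag variables into time steps and reshaping them into the [samples, time steps, features] layout that Keras LSTMs expect (the toy series and the 3-step window are arbitrary choices):

```python
import numpy as np

series = np.arange(10, dtype=float)  # a toy series of 10 observations
n_steps = 3                          # use 3 past observations as lags

# Each sample is a window of past observations; the target is the
# observation that follows the window.
X = np.array([series[i:i + n_steps] for i in range(len(series) - n_steps)])
y = series[n_steps:]

# Keras LSTMs expect input shaped [samples, time steps, features].
X = X.reshape((X.shape[0], n_steps, 1))
print(X.shape, y.shape)  # (7, 3, 1) (7,)
```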









Flashcard 7794970725644

Tags
#deep-learning #keras #lstm #python #sequence
Question
Increasing the depth of the network provides an alternative solution that requires fewer neurons and trains [...]
Answer
faster
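
A hedged Keras sketch of trading width for depth: two stacked LSTM layers with modest unit counts instead of a single much wider layer (the unit counts and input shape are arbitrary):

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import LSTM, Dense

# Two stacked LSTM layers with 32 units each, rather than one much
# wider layer. return_sequences=True passes the full sequence of
# hidden states from the first layer on to the second.
model = Sequential([
    Input(shape=(3, 1)),              # [time steps, features]
    LSTM(32, return_sequences=True),
    LSTM(32),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```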









Flashcard 7794972560652

Tags
#tensorflow #tensorflow-certificate
Question

Preprocessing steps (preparing data for neural networks):

  1. Turn all data into numbers
  2. Make sure your tensors are in the right shape
  3. [...] features (normalize or standardize). Neural networks tend to prefer normalization.
Answer
Scale
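
A minimal scikit-learn sketch of step 3; in practice the scaler would be fit on training data only:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

# Normalization: rescale each feature into the [0, 1] range.
X_norm = MinMaxScaler().fit_transform(X)

# Standardization: zero mean and unit variance per feature.
X_std = StandardScaler().fit_transform(X)

print(X_norm.min(axis=0), X_norm.max(axis=0))  # [0. 0.] [1. 1.]
```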



Original toplevel document

TfC_01_FINAL_EXAMPLE.ipynb







#git #software-engineering

1. The golden rule: Atomic Commits

A commit should be "atomic", meaning it is the smallest possible unit of change that makes business or technical sense and does not break the application.

  • One commit = one logical task.

  • If you fix a bug in the login flow and at the same time correct a typo in the page footer, those should be two separate commits.

  • If you add a new feature that requires changes to the HTML, CSS, and JavaScript files, that should be a single commit (because those changes are inseparably linked).


Commity w git




Flashcard 7794976492812

Tags
#ML-engineering #ML_in_Action #learning #machine #software-engineering
Question
In the experimentation phase, the largest causes of project failure include the experimentation taking [...] (testing too many things or spending too long fine-tuning an approach)
Answer
too long

