Edited, memorised or added to reading list

on 10-Jan-2018 (Wed)


Flashcard 1729598983436

Question
A trend stationary process is not strictly stationary, but can easily be transformed into a stationary process by [...], which is solely a function of time.
Answer
removing the underlying trend

This operation sounds too frequentist: the uncertainty related to the estimated trend is simply imposed on the trend-removed process rather than propagated. Refer to Jaynes later.



Parent (intermediate) annotation

A trend stationary process is not strictly stationary, but can easily be transformed into a stationary process by removing the underlying trend, which is solely a function of time.

Original toplevel document

Stationary process - Wikipedia
[…] In the latter case of a deterministic trend, the process is called a trend stationary process, and stochastic shocks have only transitory effects after which the variable tends toward a deterministically evolving (non-constant) mean. A trend stationary process is not strictly stationary, but can easily be transformed into a stationary process by removing the underlying trend, which is solely a function of time. Similarly, processes with one or more unit roots can be made stationary through differencing. An important type of non-stationary process that does not include a trend-like behavior is […]
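As a concrete sketch of the detrending operation (illustrative only: the linear trend, noise level, and sample size below are arbitrary choices, not taken from the source), one can fit the deterministic trend by least squares and subtract it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a trend stationary process: deterministic linear trend + stationary noise.
n = 200
t = np.arange(n)
y = 0.5 * t + rng.normal(scale=1.0, size=n)

# Estimate the deterministic trend (a function of time only) and remove it.
coeffs = np.polyfit(t, y, deg=1)   # least-squares fit of b*t + a
trend = np.polyval(coeffs, t)
residual = y - trend               # approximately stationary
```

Subtracting the fitted trend leaves the stationary noise component; as the note above points out, the uncertainty in the fitted trend is not carried over to `residual`.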







Flashcard 1729609469196

Tags
#gaussian-process
Question
Importantly, a complicated covariance function can be defined as a [...] of other simpler covariance functions in order to incorporate different insights about the data-set at hand.
Answer
linear combination

Perhaps more than linear combinations are possible; for example, products of valid covariance functions are also valid covariance functions.



Parent (intermediate) annotation

Importantly, a complicated covariance function can be defined as a linear combination of other simpler covariance functions in order to incorporate different insights about the data-set at hand.

Original toplevel document

Gaussian process - Wikipedia
[…] where ν is a parameter and Γ(ν) is the gamma function evaluated at ν. Importantly, a complicated covariance function can be defined as a linear combination of other simpler covariance functions in order to incorporate different insights about the data-set at hand. Clearly, the inferential results are dependent on the values of the hyperparameters θ (e.g. ℓ and σ) defining the model's behaviour. A popular choice for θ is to provide maximum a posteriori estimates […]
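A minimal sketch of this idea (the kernel forms, weights, and hyperparameters below are arbitrary illustrative choices): a weighted sum of simpler kernels with non-negative weights is again a valid covariance function, which can be checked numerically by confirming the resulting Gram matrix is symmetric positive semi-definite.

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0):
    # Squared-exponential kernel: smooth long-range structure.
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def periodic(x1, x2, period=1.0, lengthscale=1.0):
    # Periodic kernel: repeating structure with the given period.
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

def composite(x1, x2):
    # Linear combination with non-negative weights: still a valid kernel.
    return 0.7 * rbf(x1, x2, lengthscale=2.0) + 0.3 * periodic(x1, x2, period=0.5)

x = np.linspace(0.0, 5.0, 50)
K = composite(x, x)

# Validity check: a covariance matrix must be symmetric positive semi-definite
# (up to floating-point error in the eigenvalue computation).
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-8
```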







Flashcard 1731515518220

Question
Novelist Adam Langer [...] the publishing trade — and some of its recent trends — while digging toward something deeper.

fasten together or pierce with a skewer; (informal) subject to sharp criticism or critical analysis
Answer
skewers



Parent (intermediate) annotation

Novelist Adam Langer skewers the publishing trade — and some of its recent trends — while digging toward something deeper.

Original toplevel document

Book review: 'The Thieves of Manhattan' by Adam Langer - latimes
[…] Novelist Adam Langer skewers the publishing trade — and some of its recent trends — while digging toward something deeper. July 18, 2010 | By Ella Taylor, Special to the Los Angeles Times. The Thieves of Manhattan: A Novel. Adam Langer. Spiegel & Grau: 260 pp., $15 paper.







Specifically, Probabilistic PCA assumes that each latent variable is normally distributed, \(\mathbf{z}_n \sim N(\mathbf{0}, \mathbf{I})\).
The corresponding data point is generated via a projection \( \mathbf{x}_n \mid \mathbf{z}_n \sim N(\mathbf{W}\mathbf{z}_n, \sigma^2\mathbf{I}) \), where the matrix \( \mathbf{W} \in \mathbb{R}^{D\times K} \) is known as the principal axes.


Edward – Probabilistic PCA
[…] We aim to represent each \( \mathbf{x}_n \in \mathbb{R}^D \) under a latent variable \( \mathbf{z}_n \in \mathbb{R}^K \) with lower dimension, \( K < D \); a matrix \( \mathbf{W} \) relates the latent variables to the data. Specifically, we assume that each latent variable is normally distributed, \( \mathbf{z}_n \sim N(\mathbf{0}, \mathbf{I}) \). The corresponding data point is generated via a projection, \( \mathbf{x}_n \mid \mathbf{z}_n \sim N(\mathbf{W}\mathbf{z}_n, \sigma^2\mathbf{I}) \), where the matrix \( \mathbf{W} \in \mathbb{R}^{D\times K} \) is known as the principal axes. In probabilistic PCA, we are typically interested in estimating the principal axes W […]
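The generative model above can be simulated directly (a sketch; the dimensions D, K, sample size, and noise level are arbitrary choices). Marginalising out z gives x ~ N(0, W Wᵀ + σ²I), so the empirical covariance of simulated data should approach that model covariance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative dimensions and noise level.
D, K, N, sigma = 5, 2, 5000, 0.1
W = rng.normal(size=(D, K))        # principal axes

# z_n ~ N(0, I_K);  x_n | z_n ~ N(W z_n, sigma^2 I_D)
Z = rng.normal(size=(N, K))
X = Z @ W.T + sigma * rng.normal(size=(N, D))

# Implied marginal covariance of x versus the empirical covariance.
C_model = W @ W.T + sigma ** 2 * np.eye(D)
C_empirical = np.cov(X, rowvar=False)
```

For large N the entrywise difference between `C_empirical` and `C_model` shrinks toward zero.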




The error covariance structure in PPCA is \( \sigma^2 \mathbf{I} \) and in FA it is an arbitrary diagonal matrix \( \boldsymbol{\Psi} \).
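The contrast can be written out concretely (a sketch; the loadings, σ², and diagonal entries below are arbitrary illustrative values): both models share the low-rank term W Wᵀ and differ only in the noise term added to it.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 4, 2
W = rng.normal(size=(D, K))                   # shared low-rank loadings

sigma2 = 0.25
cov_ppca = W @ W.T + sigma2 * np.eye(D)       # PPCA: isotropic noise sigma^2 * I

Psi = np.diag([0.1, 0.4, 0.9, 0.25])          # FA: arbitrary diagonal Psi
cov_fa = W @ W.T + Psi
```

PPCA's noise model is thus the special case of FA's in which all diagonal entries of Ψ are equal.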





#spectral-theorem #stochastics

In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem,[1][2] is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval.


Karhunen–Loève theorem - Wikipedia
In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem,[1][2] is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. The transformation is also known as the Hotelling Transform and Eigenvector Transform, and is closely related to the Principal Component Analysis (PCA) technique widely used in image processing […]
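A numerical sketch of the theorem for the standard example of Brownian motion on [0, 1] (the grid size and truncation level are arbitrary choices): discretise the covariance K(s, t) = min(s, t), take the eigendecomposition, and compare against the known eigenvalues λₖ = 1/((k − ½)²π²) of the covariance operator.

```python
import numpy as np

# Discretise the Brownian-motion covariance K(s, t) = min(s, t) on [0, 1].
n = 500
t = np.arange(1, n + 1) / n
K = np.minimum.outer(t, t)

# Eigenvalues of the covariance *operator*: scale the matrix by the grid spacing 1/n.
eigvals = np.linalg.eigvalsh(K / n)[::-1]     # descending order

# Known closed form for Brownian motion: lambda_k = 1 / ((k - 1/2)^2 * pi^2).
k = np.arange(1, 6)
exact = 1.0 / ((k - 0.5) ** 2 * np.pi ** 2)
```

The leading discretised eigenvalues match the closed form up to O(1/n) discretisation error; the corresponding eigenvectors approximate the orthogonal basis functions of the expansion.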




#fourier-analysis

When the function space has an interval as the domain, the bilinear form may be the integral of the product of functions over the interval:

\( \langle f, g\rangle = \int f(x)\,g(x)\,dx. \)

The functions f and g are orthogonal when this integral is zero: \( \langle f, g\rangle = 0 \). As with a basis of vectors in a finite-dimensional space, orthogonal functions can form an infinite basis for a function space.


Orthogonal functions - Wikipedia
In mathematics, orthogonal functions belong to a function space which is a vector space (usually over R) that has a bilinear form. When the function space has an interval as the domain, the bilinear form may be the integral of the product of functions over the interval: \( \langle f,g\rangle = \int f(x)\,g(x)\,dx \). The functions f and g are orthogonal when this integral is zero: \( \langle f,\ g\rangle = 0 \). As with a basis of vectors in a finite-dimensional space, orthogonal functions can form an infinite basis for a function space. Suppose {fₙ}, n = 0, 1, 2, … is a sequence of orthogonal functions. If fₙ has positive support then […]
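A quick numerical check of this definition for the classic Fourier case (the interval [0, π], the grid size, and the sine functions are illustrative choices): sin(mx) and sin(nx) are orthogonal on [0, π] for m ≠ n.

```python
import numpy as np

# Inner product <f, g> = integral of f(x) g(x) over [0, pi],
# approximated by the trapezoidal rule on a fine grid.
x = np.linspace(0.0, np.pi, 10001)

def inner(f, g):
    y = f(x) * g(x)
    return np.sum((y[:-1] + y[1:]) / 2.0 * np.diff(x))

def f1(t):
    return np.sin(t)

def f2(t):
    return np.sin(2.0 * t)
```

Here `inner(f1, f2)` is (numerically) zero, so sin(x) and sin(2x) are orthogonal, while `inner(f1, f1)` gives the squared norm π/2.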




Marcus Aurelius (/ɔːˈriːliəs/; Latin: Marcus Aurelius Antoninus Augustus;[6][notes 1][9] 26 April 121 – 17 March 180 AD) was Roman emperor from 161 to 180.


Marcus Aurelius - Wikipedia
[…] Marcus Aurelius (/ɔːˈriːliəs/; Latin: Marcus Aurelius Antoninus Augustus;[6][notes 1][9] 26 April 121 – 17 March 180 AD) was Roman emperor from 161 to 180, ruling jointly with Lucius Verus until Verus' death in 169 and jointly with his son, Commodus, from 177. He was the last of the so-called Five Good Emperors. He was a practitioner of […]




Flashcard 1731595996428

Question
Marcus Aurelius was Roman emperor from [...] to [...]
Answer
161 to 180



Parent (intermediate) annotation

Marcus Aurelius (/ɔːˈriːliəs/; Latin: Marcus Aurelius Antoninus Augustus;[6][notes 1][9] 26 April 121 – 17 March 180 AD) was Roman emperor from 161 to 180.

Original toplevel document

Marcus Aurelius - Wikipedia







The polar decomposition of a square complex matrix A is a matrix decomposition of the form

\( A = UP \)

where U is a unitary matrix and P is a positive-semidefinite Hermitian matrix.[1] Intuitively, the polar decomposition separates A into a component that stretches the space along a set of orthogonal axes, represented by P, and a rotation (with possible reflection) represented by U. The decomposition of the complex conjugate of A is given by \( \overline{A} = \overline{U}\,\overline{P} \).


Polar decomposition - Wikipedia
[…] The polar decomposition of a square complex matrix A is a matrix decomposition of the form \( A = UP \), where U is a unitary matrix and P is a positive-semidefinite Hermitian matrix.[1] Intuitively, the polar decomposition separates A into a component that stretches the space along a set of orthogonal axes, represented by P, and a rotation (with possible reflection) represented by U. The decomposition of the complex conjugate of A is given by \( \overline{A} = \overline{U}\,\overline{P} \). This decomposition always exists; and so long as A is invertible, it is unique, with P positive-definite. […]
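The decomposition can be computed from the SVD (a sketch on a random 3×3 complex matrix; the size and seed are arbitrary): from A = W S V*, take U = W V* (unitary) and P = V S V* (positive-semidefinite Hermitian), and then U P = W S V* = A.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# SVD: A = W @ diag(s) @ Vh, with s >= 0.
W, s, Vh = np.linalg.svd(A)
U = W @ Vh                          # unitary factor (rotation with possible reflection)
P = Vh.conj().T @ np.diag(s) @ Vh   # PSD Hermitian factor (stretch along orthogonal axes)

assert np.allclose(U @ P, A)                   # A = U P
assert np.allclose(U.conj().T @ U, np.eye(3))  # U is unitary
assert np.allclose(P, P.conj().T)              # P is Hermitian
```

Since the eigenvalues of P are exactly the singular values of A, P is positive-semidefinite by construction (and positive-definite when A is invertible).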




The fourth page of the booklets provided six five-point rating scales on which participants were asked to rate the best answer to the following questions: (1) How good are you at learning languages? (1 = very good – 5 = very bad). (2) How interesting do you find learning languages? (1 = very interesting – 5 = very uninteresting). (3) How easy did you find this method of learning languages? (1 = very easy – 5 = very hard). (4) How quick did you find this method of learning languages? (1 = very quick – 5 = very slow). (5) How enjoyable did you find this method of learning languages? (1 = very enjoyable – 5 = very boring). (6) How useful did you find this method of learning languages? (1 = very useful – 5 = useless).

