Edited, memorised or added to reading queue

on 07-Jan-2018 (Sun)


#linear-algebra #matrix-decomposition
In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful e.g. for efficient numerical solutions and Monte Carlo simulations.

Cholesky decomposition - Wikipedia
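
As a quick numerical illustration of the decomposition described above (an added sketch, not part of the article; the matrix is an arbitrary Hermitian positive-definite example and NumPy's np.linalg.cholesky is used):

import numpy as np

# Build an arbitrary Hermitian positive-definite matrix A = B B* + 4I.
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = B @ B.conj().T + 4 * np.eye(4)

L = np.linalg.cholesky(A)               # lower triangular, positive real diagonal
assert np.allclose(L @ L.conj().T, A)   # A = L L*

# Monte Carlo use: x = L z turns i.i.d. standard normals z into samples
# with covariance C (shown here for a real covariance matrix).
C = np.array([[2.0, 0.6], [0.6, 1.0]])
Lc = np.linalg.cholesky(C)
samples = (Lc @ rng.standard_normal((2, 100000))).T
print(np.cov(samples.T))                # approximately reproduces C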




#matrix-decomposition

Hermitian matrices can be understood as the complex extension of real symmetric matrices.


Hermitian matrix - Wikipedia
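
A short check of this statement (an added illustration, not from the article), using arbitrary 2x2 examples:

import numpy as np

# A Hermitian matrix equals its own conjugate transpose ...
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
print(np.allclose(A, A.conj().T))   # True

# ... and a real symmetric matrix is the special case with no imaginary part.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.allclose(S, S.conj().T))   # True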




#kalman-filter
Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe.

Kalman filter - Wikipedia
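
A minimal one-dimensional sketch of the idea (an added illustration; the constant-state model, noise variances and data below are invented for the example, not taken from the article):

import numpy as np

# Estimate a constant scalar from noisy measurements with a 1-D Kalman filter.
rng = np.random.default_rng(0)
true_value = 5.0
measurements = true_value + rng.normal(0.0, 2.0, size=50)   # noisy observations

x, P = 0.0, 100.0      # initial estimate and its (deliberately large) variance
Q, R = 1e-5, 4.0       # assumed process and measurement noise variances

for z in measurements:
    # Predict: the state is modelled as constant, so only the uncertainty grows.
    P = P + Q
    # Update: blend prediction and measurement according to the Kalman gain.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P

print("estimate %.3f (true %.1f), posterior variance %.4f" % (x, true_value, P))

After 50 updates the posterior variance is roughly R/50, far smaller than the variance of any single measurement, which is the "more accurate than those based on a single measurement alone" property described in the excerpt.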




"The differentiation of computational graphs that used the Cholesky decomposition required a new Op, which we contributed." (Matthews et al 2017:3)

"We also contributed code that enabled GPU solving of matrix triangular systems, which in some cases is the bottleneck for approximate inference." (Matthews et al 2017:3)

"Note that all the MCMC based inference methods support a Bayesian prior on the hyperparameters, whereas all other methods assume a point estimate. Our main MCMC method is Hamiltonian Monte Carlo (Neal, 2010)." (Matthews et al 2017:4)

"The object-oriented paradigm, whilst very natural for the Python layer of the code, has a dierent emphasis to that of a computational graph which is arguably closer to a functional concept. The largely functional computational graph, and a object-oriented interface need to live cleanly together in GP ow. This is achieved through the Param class that allows parameters to be both properties of an ob ject that can be manipulated as such and Variables in a TensorFlow graph that can be optimized." (Matthews et al 2017:4)





Since the objective function is unknown, the Bayesian strategy (of optimisation) is to treat it as a random function and place a prior over it. The prior captures our beliefs about the behaviour of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines what the next query point should be.


Bayesian optimization - Wikipedia
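
A minimal sketch of that loop (added here, not from the article): a GP posterior with a fixed RBF kernel plays the role of the prior/posterior over the objective, and expected improvement is used as the acquisition function. The toy objective, kernel hyperparameters and candidate grid are all invented for the example.

import numpy as np
from scipy.stats import norm

# Black-box objective to maximise (known here only so we can evaluate it).
def f(x):
    return -np.sin(3.0 * x) - x ** 2 + 0.7 * x

def rbf(a, b, ls=0.5, var=1.0):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Standard zero-mean GP regression equations.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    cov = rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

grid = np.linspace(-1.0, 2.0, 400)     # candidate query points
X = np.array([-0.5, 1.5])              # initial function evaluations (the "data")
y = f(X)

for step in range(8):
    mu, sigma = gp_posterior(X, y, grid)
    # Expected improvement over the best evaluation seen so far.
    best = y.max()
    z = (mu - best) / sigma
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]       # the next query point
    X, y = np.append(X, x_next), np.append(y, f(x_next))

print("best query x = %.3f, f(x) = %.3f" % (X[np.argmax(y)], y.max()))

Each pass updates the posterior with the new evaluation and re-maximises the acquisition function, which is exactly the prior, posterior, acquisition, next-query cycle described above.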




#linear-algebra #matrix-decomposition

The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form

$$ \mathbf{A} = \mathbf{L}\mathbf{L}^{*}, $$

where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L.

Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition.[2] If the matrix A is Hermitian and positive semi-definite, then it still has a decomposition of the form A = LL* if the diagonal entries of L are allowed to be zero.[3]

When A has real entries, L has real entries as well, and the factorization may be written A = LLT.[4]

The Cholesky decomposition is unique when A is positive definite; there is only one lower triangular matrix L with strictly positive diagonal entries such that A = LL*. However, the decomposition need not be unique when A is positive semidefinite.

The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite.


Cholesky decomposition - Wikipedia
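
As a small worked example (added here, not part of the excerpt), for a real symmetric positive-definite matrix the factorisation can be computed directly:

$$ A = \begin{pmatrix} 4 & 2 \\ 2 & 3 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 1 & \sqrt{2} \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 0 & \sqrt{2} \end{pmatrix} = L L^{T}, $$

with $\ell_{11} = \sqrt{a_{11}} = 2$, $\ell_{21} = a_{21}/\ell_{11} = 1$ and $\ell_{22} = \sqrt{a_{22} - \ell_{21}^{2}} = \sqrt{2}$. The diagonal of $L$ is strictly positive, so by the uniqueness statement above this is the only such factorisation.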




#matrix-decomposition
In numerical analysis and linear algebra, LU decomposition (where 'LU' stands for 'lower upper', and also called LU factorization) factors a matrix as the product of a lower triangular matrix and an upper triangular matrix.

LU decomposition - Wikipedia
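
A small illustration with SciPy (added, not from the article; scipy.linalg.lu returns the permutation explicitly):

import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
P, L, U = lu(A)                       # A = P @ L @ U
print(L)                              # lower triangular with unit diagonal
print(U)                              # upper triangular
assert np.allclose(P @ L @ U, A)      # the permutation matrix handles pivoting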




#matrix-decomposition
Computers usually solve square systems of linear equations using the LU decomposition, and it is also a key step when inverting a matrix, or computing the determinant of a matrix.

LU decomposition - Wikipedia
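
For example (an added sketch, not from the article), SciPy's lu_factor/lu_solve reuse one factorisation for solving, and the determinant follows from the diagonal of U together with the parity of the pivot permutation:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, 1.0, 2.0],
              [6.0, 3.0, 4.0],
              [3.0, 1.0, 5.0]])
b = np.array([0.0, 1.0, 3.0])

lu_piv = lu_factor(A)                  # factor once ...
x = lu_solve(lu_piv, b)                # ... then solve cheaply for any right-hand side
assert np.allclose(A @ x, b)

# det(A) = (+/-1) * prod(diag(U)); the sign is the parity of the row swaps.
lu_mat, piv = lu_piv
sign = (-1.0) ** np.count_nonzero(piv != np.arange(len(piv)))
assert np.isclose(sign * np.prod(np.diag(lu_mat)), np.linalg.det(A))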




Expectation propagation aims to minimise \( KL(p(x,\theta\mid y,m)\parallel q(x,\theta\mid m)) \). The problem with this minimisation is that \( p(x,\theta\mid y,m) \) is unknown, so the first step is to impose some structure on this distribution by making assumptions. For many models with IID observations, the posterior can be factorised as
$$ p(\theta \mid D) = \frac{1}{p(D)} p(\theta) \prod_{i=1}^{N} p(x^{(i)} \mid \theta), $$
which is a product of prior and likelihood, normalised by the marginal likelihood of the data. Ignoring the normalising constant, we can rewrite the posterior as
$$ p(\theta \mid D) \propto \prod_{i=0}^{N} f_i(\theta), $$
with \( f_0(\theta) = p(\theta) \) and \( f_i(\theta) = p(x^{(i)} \mid \theta) \) for \( i > 0 \). The goal is now to find a factorised approximation
$$ q(\theta) = \prod_{i=0}^{N} \tilde{f}_i(\theta) $$
that is close to the posterior.
EP achieves this with a Gibbs-sampling-like procedure, solving
$$ \min_{\tilde{f}_i(\theta)} KL \left( f_i(\theta) \prod_{j \neq i} \tilde{f}_j(\theta) \;\Big\|\; \tilde{f}_i(\theta) \prod_{j \neq i} \tilde{f}_j(\theta) \right) $$
and updating one factor at a time. The complete procedure is as follows (a minimal code sketch is given after the list):

  1. Initialise $f_0(\theta), \dots, f_N(\theta)$; set $\tilde{f}_0(\theta) = f_0(\theta)$ and $\tilde{f}_i(\theta) = 1$ for $i > 0$
  2. Compute $q(\theta) = \prod_{i=0}^{N} \tilde{f}_i(\theta)$
  3. For $i = 0, \dots, N$ do
    1. Deletion: remove the current factor from the approximation: $q_{\setminus i}(\theta) \leftarrow \frac{q(\theta)}{\tilde{f}_i(\theta)} = \prod_{j \neq i} \tilde{f}_j(\theta)$
    2. Projection: update the current factor by minimising the KL divergence $\min_{\tilde{f}_i(\theta)} KL \left( f_i(\theta)\, q_{\setminus i}(\theta) \parallel \tilde{f}_i(\theta)\, q_{\setminus i}(\theta) \right)$
    3. Inclusion: update $q(\theta)$ using the new factor: $q(\theta) \leftarrow \tilde{f}_i^{\,new}(\theta)\, q_{\setminus i}(\theta)$
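
The following is a minimal numerical sketch of this procedure for a toy model (added here; the probit likelihood $\Phi(y_i\theta)$ with a $N(0,1)$ prior and the moment-matching formulas for a Gaussian-times-probit tilted distribution are standard textbook choices, not taken from the notes above). Each Gaussian site $\tilde{f}_i$ is stored in natural parameters, and the deletion / projection / inclusion steps appear explicitly in the loop.

import numpy as np
from scipy.stats import norm

# Toy data: binary labels from a probit model with true theta = 1.2.
rng = np.random.default_rng(0)
y = np.where(rng.normal(1.2, 1.0, size=20) > 0, 1.0, -1.0)

# Prior N(0,1) is the exact factor f_0; each site i approximates Phi(y_i*theta)
# by an unnormalised Gaussian stored as natural parameters (tau_i, nu_i).
tau_sites = np.zeros(len(y))
nu_sites = np.zeros(len(y))
tau0, nu0 = 1.0, 0.0                    # prior precision and precision*mean

for sweep in range(10):                 # a few passes over all factors
    for i in range(len(y)):
        tau = tau0 + tau_sites.sum()    # q(theta) = N(nu/tau, 1/tau)
        nu = nu0 + nu_sites.sum()

        # Deletion: cavity q_{\i} = q / site_i.
        tau_cav, nu_cav = tau - tau_sites[i], nu - nu_sites[i]
        m_cav, v_cav = nu_cav / tau_cav, 1.0 / tau_cav

        # Projection: moment-match Phi(y_i*theta) * q_{\i}(theta) with a Gaussian.
        z = y[i] * m_cav / np.sqrt(1.0 + v_cav)
        r = norm.pdf(z) / norm.cdf(z)
        m_new = m_cav + y[i] * v_cav * r / np.sqrt(1.0 + v_cav)
        v_new = v_cav - v_cav ** 2 * r * (z + r) / (1.0 + v_cav)

        # Inclusion: the new site is (matched Gaussian) / (cavity).
        tau_sites[i] = 1.0 / v_new - tau_cav
        nu_sites[i] = m_new / v_new - nu_cav

tau, nu = tau0 + tau_sites.sum(), nu0 + nu_sites.sum()
print("EP approximate posterior: mean %.3f, variance %.3f" % (nu / tau, 1.0 / tau))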





EP finds approximations to a probability distribution. It uses an iterative approach that leverages the factorization structure of the target distribution.

Expectation propagation - Wikipedia




VB generally requires more iterations; however, each iteration is sometimes much cheaper than an EP iteration.

bayesian - variational bayes vs expectation propagation - Cross Validated




#variational-inference
Variational Bayes can be seen as an extension of the EM (expectation-maximization) algorithm from maximum a posteriori estimation (MAP estimation) of the single most probable value of each parameter to fully Bayesian estimation which computes (an approximation to) the entire posterior distribution of the parameters and latent variables.

Variational Bayesian methods - Wikipedia
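
As an added illustration of this difference (not from the article), the standard mean-field example of inferring the mean and precision of a Gaussian shows VB returning approximate distributions over both parameters rather than a single MAP value. The priors, their hyperparameters and the data below are illustrative choices.

import numpy as np

# Mean-field VB for the mean mu and precision tau of a Gaussian, with priors
#   mu | tau ~ N(mu0, 1/(lambda0*tau)),   tau ~ Gamma(a0, b0).
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, size=200)             # data with true mean 2.0, std 1.5
N, xbar = len(x), x.mean()

mu0, lambda0, a0, b0 = 0.0, 1.0, 1e-3, 1e-3    # weak priors (illustrative)

E_tau = 1.0                                    # initial guess for E_q[tau]
for _ in range(50):                            # coordinate updates until convergence
    # q(mu) = N(mu_N, 1/lambda_N)
    mu_N = (lambda0 * mu0 + N * xbar) / (lambda0 + N)
    lambda_N = (lambda0 + N) * E_tau
    # q(tau) = Gamma(a_N, b_N), using E_q[(x_i - mu)^2] = (x_i - mu_N)^2 + 1/lambda_N
    a_N = a0 + (N + 1) / 2.0
    b_N = b0 + 0.5 * (np.sum((x - mu_N) ** 2) + N / lambda_N
                      + lambda0 * ((mu_N - mu0) ** 2 + 1.0 / lambda_N))
    E_tau = a_N / b_N

print("E[mu]  = %.3f (sample mean    %.3f)" % (mu_N, xbar))
print("E[tau] = %.3f (1/sample var   %.3f)" % (E_tau, 1.0 / x.var()))

An EM/MAP treatment would return only point estimates of mu and tau; the VB result additionally carries their uncertainty (the variance 1/lambda_N for the mean and the spread of Gamma(a_N, b_N) for the precision).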