# on 28-Mar-2018 (Wed)

#### Annotation 1729501203724

 #multivariate-normal-distribution One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution.

Multivariate normal distribution - Wikipedia
In probability theory and statistics, the multivariate normal distribution or multivariate Gaussian distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. <span>One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly)
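A quick numerical sketch of this definition (all values illustrative, not from the source): draw samples from a multivariate normal and check that an arbitrary linear combination a·X behaves like a univariate normal with mean a·μ and variance aᵀΣa.

```python
# Sketch: a linear combination a.X of X ~ N(mu, Sigma) is univariate normal
# with mean a.mu and variance a' Sigma a. Values below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 0.5]])
a = np.array([0.5, -1.0, 2.0])           # an arbitrary linear combination

X = rng.multivariate_normal(mu, Sigma, size=200_000)
y = X @ a                                 # the scalar combination a.X

print(y.mean(), a @ mu)                   # sample mean vs. a.mu
print(y.var(), a @ Sigma @ a)             # sample variance vs. a' Sigma a
```

The sample moments should match the predicted ones up to Monte Carlo error.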

#### Annotation 1729503300876

 #multivariate-normal-distribution The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value

Multivariate normal distribution - Wikipedia
e definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. <span>The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value. Contents [hide] 1 Notation and parametrization 2 Definition 3 Properties 3.1 Density function 3.1.1 Non-degenerate case 3.1.2 Degenerate case 3.2 Higher moments 3.3 Lik

#### Annotation 1729504873740

 #multivariate-normal-distribution The mutual information of a distribution is a special case of the Kullback–Leibler divergence in which P is the full multivariate distribution and Q is the product of the 1-dimensional marginal distributions

Multivariate normal distribution - Wikipedia
D KL ( CN 0 ∥ CN 1 ) = tr(Σ 1 −1 Σ 0 ) − k + ln(|Σ 1 |/|Σ 0 |). Mutual information[edit source] <span>The mutual information of a distribution is a special case of the Kullback–Leibler divergence in which P is the full multivariate distribution and Q is the product of the 1-dimensional marginal distributions. In the notation of the Kullback–Leibler divergence section of this article, Σ 1

#### Annotation 1729506970892

 #multivariate-normal-distribution In the bivariate case the expression for the mutual information is: I(x; y) = −(1/2) ln(1 − ρ 2 ).

Multivariate normal distribution - Wikipedia
ρ 0 is the correlation matrix constructed from Σ 0 . <span>In the bivariate case the expression for the mutual information is: I(x; y) = −(1/2) ln(1 − ρ 2 ). Cumulative distribution function[edit source] The notion of cumulative distribution function (cdf) in dimension 1 can be extended in two ways to the multidimensional case, based
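A small check tying the two annotations above together (values illustrative): for a standard bivariate normal with correlation ρ, the mutual information −½ ln(1 − ρ²) should equal the closed-form Gaussian KL divergence between the joint N(0, Σ) and the product of its marginals N(0, I).

```python
# Sketch: mutual information of a bivariate Gaussian as KL(joint || product
# of marginals), using the closed-form KL between zero-mean Gaussians:
# KL = 1/2 (tr(S1^-1 S0) - k + ln(|S1|/|S0|)).
import numpy as np

rho = 0.6
Sigma0 = np.array([[1.0, rho], [rho, 1.0]])   # joint covariance (P)
Sigma1 = np.eye(2)                            # product of marginals (Q)
k = 2

kl = 0.5 * (np.trace(np.linalg.inv(Sigma1) @ Sigma0) - k
            + np.log(np.linalg.det(Sigma1) / np.linalg.det(Sigma0)))
mi = -0.5 * np.log(1 - rho**2)
print(kl, mi)   # the two agree exactly
```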

#### Annotation 1729523485964

 #multivariate-normal-distribution Conditional distributions If N-dimensional x is partitioned as x = [x1; x2] and accordingly μ and Σ are partitioned, then the distribution of x1 conditional on x2 = a is multivariate normal, (x1 | x2 = a) ~ N( μ̄ , Σ̄ ), where μ̄ = μ1 + Σ12Σ22−1(a − μ2) and covariance matrix Σ̄ = Σ11 − Σ12Σ22−1Σ21. This matrix is the Schur complement of Σ22 in Σ. This means that to calculate the conditional covariance matrix, one inverts the overall covariance matrix, drops the rows and columns corresponding to the variables being conditioned upon, and then inverts back to get the conditional covariance matrix. Here Σ22−1 is the generalized inverse of Σ22. Note that knowing that x2 = a alters the variance, though the new variance does not depend on the specific value of a; perhaps more surprisingly, the mean is shifted by Σ12Σ22−1(a − μ2); compare this with the situation of not knowing the value of a, in which case x1 would have distribution Nq(μ1, Σ11). An interesting fact derived in order to prove this result is that the random vectors x2 and y1 = x1 − Σ12Σ22−1x2 are independent. The matrix Σ12Σ22−1 is known as the matrix of regression coefficients.

Multivariate normal distribution - Wikipedia
y two or more of its components that are pairwise independent are independent. But, as pointed out just above, it is not true that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent. <span>Conditional distributions[edit source] If N-dimensional x is partitioned as x = [x 1 ; x 2 ] with sizes [q × 1; (N − q) × 1], and accordingly μ and Σ are partitioned as μ = [μ 1 ; μ 2 ] with sizes [q × 1; (N − q) × 1] and Σ = [Σ 11 , Σ 12 ; Σ 21 , Σ 22 ] with sizes [q × q, q × (N − q); (N − q) × q, (N − q) × (N − q)], then the distribution of x 1 conditional on x 2 = a is multivariate normal, (x 1 | x 2 = a) ~ N( μ̄ , Σ̄ ), where μ̄ = μ 1 + Σ 12 Σ 22 −1 (a − μ 2 ) and covariance matrix Σ̄ = Σ 11 − Σ 12 Σ 22 −1 Σ 21 . [13] This matrix is the Schur complement of Σ 22 in Σ.
This means that to calculate the conditional covariance matrix, one inverts the overall covariance matrix, drops the rows and columns corresponding to the variables being conditioned upon, and then inverts back to get the conditional covariance matrix. Here Σ 22 −1 is the generalized inverse of Σ 22 . Note that knowing that x 2 = a alters the variance, though the new variance does not depend on the specific value of a; perhaps more surprisingly, the mean is shifted by Σ 12 Σ 22 −1 (a − μ 2 ); compare this with the situation of not knowing the value of a, in which case x 1 would have distribution N q (μ 1 , Σ 11 ). An interesting fact derived in order to prove this result is that the random vectors x 2 and y 1 = x 1 − Σ 12 Σ 22 −1 x 2 are independent. The matrix Σ 12 Σ 22 −1 is known as the matrix of regression coefficients. Bivariate case[edit source] In the bivariate case where x is partitioned into X 1 and X 2 , the conditional distribution of X 1 given X 2 is [14]

#### Annotation 1729525845260

 #multivariate-normal-distribution To obtain the marginal distribution over a subset of multivariate normal random variables, one only needs to drop the irrelevant variables (the variables that one wants to marginalize out) from the mean vector and the covariance matrix. The proof for this follows from the definitions of multivariate normal distributions and linear algebra.[16]

status not read

Multivariate normal distribution - Wikipedia
E(X 1 ∣ X 2 ) … and then using the properties of the expectation of a truncated normal distribution. Marginal distributions[edit source] <span>To obtain the marginal distribution over a subset of multivariate normal random variables, one only needs to drop the irrelevant variables (the variables that one wants to marginalize out) from the mean vector and the covariance matrix. The proof for this follows from the definitions of multivariate normal distributions and linear algebra. [16] Example Let X = [X 1 , X 2 , X 3 ] be multivariate normal random variables with mean vector μ = [μ 1 , μ 2 , μ 3 ] and covariance matrix Σ (standard parametrization for multivariate

#### Annotation 1729527942412

 #multivariate-normal-distribution If Y = c + BX is an affine transformation of X ∼ N(μ, Σ), where c is an M × 1 vector of constants and B is a constant M × N matrix, then Y has a multivariate normal distribution with expected value c + Bμ and variance BΣB T . Corollaries: sums of Gaussians are Gaussian, marginals of Gaussians are Gaussian.

status not read

Multivariate normal distribution - Wikipedia
Σ′ = [Σ 11 , Σ 13 ; Σ 31 , Σ 33 ]. Affine transformation[edit source] <span>If Y = c + BX is an affine transformation of X ∼ N(μ, Σ), where c is an M × 1 vector of constants and B is a constant M × N matrix, then Y has a multivariate normal distribution with expected value c + Bμ and variance BΣB T , i.e., Y ∼ N(c + Bμ, BΣB T ). In particular, any subset of the X i has a marginal distribution that is also multivariate normal. To see this, consider the following example: to extract the subset (X 1 , X 2 , X 4 )

#### Annotation 1729530039564

 #multivariate-normal-distribution The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean.

status not read

Multivariate normal distribution - Wikipedia
implies that the variance of the dot product must be positive. An affine transformation of X such as 2X is not the same as the sum of two independent realisations of X. Geometric interpretation[edit source] See also: Confidence region <span>The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean. [17] Hence the multivariate normal distribution is an example of the class of elliptical distributions. The directions of the principal axes of the ellipsoids are given by the eigenvec

#### Annotation 1729532136716

 #multivariate-normal-distribution The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix Σ. The squared relative lengths of the principal axes are given by the corresponding eigenvalues.

status not read

Multivariate normal distribution - Wikipedia
urs of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean. [17] Hence the multivariate normal distribution is an example of the class of elliptical distributions. <span>The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix Σ. The squared relative lengths of the principal axes are given by the corresponding eigenvalues. If Σ = UΛU T = UΛ 1/2 (UΛ 1/2 ) T is an eigendecomposition where the columns of U are unit eigenvectors and Λ is a diagonal matrix of the eigenvalues, then we have

#### Annotation 1729534496012

 #multivariate-normal-distribution The distribution N(μ, Σ) is in effect N(0, I) scaled by Λ 1/2 , rotated by U and translated by μ.

status not read

Multivariate normal distribution - Wikipedia
μ + U N(0, Λ). Moreover, U can be chosen to be a rotation matrix, as inverting an axis does not have any effect on N(0, Λ), but inverting a column changes the sign of U's determinant. <span>The distribution N(μ, Σ) is in effect N(0, I) scaled by Λ 1/2 , rotated by U and translated by μ. Conversely, any choice of μ, full rank matrix U, and positive diagonal entries Λ i yields a non-singular multivariate normal distribution. If any Λ i is zero and U is square, the re

#### Annotation 1729553632524

 #fourier-analysis In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution.

status not read

Characteristic function (probability theory) - Wikipedia
aracteristic function of a uniform U(–1,1) random variable. This function is real-valued because it corresponds to a random variable that is symmetric around the origin; however characteristic functions may generally be complex-valued. I<span>n probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function. Thus it provides the basis of

#### Annotation 1729555729676

 #fourier-analysis If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function.

status not read

Characteristic function (probability theory) - Wikipedia
c around the origin; however characteristic functions may generally be complex-valued. In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. <span>If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function. Thus it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There ar

#### Annotation 1729578011916

 #multivariate-normal-distribution In the bivariate case where x is partitioned into X1 and X2, the conditional distribution of X1 given X2 is X 1 ∣ X 2 = x 2 ∼ N( μ 1 + (σ 1 /σ 2 )ρ(x 2 − μ 2 ), (1 − ρ 2 )σ 1 2 ), where ρ is the correlation coefficient between X1 and X2.

status not read

Multivariate normal distribution - Wikipedia
y 1 = x 1 − Σ 12 Σ 22 −1 x 2 are independent. The matrix Σ 12 Σ 22 −1 is known as the matrix of regression coefficients. Bivariate case[edit source] <span>In the bivariate case where x is partitioned into X 1 and X 2 , the conditional distribution of X 1 given X 2 is [14] X 1 ∣ X 2 = x 2 ∼ N( μ 1 + (σ 1 /σ 2 )ρ(x 2 − μ 2 ), (1 − ρ 2 )σ 1 2 ), where ρ is the correlation coefficient between X 1 and X 2 . Bivariate conditional expectation[edit source] In the general case[edit source] (

#### Flashcard 1729607109900

Tags
#multivariate-normal-distribution
Question
In the bivariate case, the conditional mean of X1 given X2 is [...]
Answer
\( \mu_1 + \frac{\sigma_1}{\sigma_2} \rho (x_2 - \mu_2) \)

where ρ is the correlation coefficient between X1 and X2.
Apparently both the correlation and variance should play a part!
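The bivariate conditional mean and variance can indeed be checked against the general partitioned formulas (values below are illustrative, not from the source): Σ12Σ22⁻¹(a − μ2) reduces to (σ1/σ2)ρ(x2 − μ2), and the Schur complement Σ11 − Σ12Σ22⁻¹Σ21 reduces to (1 − ρ²)σ1².

```python
# Sketch: the general partitioned-Gaussian conditional formulas agree with
# the bivariate closed form. Illustrative parameter values.
import numpy as np

mu = np.array([1.0, 2.0])
s1, s2, rho = 1.5, 0.8, 0.4
Sigma = np.array([[s1**2,         rho * s1 * s2],
                  [rho * s1 * s2, s2**2        ]])
a = 2.5                                       # observed value of x2

# General formulas: mu1 + S12 S22^{-1} (a - mu2), S11 - S12 S22^{-1} S21
cond_mean = mu[0] + Sigma[0, 1] / Sigma[1, 1] * (a - mu[1])
cond_var = Sigma[0, 0] - Sigma[0, 1] ** 2 / Sigma[1, 1]

# Bivariate closed form: mu1 + (s1/s2) rho (x2 - mu2), (1 - rho^2) s1^2
biv_mean = mu[0] + (s1 / s2) * rho * (a - mu[1])
biv_var = (1 - rho**2) * s1**2
print(cond_mean, biv_mean)   # equal
print(cond_var, biv_var)     # equal
```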

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In the bivariate case where x is partitioned into X 1 and X 2 , the conditional distribution of X 1 given X 2 is X 1 ∣ X 2 = x 2 ∼ N( μ 1 + (σ 1 /σ 2 )ρ(x 2 − μ 2 ), (1 − ρ 2 )σ 1 2 ), where ρ is the correlation coefficient between X 1 and X 2 .

#### Original toplevel document

Multivariate normal distribution - Wikipedia
y 1 = x 1 − Σ 12 Σ 22 −1 x 2 are independent. The matrix Σ 12 Σ 22 −1 is known as the matrix of regression coefficients. Bivariate case[edit source] <span>In the bivariate case where x is partitioned into X 1 and X 2 , the conditional distribution of X 1 given X 2 is [14] X 1 ∣ X 2 = x 2 ∼ N( μ 1 + (σ 1 /σ 2 )ρ(x 2 − μ 2 ), (1 − ρ 2 )σ 1 2 ), where ρ is the correlation coefficient between X 1 and X 2 . Bivariate conditional expectation[edit source] In the general case[edit source] (

#### Flashcard 1729631751436

Tags
#fourier-analysis
Question
If a random variable admits a probability density function, then the [...] is the Fourier transform of the probability density function.
characteristic function

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function.

#### Original toplevel document

Characteristic function (probability theory) - Wikipedia
c around the origin; however characteristic functions may generally be complex-valued. In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. <span>If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function. Thus it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There ar
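A quick sanity check of this fact (parameters illustrative, not from the source): for X ~ N(μ, σ²) the characteristic function is φ(t) = exp(itμ − σ²t²/2), which a Monte Carlo estimate of E[e^{itX}] should reproduce.

```python
# Sketch: comparing the closed-form Gaussian characteristic function with a
# Monte Carlo estimate of E[exp(i t X)]. Illustrative parameter values.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, t = 0.5, 2.0, 0.7
x = rng.normal(mu, sigma, size=500_000)

phi_mc = np.exp(1j * t * x).mean()                   # empirical E[e^{itX}]
phi = np.exp(1j * t * mu - 0.5 * sigma**2 * t**2)    # closed form
print(abs(phi_mc - phi))                             # small
```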

#### Flashcard 1729633324300

Tags
#fourier-analysis
Question
the characteristic function of any real-valued random variable completely defines its [...].

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
n probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution.

#### Original toplevel document

Characteristic function (probability theory) - Wikipedia
aracteristic function of a uniform U(–1,1) random variable. This function is real-valued because it corresponds to a random variable that is symmetric around the origin; however characteristic functions may generally be complex-valued. I<span>n probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform of the probability density function. Thus it provides the basis of

#### Flashcard 1729664257292

Tags
#multivariate-normal-distribution
Question
The distribution N(μ, Σ) is in effect N(0, I) scaled by [...] , rotated by [...] and translated by [...] .
Λ1/2, U , μ.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The distribution N(μ, Σ) is in effect N(0, I) scaled by Λ 1/2 , rotated by U and translated by μ.

#### Original toplevel document

Multivariate normal distribution - Wikipedia
{\mu }}+\mathbf {U} {\mathcal {N}}(0,{\boldsymbol {\Lambda }}).} Moreover, U can be chosen to be a rotation matrix, as inverting an axis does not have any effect on N(0, Λ), but inverting a column changes the sign of U's determinant. <span>The distribution N(μ, Σ) is in effect N(0, I) scaled by Λ 1/2 , rotated by U and translated by μ. Conversely, any choice of μ, full rank matrix U, and positive diagonal entries Λ i yields a non-singular multivariate normal distribution. If any Λ i is zero and U is square, the re
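The scale-rotate-translate reading can be sketched numerically (values illustrative): with the eigendecomposition Σ = UΛUᵀ, samples μ + UΛ^{1/2}z for z ~ N(0, I) should have covariance Σ.

```python
# Sketch: sampling N(mu, Sigma) as mu + U Lambda^{1/2} z with z ~ N(0, I),
# i.e. scale by Lambda^{1/2}, rotate by U, translate by mu.
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

lam, U = np.linalg.eigh(Sigma)            # Sigma = U diag(lam) U^T
z = rng.standard_normal((100_000, 2))     # z ~ N(0, I)
X = mu + z @ (U * np.sqrt(lam)).T         # columns of U scaled by sqrt(lam)

print(np.cov(X.T))                        # approximately Sigma
```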

#### Flashcard 1729666616588

Tags
#multivariate-normal-distribution
Question
The directions of the principal axes of the ellipsoids are given by [...] of the covariance matrix Σ
the eigenvectors

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix Σ. The squared relative lengths of the principal axes are given by the corresponding eigenvalues.

#### Original toplevel document

Multivariate normal distribution - Wikipedia
urs of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean. [17] Hence the multivariate normal distribution is an example of the class of elliptical distributions. <span>The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix Σ. The squared relative lengths of the principal axes are given by the corresponding eigenvalues. If Σ = UΛU T = UΛ 1/2 (UΛ 1/2 ) T is an eigendecomposition where the columns of U are unit eigenvectors and Λ is a diagonal matrix of the eigenvalues, then we have

#### Flashcard 1729668189452

Tags
#multivariate-normal-distribution
Question
[...] of the principal axes are given by the corresponding eigenvalues.
The squared relative lengths

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix Σ. The squared relative lengths of the principal axes are given by the corresponding eigenvalues.

#### Original toplevel document

Multivariate normal distribution - Wikipedia
urs of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean. [17] Hence the multivariate normal distribution is an example of the class of elliptical distributions. <span>The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix Σ. The squared relative lengths of the principal axes are given by the corresponding eigenvalues. If Σ = UΛU T = UΛ 1/2 (UΛ 1/2 ) T is an eigendecomposition where the columns of U are unit eigenvectors and Λ is a diagonal matrix of the eigenvalues, then we have

#### Flashcard 1729669762316

Tags
#multivariate-normal-distribution
Question
The equidensity contours of a non-singular multivariate normal distribution are [...] centered at the mean.
ellipsoids

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean.

#### Original toplevel document

Multivariate normal distribution - Wikipedia
implies that the variance of the dot product must be positive. An affine transformation of X such as 2X is not the same as the sum of two independent realisations of X. Geometric interpretation[edit source] See also: Confidence region <span>The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean. [17] Hence the multivariate normal distribution is an example of the class of elliptical distributions. The directions of the principal axes of the ellipsoids are given by the eigenvec

#### Flashcard 1729672908044

Tags
#multivariate-normal-distribution
Question
If Y = c + BX is an affine transformation,
then Y has a multivariate normal distribution with expected value [...]
c + Bμ

Corollaries: sums of Gaussian are Gaussian, marginals of Gaussian are Gaussian.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
If Y = c + BX is an affine transformation of X ∼ N(μ, Σ), where c is an M × 1 vector of constants and B is a constant M × N matrix, then Y has a multivariate normal distribution with expected value c + Bμ and variance BΣB T . Corollaries: sums of Gaussians are Gaussian, marginals of Gaussians are Gaussian.

#### Original toplevel document

Multivariate normal distribution - Wikipedia
Σ′ = [Σ 11 , Σ 13 ; Σ 31 , Σ 33 ]. Affine transformation[edit source] <span>If Y = c + BX is an affine transformation of X ∼ N(μ, Σ), where c is an M × 1 vector of constants and B is a constant M × N matrix, then Y has a multivariate normal distribution with expected value c + Bμ and variance BΣB T , i.e., Y ∼ N(c + Bμ, BΣB T ). In particular, any subset of the X i has a marginal distribution that is also multivariate normal. To see this, consider the following example: to extract the subset (X 1 , X 2 , X 4 )
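A Monte Carlo sketch of the affine-transformation rule (values illustrative, not from the source): for Y = c + BX with X ~ N(μ, Σ), the sample mean of Y should approach c + Bμ and its sample covariance BΣBᵀ.

```python
# Sketch: if Y = c + B X with X ~ N(mu, Sigma), then Y ~ N(c + B mu, B Sigma B^T).
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[1.0, 0.2, 0.0],
                  [0.2, 2.0, 0.3],
                  [0.0, 0.3, 1.5]])
c = np.array([1.0, -1.0])
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])           # constant M x N matrix

X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = c + X @ B.T                            # affine transformation row-wise

print(Y.mean(axis=0), c + B @ mu)          # approximately equal
print(np.cov(Y.T))                         # approximately B Sigma B^T
```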

#### Flashcard 1729674480908

Tags
#multivariate-normal-distribution
Question
If Y = c + BX, then Y has variance [...]
BΣBT

Corollaries: sums of Gaussian are Gaussian, marginals of Gaussian are Gaussian.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
If Y = c + BX is an affine transformation of X ∼ N(μ, Σ), where c is an M × 1 vector of constants and B is a constant M × N matrix, then Y has a multivariate normal distribution with expected value c + Bμ and variance BΣB T . Corollaries: sums of Gaussians are Gaussian, marginals of Gaussians are Gaussian.

#### Original toplevel document

Multivariate normal distribution - Wikipedia
Σ′ = [Σ 11 , Σ 13 ; Σ 31 , Σ 33 ]. Affine transformation[edit source] <span>If Y = c + BX is an affine transformation of X ∼ N(μ, Σ), where c is an M × 1 vector of constants and B is a constant M × N matrix, then Y has a multivariate normal distribution with expected value c + Bμ and variance BΣB T , i.e., Y ∼ N(c + Bμ, BΣB T ). In particular, any subset of the X i has a marginal distribution that is also multivariate normal. To see this, consider the following example: to extract the subset (X 1 , X 2 , X 4 )

#### Flashcard 1729676053772

Tags
#multivariate-normal-distribution
Question

To obtain the marginal distribution over a subset of multivariate normal random variables, one only needs to [...]

drop the irrelevant variables

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
To obtain the marginal distribution over a subset of multivariate normal random variables, one only needs to drop the irrelevant variables (the variables that one wants to marginalize out) from the mean vector and the covariance matrix. The proof for this follows from the definitions of multivariate normal distributions an

#### Original toplevel document

Multivariate normal distribution - Wikipedia
E(X 1 ∣ X 2 ) … and then using the properties of the expectation of a truncated normal distribution. Marginal distributions[edit source] <span>To obtain the marginal distribution over a subset of multivariate normal random variables, one only needs to drop the irrelevant variables (the variables that one wants to marginalize out) from the mean vector and the covariance matrix. The proof for this follows from the definitions of multivariate normal distributions and linear algebra. [16] Example Let X = [X 1 , X 2 , X 3 ] be multivariate normal random variables with mean vector μ = [μ 1 , μ 2 , μ 3 ] and covariance matrix Σ (standard parametrization for multivariate
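The drop-the-variables rule can be sketched numerically (values illustrative, not from the source): keep the entries of μ and the rows/columns of Σ for the retained variables, and check the result against sample moments of the corresponding coordinates.

```python
# Sketch: the marginal of (X1, X3) from a trivariate normal is obtained by
# keeping entries 0 and 2 of mu and the matching rows/columns of Sigma.
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 2.0, 0.3],
                  [0.2, 0.3, 1.5]])
keep = [0, 2]                                  # marginalize X2 out

mu_marg = mu[keep]
Sigma_marg = Sigma[np.ix_(keep, keep)]         # drop row/column 1

X = rng.multivariate_normal(mu, Sigma, size=200_000)
print(X[:, keep].mean(axis=0), mu_marg)        # approximately equal
print(np.cov(X[:, keep].T))                    # approximately Sigma_marg
```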

#### Annotation 1729678413068

 #multivariate-normal-distribution the distribution of x1 conditional on x2 = a is multivariate normal (x1 | x2 = a) ~ N( μ̄ , Σ̄ ) where μ̄ = μ1 + Σ12Σ22−1(a − μ2) and covariance matrix Σ̄ = Σ11 − Σ12Σ22−1Σ21

#### Parent (intermediate) annotation

Open it
Conditional distributions If N-dimensional x is partitioned as follows and accordingly μ and Σ are partitioned as follows then the distribution of x 1 conditional on x 2 = a is multivariate normal (x 1 | x 2 = a) ~ N( μ̄ , Σ̄ ) where μ̄ = μ 1 + Σ 12 Σ 22 −1 (a − μ 2 ) and covariance matrix Σ̄ = Σ 11 − Σ 12 Σ 22 −1 Σ 21 . This matrix is the Schur complement of Σ 22 in Σ. This means that to calculate the conditional covariance matrix, one inverts the overall covariance matrix, drops t

#### Original toplevel document

Multivariate normal distribution - Wikipedia
y two or more of its components that are pairwise independent are independent. But, as pointed out just above, it is not true that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent. <span>Conditional distributions[edit source] If N-dimensional x is partitioned as x = [x 1 ; x 2 ] with sizes [q × 1; (N − q) × 1], and accordingly μ and Σ are partitioned as μ = [μ 1 ; μ 2 ] with sizes [q × 1; (N − q) × 1] and Σ = [Σ 11 , Σ 12 ; Σ 21 , Σ 22 ] with sizes [q × q, q × (N − q); (N − q) × q, (N − q) × (N − q)], then the distribution of x 1 conditional on x 2 = a is multivariate normal, (x 1 | x 2 = a) ~ N( μ̄ , Σ̄ ), where μ̄ = μ 1 + Σ 12 Σ 22 −1 (a − μ 2 ) and covariance matrix Σ̄ = Σ 11 − Σ 12 Σ 22 −1 Σ 21 . [13] This matrix is the Schur complement of Σ 22 in Σ.
This means that to calculate the conditional covariance matrix, one inverts the overall covariance matrix, drops the rows and columns corresponding to the variables being conditioned upon, and then inverts back to get the conditional covariance matrix. Here Σ 22 −1 is the generalized inverse of Σ 22 . Note that knowing that x 2 = a alters the variance, though the new variance does not depend on the specific value of a; perhaps more surprisingly, the mean is shifted by Σ 12 Σ 22 −1 (a − μ 2 ); compare this with the situation of not knowing the value of a, in which case x 1 would have distribution N q (μ 1 , Σ 11 ). An interesting fact derived in order to prove this result is that the random vectors x 2 and y 1 = x 1 − Σ 12 Σ 22 −1 x 2 are independent. The matrix Σ 12 Σ 22 −1 is known as the matrix of regression coefficients. Bivariate case[edit source] In the bivariate case where x is partitioned into X 1 and X 2 , the conditional distribution of X 1 given X 2 is [14]

#### Flashcard 1729680772364

Tags
#multivariate-normal-distribution
Question

the distribution of x1 conditional on x2 = a is multivariate normal (x1 | x2 = a) ~ N( μ̄ , Σ̄ ) where μ̄ [...] and covariance matrix Σ̄ [...]

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
the distribution of x 1 conditional on x 2 = a is multivariate normal (x 1 | x 2 = a) ~ N( μ̄ , Σ̄ ) where μ̄ = μ1 + Σ12 Σ22^{-1} (a − μ2) and covariance matrix Σ̄ = Σ11 − Σ12 Σ22^{-1} Σ21

#### Original toplevel document

Multivariate normal distribution - Wikipedia
y two or more of its components that are pairwise independent are independent. But, as pointed out just above, it is not true that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent. Conditional distributions: If N-dimensional x is partitioned as $\mathbf{x} = \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \end{bmatrix}$ with sizes $\begin{bmatrix} q \times 1 \\ (N-q) \times 1 \end{bmatrix}$, and accordingly μ and Σ are partitioned as $\boldsymbol{\mu} = \begin{bmatrix} \boldsymbol{\mu}_1 \\ \boldsymbol{\mu}_2 \end{bmatrix}$ with sizes $\begin{bmatrix} q \times 1 \\ (N-q) \times 1 \end{bmatrix}$ and $\boldsymbol{\Sigma} = \begin{bmatrix} \boldsymbol{\Sigma}_{11} & \boldsymbol{\Sigma}_{12} \\ \boldsymbol{\Sigma}_{21} & \boldsymbol{\Sigma}_{22} \end{bmatrix}$ with sizes $\begin{bmatrix} q \times q & q \times (N-q) \\ (N-q) \times q & (N-q) \times (N-q) \end{bmatrix}$, then the distribution of x1 conditional on x2 = a is multivariate normal, (x1 | x2 = a) ~ N(μ̄, Σ̄), where $\bar{\boldsymbol{\mu}} = \boldsymbol{\mu}_1 + \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} (\mathbf{a} - \boldsymbol{\mu}_2)$ and covariance matrix $\overline{\boldsymbol{\Sigma}} = \boldsymbol{\Sigma}_{11} - \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} \boldsymbol{\Sigma}_{21}$. [13] This matrix is the Schur complement of Σ22 in Σ.
This means that to calculate the conditional covariance matrix, one inverts the overall covariance matrix, drops the rows and columns corresponding to the variables being conditioned upon, and then inverts back to get the conditional covariance matrix. Here $\boldsymbol{\Sigma}_{22}^{-1}$ is the generalized inverse of $\boldsymbol{\Sigma}_{22}$. Note that knowing that x2 = a alters the variance, though the new variance does not depend on the specific value of a; perhaps more surprisingly, the mean is shifted by $\boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} (\mathbf{a} - \boldsymbol{\mu}_2)$; compare this with the situation of not knowing the value of a, in which case x1 would have distribution $\mathcal{N}_q(\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_{11})$. An interesting fact derived in order to prove this result is that the random vectors $\mathbf{x}_2$ and $\mathbf{y}_1 = \mathbf{x}_1 - \boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1} \mathbf{x}_2$ are independent. The matrix $\boldsymbol{\Sigma}_{12} \boldsymbol{\Sigma}_{22}^{-1}$ is known as the matrix of regression coefficients. Bivariate case: In the bivariate case where x is partitioned into X1 and X2, the conditional distribution of X1 given X2 is [14]

#### Flashcard 1729692830988

Tags
#multivariate-normal-distribution
Question

In the bivariate normal case the expression for the mutual information is [...]

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In the bivariate case the expression for the mutual information is:

#### Original toplevel document

Multivariate normal distribution - Wikipedia
where $\boldsymbol{\rho}_0$ is the correlation matrix constructed from $\boldsymbol{\Sigma}_0$. In the bivariate case the expression for the mutual information is: $I(x; y) = -\frac{1}{2} \ln(1 - \rho^2)$. Cumulative distribution function: The notion of cumulative distribution function (cdf) in dimension 1 can be extended in two ways to the multidimensional case, based
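A minimal sketch of the bivariate expression (the function name here is chosen for illustration): independence (ρ = 0) gives zero mutual information, and I grows without bound as |ρ| approaches 1.

```python
import numpy as np

def bivariate_normal_mi(rho):
    """Mutual information I(x; y) of a bivariate normal with correlation rho."""
    return -0.5 * np.log(1.0 - rho**2)
```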

#### Flashcard 1729696763148

Tags
#multivariate-normal-distribution
Question
The mutual information of a distribution is a special case of the Kullback–Leibler divergence in which P is [...] and Q is [...]
the full multivariate distribution, the product of the 1-dimensional marginal distributions

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The mutual information of a distribution is a special case of the Kullback–Leibler divergence in which P is the full multivariate distribution and Q is the product of the 1-dimensional marginal distributions

#### Original toplevel document

Multivariate normal distribution - Wikipedia
$D_{\text{KL}}(\mathcal{CN}_0 \| \mathcal{CN}_1) = \operatorname{tr}\left(\boldsymbol{\Sigma}_1^{-1} \boldsymbol{\Sigma}_0\right) - k + \ln \frac{|\boldsymbol{\Sigma}_1|}{|\boldsymbol{\Sigma}_0|}$. Mutual information: The mutual information of a distribution is a special case of the Kullback–Leibler divergence in which $P$ is the full multivariate distribution and $Q$ is the product of the 1-dimensional marginal distributions. In the notation of the Kullback–Leibler divergence section of this article, Σ 1

#### Flashcard 1729699122444

Tags
#multivariate-normal-distribution
Question
The multivariate normal distribution is often used to describe correlated real-valued random variables each of which [...]
clusters around a mean value

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value

#### Original toplevel document

Multivariate normal distribution - Wikipedia
e definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value.

#### Flashcard 1729700695308

Tags
#multivariate-normal-distribution
Question
a random vector is said to be k-variate normally distributed if [...] has a univariate normal distribution.
every linear combination of its k components

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution.

#### Original toplevel document

Multivariate normal distribution - Wikipedia
In probability theory and statistics, the multivariate normal distribution or multivariate Gaussian distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly)

#### Annotation 1729709870348

 #singular-value-decomposition In linear algebra, the singular-value decomposition (SVD) generalises the eigendecomposition of a positive semidefinite normal matrix (for example, a symmetric matrix with positive eigenvalues) to any matrix via an extension of the polar decomposition.

Singular-value decomposition - Wikipedia
nto three simple transformations: an initial rotation V∗, a scaling Σ along the coordinate axes, and a final rotation U. The lengths σ1 and σ2 of the semi-axes of the ellipse are the singular values of M, namely Σ1,1 and Σ2,2. In linear algebra, the singular-value decomposition (SVD) is a factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a positive semidefinite normal matrix (for example, a symmetric matrix with positive eigenvalues) to any m × n matrix via an extension of the polar decomposition. It has many useful applications in signal processing and statistics. Formally, the singular-value decomposition of an m × n

#### Annotation 1729711967500

 #singular-value-decomposition Formally, the singular-value decomposition of an m × n real or complex matrix M is a factorization of the form M = UΣV∗, where U is an m × m real or complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n real or complex unitary matrix.

Singular-value decomposition - Wikipedia
ositive eigenvalues) to any m × n matrix via an extension of the polar decomposition. It has many useful applications in signal processing and statistics. Formally, the singular-value decomposition of an m × n real or complex matrix M is a factorization of the form UΣV∗, where U is an m × m real or complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n real or complex unitary matrix. The diagonal entries σ i of
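A hedged NumPy sketch of the factorization M = UΣV∗, using an arbitrary 3 × 2 real matrix (so m = 3, n = 2, and Σ is rectangular diagonal):

```python
import numpy as np

M = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])  # arbitrary 3x2 example

# U is 3x3 unitary, Vh (= V*) is 2x2 unitary, s holds the
# non-negative singular values in descending order.
U, s, Vh = np.linalg.svd(M, full_matrices=True)

# Rebuild the 3x2 rectangular diagonal Sigma from the singular values.
Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(s)
```

The reconstruction `U @ Sigma @ Vh` recovers M, and both U and Vh are orthogonal (unitary in the real case).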

#### Flashcard 1729714326796

Tags
#singular-value-decomposition
Question
[...] generalises eigendecomposition of a positive semidefinite normal matrix to any matrix
singular-value decomposition (SVD)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In linear algebra, the singular-value decomposition (SVD) generalises the eigendecomposition of a positive semidefinite normal matrix (for example, a symmetric matrix with positive eigenvalues) to any matrix via an extension of the polar deco

#### Original toplevel document

Singular-value decomposition - Wikipedia
nto three simple transformations: an initial rotation V∗, a scaling Σ along the coordinate axes, and a final rotation U. The lengths σ1 and σ2 of the semi-axes of the ellipse are the singular values of M, namely Σ1,1 and Σ2,2. In linear algebra, the singular-value decomposition (SVD) is a factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a positive semidefinite normal matrix (for example, a symmetric matrix with positive eigenvalues) to any m × n matrix via an extension of the polar decomposition. It has many useful applications in signal processing and statistics. Formally, the singular-value decomposition of an m × n

#### Annotation 1729716686092

 #singular-value-decomposition Formally, the singular-value decomposition of an m × n real or complex matrix M is a factorization of the form

#### Parent (intermediate) annotation

Open it
Formally, the singular-value decomposition of an m × n real or complex matrix M is a factorization of the form M = UΣV∗, where U is an m × m real or complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n real or complex unitary matrix.

#### Original toplevel document

Singular-value decomposition - Wikipedia
ositive eigenvalues) to any m × n matrix via an extension of the polar decomposition. It has many useful applications in signal processing and statistics. Formally, the singular-value decomposition of an m × n real or complex matrix M is a factorization of the form UΣV∗, where U is an m × m real or complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n real or complex unitary matrix. The diagonal entries σ i of

#### Flashcard 1729718258956

Tags
#singular-value-decomposition
Question
singular-value decomposition factorises an m × n matrix M into the form [...]

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Formally, the singular-value decomposition of an m × n real or complex matrix M is a factorization of the form

#### Original toplevel document

Singular-value decomposition - Wikipedia
ositive eigenvalues) to any m × n matrix via an extension of the polar decomposition. It has many useful applications in signal processing and statistics. Formally, the singular-value decomposition of an m × n real or complex matrix M is a factorization of the form UΣV∗, where U is an m × m real or complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n real or complex unitary matrix. The diagonal entries σ i of

#### Flashcard 1729720618252

Tags
#singular-value-decomposition
Question
In a factorization of the form M = UΣV∗, the factors U, V, and Σ represent [...]
an m × m real or complex unitary matrix,
an n × n real or complex unitary matrix, and
an m × n rectangular diagonal matrix

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Formally, the singular-value decomposition of an m × n real or complex matrix M is a factorization of the form M = UΣV∗, where U is an m × m real or complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n real or complex unitary matrix.

#### Original toplevel document

Singular-value decomposition - Wikipedia
ositive eigenvalues) to any m × n matrix via an extension of the polar decomposition. It has many useful applications in signal processing and statistics. Formally, the singular-value decomposition of an m × n real or complex matrix M is a factorization of the form UΣV∗, where U is an m × m real or complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, and V is an n × n real or complex unitary matrix. The diagonal entries σ i of

#### Annotation 1729799261452

 #kalman-filter Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe.

Kalman filter - Wikipedia
into account; $P_{k\mid k-1}$ is the corresponding uncertainty. Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory. The Kalman filter has numerous applications in technology. A common application is for guidanc
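As an illustrative sketch only (a 1-D static model with made-up noise levels, not the general matrix form of the filter), the prediction/update blend can look like this: each measurement pulls the estimate toward itself by the Kalman gain, and the estimate's variance P shrinks as evidence accumulates.

```python
import numpy as np

rng = np.random.default_rng(0)
true_x = 5.0
# 50 noisy measurements of a constant scalar; measurement variance R = 1.
z = true_x + rng.normal(0.0, 1.0, size=50)

x_hat, P = 0.0, 100.0  # initial estimate and its (large) variance
R = 1.0                # measurement-noise variance
for zk in z:
    # Predict: the state is static, so the prior is the previous posterior.
    # Update: blend prediction and measurement by the Kalman gain K.
    K = P / (P + R)
    x_hat = x_hat + K * (zk - x_hat)
    P = (1.0 - K) * P
```

After the loop, `x_hat` is close to the true value and `P` is roughly R divided by the number of measurements, far smaller than any single measurement's variance.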

#### Annotation 1729833078028

 #matrix-decomposition In numerical analysis and linear algebra, LU decomposition (where 'LU' stands for 'lower upper', and also called LU factorization) factors a matrix as the product of a lower triangular matrix and an upper triangular matrix.

LU decomposition - Wikipedia
In numerical analysis and linear algebra, LU decomposition (where 'LU' stands for 'lower upper', and also called LU factorization) factors a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. The LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of lin

#### Annotation 1729835175180

 #matrix-decomposition Computers usually solve square systems of linear equations using the LU decomposition, and it is also a key step when inverting a matrix, or computing the determinant of a matrix.

LU decomposition - Wikipedia
rization) factors a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. The LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using the LU decomposition, and it is also a key step when inverting a matrix, or computing the determinant of a matrix. The LU decomposition was introduced by mathematician Tadeusz Banachiewicz in 1938. [1]
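A small sketch (using SciPy's `lu`, `lu_factor`, and `lu_solve` on an arbitrary 2 × 2 matrix) of factoring a square matrix and then solving a linear system through the factors, as a solver would:

```python
import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])  # arbitrary square example

# A = P @ L @ U, where P is a permutation matrix from partial pivoting,
# L is lower triangular and U is upper triangular.
P, L, U = lu(A)

# Solve A x = b by reusing the LU factors instead of inverting A.
b = np.array([10.0, 12.0])
x = lu_solve(lu_factor(A), b)
```

Reusing `lu_factor(A)` amortizes the O(n³) factorization across many right-hand sides, which is why solvers prefer it to computing A⁻¹.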

#### Flashcard 1730170981644

Tags
#matrix-decomposition
Question
Computers usually solve square systems of linear equations using [...]
the LU decomposition

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Computers usually solve square systems of linear equations using the LU decomposition, and it is also a key step when inverting a matrix, or computing the determinant of a matrix.

#### Original toplevel document

LU decomposition - Wikipedia
rization) factors a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. The LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using the LU decomposition, and it is also a key step when inverting a matrix, or computing the determinant of a matrix. The LU decomposition was introduced by mathematician Tadeusz Banachiewicz in 1938. [1]

#### Flashcard 1730172554508

Tags
#matrix-decomposition
Question
LU decomposition factors a matrix as the product of [...] and [...] .
a lower and an upper triangular matrix

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In numerical analysis and linear algebra, LU decomposition (where 'LU' stands for 'lower upper', and also called LU factorization) factors a matrix as the product of a lower triangular matrix and an upper triangular matrix.

#### Original toplevel document

LU decomposition - Wikipedia
In numerical analysis and linear algebra, LU decomposition (where 'LU' stands for 'lower upper', and also called LU factorization) factors a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. The LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of lin

#### Annotation 1731026881804

 #calculus Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that for any real number x, $e^{ix} = \cos x + i \sin x$, where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively, with the argument x given in radians.

Euler's formula - Wikipedia
Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that for any real number x, $e^{ix} = \cos x + i \sin x$, where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively, with the argument x given in radians. This complex exponential function is sometimes denoted cis x ("cosine plus i sine"). The formula is still valid if x is a complex number, and so some authors refer to the more
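A quick numerical check of the formula with Python's cmath, comparing e^{ix} against cos x + i sin x at a few sample angles (in radians):

```python
import cmath
import math

# Verify e^{ix} = cos(x) + i sin(x) for several angles.
for x in (0.0, 1.0, math.pi / 3, math.pi):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12
```

The special case x = π gives Euler's identity, e^{iπ} + 1 = 0.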

#### Annotation 1731435564300

 #singular-value-decomposition SVD as change of coordinates The geometric content of the SVD theorem can thus be summarized as follows: for every linear map T : Kn → Km one can find orthonormal bases of Kn and Km such that T maps the i-th basis vector of Kn to a non-negative multiple of the i-th basis vector of Km , and sends the left-over basis vectors to zero. With respect to these bases, the map T is therefore represented by a diagonal matrix with non-negative real diagonal entries.

Singular-value decomposition - Wikipedia
$T(\mathbf{V}_i) = \sigma_i \mathbf{U}_i, \qquad i = 1, \ldots, \min(m, n)$, where σ i is the i-th diagonal entry of Σ, and T(V i) = 0 for i > min(m,n). The geometric content of the SVD theorem can thus be summarized as follows: for every linear map T : K n → K m one can find orthonormal bases of K n and K m such that T maps the i-th basis vector of K n to a non-negative multiple of the i-th basis vector of K m, and sends the left-over basis vectors to zero. With respect to these bases, the map T is therefore represented by a diagonal matrix with non-negative real diagonal entries. To get a more visual flavour of singular values and SVD factorization — at least when working on real vector spaces — consider the sphere S of radius one in R n. The linear map T map

#### Flashcard 1731451292940

Tags
#singular-value-decomposition
Question
geometrically SVD finds [...] for every linear map T : KnKm
orthonormal bases of Kn and Km

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
SVD as change of coordinates The geometric content of the SVD theorem can thus be summarized as follows: for every linear map T : K n → K m one can find orthonormal bases of K n and K m such that T maps the i-th basis vector of K n to a non-negative multiple of the i-th basis vector of K m , and sends the left-over basis vectors to zero. With respect to these bases,

#### Original toplevel document

Singular-value decomposition - Wikipedia
$T(\mathbf{V}_i) = \sigma_i \mathbf{U}_i, \qquad i = 1, \ldots, \min(m, n)$, where σ i is the i-th diagonal entry of Σ, and T(V i) = 0 for i > min(m,n). The geometric content of the SVD theorem can thus be summarized as follows: for every linear map T : K n → K m one can find orthonormal bases of K n and K m such that T maps the i-th basis vector of K n to a non-negative multiple of the i-th basis vector of K m, and sends the left-over basis vectors to zero. With respect to these bases, the map T is therefore represented by a diagonal matrix with non-negative real diagonal entries. To get a more visual flavour of singular values and SVD factorization — at least when working on real vector spaces — consider the sphere S of radius one in R n. The linear map T map

#### Flashcard 1731453127948

Tags
#singular-value-decomposition
Question
Geometrically SVD finds orthonormal bases of Kn and Km for every linear map T : KnKm such that T maps the i-th basis vector of Kn to a non-negative multiple of the i-th basis vector of Km , and sends the left-over basis vectors to [...].
zero

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
ollows: for every linear map T : K n → K m one can find orthonormal bases of K n and K m such that T maps the i-th basis vector of K n to a non-negative multiple of the i-th basis vector of K m, and sends the left-over basis vectors to zero. With respect to these bases, the map T is therefore represented by a diagonal matrix with non-negative real diagonal entries.

#### Original toplevel document

Singular-value decomposition - Wikipedia
$T(\mathbf{V}_i) = \sigma_i \mathbf{U}_i, \qquad i = 1, \ldots, \min(m, n)$, where σ i is the i-th diagonal entry of Σ, and T(V i) = 0 for i > min(m,n). The geometric content of the SVD theorem can thus be summarized as follows: for every linear map T : K n → K m one can find orthonormal bases of K n and K m such that T maps the i-th basis vector of K n to a non-negative multiple of the i-th basis vector of K m, and sends the left-over basis vectors to zero. With respect to these bases, the map T is therefore represented by a diagonal matrix with non-negative real diagonal entries. To get a more visual flavour of singular values and SVD factorization — at least when working on real vector spaces — consider the sphere S of radius one in R n. The linear map T map

#### Flashcard 1731454700812

Tags
#singular-value-decomposition
Question
With SVD geometrically every linear map T : KnKm is represented by a diagonal matrix with [...] entries.
non-negative real diagonal

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
h that T maps the i-th basis vector of K n to a non-negative multiple of the i-th basis vector of K m, and sends the left-over basis vectors to zero. With respect to these bases, the map T is therefore represented by a diagonal matrix with non-negative real diagonal entries.

#### Original toplevel document

Singular-value decomposition - Wikipedia
$T(\mathbf{V}_i) = \sigma_i \mathbf{U}_i, \qquad i = 1, \ldots, \min(m, n)$, where σ i is the i-th diagonal entry of Σ, and T(V i) = 0 for i > min(m,n). The geometric content of the SVD theorem can thus be summarized as follows: for every linear map T : K n → K m one can find orthonormal bases of K n and K m such that T maps the i-th basis vector of K n to a non-negative multiple of the i-th basis vector of K m, and sends the left-over basis vectors to zero. With respect to these bases, the map T is therefore represented by a diagonal matrix with non-negative real diagonal entries. To get a more visual flavour of singular values and SVD factorization — at least when working on real vector spaces — consider the sphere S of radius one in R n. The linear map T map

#### Annotation 1731457584396

 #matrix In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P such that P−1AP is a diagonal matrix.

Diagonalizable matrix - Wikipedia
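A hedged NumPy sketch of the definition: for this arbitrary diagonalizable example, the eigenvector matrix P supplies the similarity transform, and P⁻¹AP comes out as the diagonal matrix of eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # arbitrary matrix with distinct eigenvalues

# Columns of P are eigenvectors of A; distinct eigenvalues guarantee
# that P is invertible, so A is diagonalizable.
eigvals, P = np.linalg.eig(A)

# Similarity transform: D = P^{-1} A P should be diag(eigvals).
D = np.linalg.inv(P) @ A @ P
```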

#### Flashcard 1731460205836

Tags
#matrix
Question
a square matrix A is called diagonalizable if it is similar to [...]

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P such that P −1 AP is a diagonal matrix.

#### Original toplevel document

Diagonalizable matrix - Wikipedia

#### Flashcard 1731484323084

Tags
#spanish
Question
use conditional simple to [...]:
María me dijo que estaría en casa para las 11, pero no ha aparecido aún.
refer to the future from the past

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
To refer to the future from a moment in the past: María me dijo que estaría en casa para las 11, pero no ha aparecido aún (María told me she’d be at home by 11, but she hasn’t turned up yet).

#### Original toplevel document

Open it
e can postpone our trip for some hours, but we would arrive quite late). To express uncertainty in the past: No sabía si estarías en la oficina, por eso no te llamé (I didn't know whether you'd be at the office. That's why I didn't call you). To refer to the future from a moment in the past: María me dijo que estaría en casa para las 11, pero no ha aparecido aún (María told me she'd be at home by 11, but she hasn't turned up yet).

#### Flashcard 1731535703308

Tags
#variational-inference
Question
Variational Bayesian methods are a family of techniques for approximating [...] arising in Bayesian inference and machine learning.
intractable integrals

In Bayesian inference this manifests as calculating marginal posteriors

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning.

#### Original toplevel document

Variational Bayesian methods - Wikipedia
Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables (usually termed "data") as well as unknown parameters and latent variables, with various

#### Flashcard 1732530801932

Tags
#spanish
Question
- El Ministro de Economía [...] a su cargo. (Él no quiere seguir trabajando como Ministro)
renunció

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
- El Ministro de Economía renunció a su cargo. (Él no quiere seguir trabajando como Ministro)

#### Original toplevel document

Unknown title
Cancel 0 comment(s) Show previous comments Please enter between 2 and 2000 characters. Characters remaining: 2000 Submit Cancel Answers Time: oldest to newest Time: newest to oldest Votes: highest to lowest [imagelink] Hola Svetlana,<span>- Ella rechazó su ayuda pues él no tenía buenas intenciones. (Ella no quiso aceptar la ayuda que él le ofrecía)- El Ministro de Economía renunció a su cargo. (Él no quiere seguir trabajando como Ministro)- Aunque él le explicó sus razones, ella le negó su ayuda. (Ella no quiso ayudarlo).Espero sea de ayuda! Please enter between 2 and 2000 characters. If you copy an answer from another italki page, please include the URL of the original page. Characters remaining: 1673 U

#### Annotation 1732621765900

 #distributions The compound distribution ("unconditional distribution") is the result of marginalizing (integrating) over the latent random variable(s) representing the parameter(s) of the parametrized distribution ("conditional distribution").

Compound probability distribution - Wikipedia
istribution) is the probability distribution that results from assuming that a random variable is distributed according to some parametrized distribution, with (some of) the parameters of that distribution themselves being random variables. <span>The compound distribution ("unconditional distribution") is the result of marginalizing (integrating) over the latent random variable(s) representing the parameter(s) of the parametrized distribution ("conditional distribution"). Contents [hide] 1 Definition 2 Properties 3 Applications 3.1 Testing 3.2 Overdispersion modeling 3.3 Bayesian inference 3.4 Convolution 4 Computation 5 Examples 6 See als
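
As a concrete instance of this marginalization, a Gamma-mixed Poisson draws the latent rate first and then the conditional Poisson; marginally this is the negative binomial compound distribution. The sketch below is illustrative only: the shape/scale values `k`, `theta` and the helper names are assumptions, not from the source.

```python
import math
import random
import statistics

random.seed(0)

# Latent parameter: lam ~ Gamma(shape=k, scale=theta).
# Conditional distribution: X | lam ~ Poisson(lam).
# Marginalizing lam out gives the negative binomial compound
# distribution, with mean k*theta and variance k*theta*(1 + theta).
k, theta = 3.0, 2.0

def sample_poisson(lam):
    # Knuth's inversion method; adequate for moderate lam
    limit, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= random.random()
        if p <= limit:
            return n
        n += 1

def sample_compound():
    lam = random.gammavariate(k, theta)  # draw the latent parameter
    return sample_poisson(lam)           # then draw from the conditional

draws = [sample_compound() for _ in range(20000)]
m, v = statistics.fmean(draws), statistics.pvariance(draws)
# m should land near k*theta = 6, v near k*theta*(1 + theta) = 18
```

The compounding shows up in the variance: it exceeds the mean (overdispersion), which a plain Poisson cannot do.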

#### Flashcard 1732623338764

Tags
#distributions
Question
The compound distribution is the result of [...] the latent random variables representing the parameters of the parametrized distribution
marginalizing out

Also called integrating over

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The compound distribution ("unconditional distribution") is the result of marginalizing (integrating) over the latent random variable(s) representing the parameter(s) of the parametrized distribution ("conditional distribution").

#### Original toplevel document

Compound probability distribution - Wikipedia
istribution) is the probability distribution that results from assuming that a random variable is distributed according to some parametrized distribution, with (some of) the parameters of that distribution themselves being random variables. <span>The compound distribution ("unconditional distribution") is the result of marginalizing (integrating) over the latent random variable(s) representing the parameter(s) of the parametrized distribution ("conditional distribution"). Contents [hide] 1 Definition 2 Properties 3 Applications 3.1 Testing 3.2 Overdispersion modeling 3.3 Bayesian inference 3.4 Convolution 4 Computation 5 Examples 6 See als

#### Flashcard 1732624911628

Tags
#distributions
Question
The compound distribution is integrated over [...] that represents the parameters of the parametrized distribution
latent random variables

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The compound distribution ("unconditional distribution") is the result of marginalizing (integrating) over the latent random variable(s) representing the parameter(s) of the parametrized distribution ("conditional distribution").

#### Original toplevel document

Compound probability distribution - Wikipedia
istribution) is the probability distribution that results from assuming that a random variable is distributed according to some parametrized distribution, with (some of) the parameters of that distribution themselves being random variables. <span>The compound distribution ("unconditional distribution") is the result of marginalizing (integrating) over the latent random variable(s) representing the parameter(s) of the parametrized distribution ("conditional distribution"). Contents [hide] 1 Definition 2 Properties 3 Applications 3.1 Testing 3.2 Overdispersion modeling 3.3 Bayesian inference 3.4 Convolution 4 Computation 5 Examples 6 See als

#### Flashcard 1733059022092

Tags
#kalman-filter
Question
Kalman filtering estimates a [...] over the variables for each timeframe.
a joint probability distribution

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
hat uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe.

#### Original toplevel document

Kalman filter - Wikipedia
into account; P k ∣ k − 1 {\displaystyle P_{k\mid k-1}} is the corresponding uncertainty. <span>Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory. The Kalman filter has numerous applications in technology. A common application is for guidanc

#### Annotation 1735706676492

 #topology a set is open if it doesn't contain any of its boundary points

Open set - Wikipedia
set is an abstract concept generalizing the idea of an open interval in the real line. The simplest example is in metric spaces, where open sets can be defined as those sets which contain a ball around each of their points (or, equivalently, <span>a set is open if it doesn't contain any of its boundary points); however, an open set, in general, can be very abstract: any collection of sets can be called open, as long as the union of an arbitrary number of open sets is open, the intersection o

#### Flashcard 1735708773644

Tags
#topology
Question
a set is open if it doesn't contain any of its [...]
boundary points

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
a set is open if it doesn't contain any of its boundary points

#### Original toplevel document

Open set - Wikipedia
set is an abstract concept generalizing the idea of an open interval in the real line. The simplest example is in metric spaces, where open sets can be defined as those sets which contain a ball around each of their points (or, equivalently, <span>a set is open if it doesn't contain any of its boundary points); however, an open set, in general, can be very abstract: any collection of sets can be called open, as long as the union of an arbitrary number of open sets is open, the intersection o

#### Flashcard 1735825165580

Tags
#logic
Question
Instead of justification of ideas, early modern authors emphasise the role of [...] and [...]
novelty and individual discovery

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Instead, early modern authors emphasise the role of novelty and individual discovery

#### Original toplevel document

The rise and fall and rise of logic | Aeon Essays
tually unthinkable before the wide availability of printed books) was well-established. Moreover, as indicated by the passage from Descartes above, the very term ‘logic’ came to be used for something other than what the scholastics had meant. <span>Instead, early modern authors emphasise the role of novelty and individual discovery, as exemplified by the influential textbook Port-Royal Logic (1662), essentially, the logical version of Cartesianism, based on Descartes’s conception of mental operations and the prima

#### Flashcard 1736031472908

Tags
#stochastics
Question
With a nonhomogeneous Poisson process, the [...] of points of the process is no longer constant.
average density

The density is determined by the rate function λ(t).

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
If the parameter constant of the Poisson process is replaced with some non-negative integrable function of t, the resulting process is called an inhomogeneous or nonhomogeneous Poisson process, where the average density of points of the process is no longer constant.

#### Original toplevel document

Stochastic process - Wikipedia
sses. [49] The homogeneous Poisson process can be defined and generalized in different ways. It can be defined such that its index set is the real line, and this stochastic process is also called the stationary Poisson process. [102] [103] <span>If the parameter constant of the Poisson process is replaced with some non-negative integrable function of t {\displaystyle t} , the resulting process is called an inhomogeneous or nonhomogeneous Poisson process, where the average density of points of the process is no longer constant. [104] Serving as a fundamental process in queueing theory, the Poisson process is an important process for mathematical models, where it finds applications for models of events randoml
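
A standard way to simulate the inhomogeneous case is thinning (the Lewis-Shedler method): generate candidates from a homogeneous process at a dominating rate and keep each one with probability λ(t)/λ_max. The rate function below is an illustrative assumption, not from the source.

```python
import math
import random

random.seed(1)

def rate(t):
    # illustrative rate function, bounded above by 4
    return 2.0 + 2.0 * math.sin(t)

def sample_inhomogeneous(T, lam_max=4.0):
    # thinning: candidate arrivals from a homogeneous Poisson process
    # of rate lam_max, each accepted with probability rate(t) / lam_max
    times, t = [], 0.0
    while True:
        t += random.expovariate(lam_max)
        if t > T:
            return times
        if random.random() <= rate(t) / lam_max:
            times.append(t)

events = sample_inhomogeneous(10.0)
# events are ordered arrival times in (0, 10]; their local density
# follows rate(t) rather than staying constant
```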

#### Flashcard 1736175914252

Tags
#matrix
Question
a square matrix A is called diagonalizable if there exists an invertible matrix P such that [...] is a diagonal matrix.
P−1AP

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P such that P −1 AP is a diagonal matrix.

#### Original toplevel document

Diagonalizable matrix - Wikipedia
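
As a hand-checkable sketch of the definition, the 2×2 symmetric matrix below (an arbitrary example, not from the source) is diagonalized without any linear-algebra library, and P−1AP comes out diagonal:

```python
import math

# A = [[a, b], [b, d]] symmetric, with b != 0 so the eigenvector
# formula below is valid; eigenvalues are 3 and 1 for this choice.
a, b, d = 2.0, 1.0, 2.0

tr, det = a + d, a * d - b * b
disc = math.sqrt(tr * tr - 4 * det)
l1, l2 = (tr + disc) / 2, (tr - disc) / 2  # eigenvalues

def eigvec(l):
    # for [[a, b], [b, d]] with b != 0, (b, l - a) solves (A - l I)v = 0
    vx, vy = b, l - a
    n = math.hypot(vx, vy)
    return (vx / n, vy / n)

v1, v2 = eigvec(l1), eigvec(l2)
P = [[v1[0], v2[0]], [v1[1], v2[1]]]  # columns are eigenvectors

detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP, P[0][0] / detP]]
A = [[a, b], [b, d]]

def matmul(X, Y):
    return [[sum(X[i][n] * Y[n][j] for n in range(2)) for j in range(2)]
            for i in range(2)]

D = matmul(matmul(Pinv, A), P)
# D is diag(l1, l2) = diag(3, 1) up to floating-point rounding
```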

#### Annotation 1738538093836

 #multivariate-normal-distribution If the covariance matrix is not full rank, then the multivariate normal distribution is degenerate and does not have a density.

Multivariate normal distribution - Wikipedia
operatorname {sgn}(\rho ){\frac {\sigma _{Y}}{\sigma _{X}}}(x-\mu _{X})+\mu _{Y}.} This is because this expression, with sgn(ρ) replaced by ρ, is the best linear unbiased prediction of Y given a value of X. [4] Degenerate case <span>If the covariance matrix Σ {\displaystyle {\boldsymbol {\Sigma }}} is not full rank, then the multivariate normal distribution is degenerate and does not have a density. More precisely, it does not have a density with respect to k-dimensional Lebesgue measure (which is the usual measure assumed in calculus-level probability courses). Only random vectors
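
A minimal numerical illustration of the degenerate case (values chosen for the example): a rank-1 covariance matrix has zero determinant, so the usual density formula, which divides by the square root of det Σ, is undefined, and samples realizing that covariance all fall on a line of zero area in the plane.

```python
import random

random.seed(2)

# rank-1 covariance: Sigma = [[1, 1], [1, 1]]
Sigma = [[1.0, 1.0], [1.0, 1.0]]
det = Sigma[0][0] * Sigma[1][1] - Sigma[0][1] * Sigma[1][0]
# det == 0, so 1 / sqrt(det) in the density formula is undefined

# X = (Z, Z) with Z ~ N(0, 1) has exactly this covariance matrix;
# every sample satisfies x1 == x2, i.e. lies on a line in the plane
samples = [(z, z) for z in (random.gauss(0.0, 1.0) for _ in range(5))]
```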

#### Flashcard 1738588949772

Tags
#multivariate-normal-distribution
Question
If [...], then the multivariate normal distribution is degenerate and does not have a density.
the covariance matrix is not full rank

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
If the covariance matrix is not full rank, then the multivariate normal distribution is degenerate and does not have a density.

#### Original toplevel document

Multivariate normal distribution - Wikipedia
operatorname {sgn}(\rho ){\frac {\sigma _{Y}}{\sigma _{X}}}(x-\mu _{X})+\mu _{Y}.} This is because this expression, with sgn(ρ) replaced by ρ, is the best linear unbiased prediction of Y given a value of X. [4] Degenerate case <span>If the covariance matrix Σ {\displaystyle {\boldsymbol {\Sigma }}} is not full rank, then the multivariate normal distribution is degenerate and does not have a density. More precisely, it does not have a density with respect to k-dimensional Lebesgue measure (which is the usual measure assumed in calculus-level probability courses). Only random vectors

#### Annotation 1738844278028

 #metric-space In mathematics, the Banach fixed-point theorem (also known as the contraction mapping theorem) is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces, and provides a constructive method to find those fixed points.

Banach fixed-point theorem - Wikipedia
Banach fixed-point theorem - Wikipedia Banach fixed-point theorem From Wikipedia, the free encyclopedia Jump to: navigation, search In mathematics, the Banach fixed-point theorem (also known as the contraction mapping theorem or contraction mapping principle) is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces, and provides a constructive method to find those fixed points. The theorem is named after Stefan Banach (1892–1945), and was first stated by him in 1922. [1] Contents [hide] 1 Statement 2 Proofs 2.1 Banach's original proof 2.2 Shorter

#### Flashcard 1738846375180

Tags
#metric-space
Question
the Banach fixed-point theorem guarantees [...] of certain self-maps of metric spaces, and provides a constructive method to find those fixed points.
the existence and uniqueness of fixed points

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In mathematics, the Banach fixed-point theorem (also known as the contraction mapping theorem or contraction mapping principle) is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces, and provides a constructive method to find those fixed points.

#### Original toplevel document

Banach fixed-point theorem - Wikipedia
Banach fixed-point theorem - Wikipedia Banach fixed-point theorem From Wikipedia, the free encyclopedia Jump to: navigation, search In mathematics, the Banach fixed-point theorem (also known as the contraction mapping theorem or contraction mapping principle) is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces, and provides a constructive method to find those fixed points. The theorem is named after Stefan Banach (1892–1945), and was first stated by him in 1922. [1] Contents [hide] 1 Statement 2 Proofs 2.1 Banach's original proof 2.2 Shorter

#### Flashcard 1738856074508

Tags
#metric-space
Question
the Banach fixed-point theorem is also known as [...]
the contraction mapping theorem

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In mathematics, the Banach fixed-point theorem (also known as the contraction mapping theorem) is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces, and provides a constructive metho

#### Original toplevel document

Banach fixed-point theorem - Wikipedia
Banach fixed-point theorem - Wikipedia Banach fixed-point theorem From Wikipedia, the free encyclopedia Jump to: navigation, search In mathematics, the Banach fixed-point theorem (also known as the contraction mapping theorem or contraction mapping principle) is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces, and provides a constructive method to find those fixed points. The theorem is named after Stefan Banach (1892–1945), and was first stated by him in 1922. [1] Contents [hide] 1 Statement 2 Proofs 2.1 Banach's original proof 2.2 Shorter

#### Annotation 1738858695948

 #metric-space Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that d(T(x), T(y)) ≤ q d(x, y) for all x, y in X.

Banach fixed-point theorem - Wikipedia
was first stated by him in 1922. [1] Contents [hide] 1 Statement 2 Proofs 2.1 Banach's original proof 2.2 Shorter proof 3 Applications 4 Converses 5 Generalizations 6 See also 7 Notes 8 References Statement[edit source] <span>Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that d ( T ( x ) , T ( y ) ) ≤ q d ( x , y ) {\displaystyle d(T(x),T(y))\leq qd(x,y)} for all x, y in X. Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furth

#### Annotation 1738860268812

 #metric-space Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x0 in X and define a sequence {xn} by xn = T(xn−1), then xn → x* .

Banach fixed-point theorem - Wikipedia
x ) , T ( y ) ) ≤ q d ( x , y ) {\displaystyle d(T(x),T(y))\leq qd(x,y)} for all x, y in X. <span>Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x 0 in X and define a sequence {x n } by x n = T(x n−1 ), then x n → x*. Remark 1. The following inequalities are equivalent and describe the speed of convergence: d
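
The constructive iteration in the theorem can be sketched with T(x) = cos x, which is a contraction on [0, 1] since |T'(x)| = |sin x| ≤ sin 1 < 1 there; iterating from an arbitrary starting point converges to the unique fixed point (the Dottie number, about 0.739085). The choice of map is illustrative.

```python
import math

def T(x):
    # contraction on [0, 1]: |T'(x)| = |sin x| <= sin(1) < 1
    return math.cos(x)

x = 0.0  # arbitrary starting element x0
for _ in range(100):
    x = T(x)  # x_n = T(x_{n-1})

# x is now (numerically) the unique fixed point x* with cos(x*) == x*
```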

#### Flashcard 1738862628108

Tags
#metric-space
Question

Definition. Let (X, d) be a metric space. Then a map T : XX is called a [...] on X if there exists q ∈ [0, 1) such that d(T(x), T(y)) ≤ q d(x, y) for all x, y in X.
contraction mapping

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that d(T(x), T(y)) ≤ q d(x, y) for all x, y in X.

#### Original toplevel document

Banach fixed-point theorem - Wikipedia
was first stated by him in 1922. [1] Contents [hide] 1 Statement 2 Proofs 2.1 Banach's original proof 2.2 Shorter proof 3 Applications 4 Converses 5 Generalizations 6 See also 7 Notes 8 References Statement[edit source] <span>Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that d ( T ( x ) , T ( y ) ) ≤ q d ( x , y ) {\displaystyle d(T(x),T(y))\leq qd(x,y)} for all x, y in X. Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furth

#### Flashcard 1738864200972

Tags
#metric-space
Question

Definition. Let (X, d) be a metric space. Then a map T : XX is called a contraction mapping on X if there exists q ∈ [0, 1) such that [...] for all x, y in X.
d(T(x), T(y)) ≤ q d(x, y)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that d(T(x), T(y)) ≤ q d(x, y) for all x, y in X.

#### Original toplevel document

Banach fixed-point theorem - Wikipedia
was first stated by him in 1922. [1] Contents [hide] 1 Statement 2 Proofs 2.1 Banach's original proof 2.2 Shorter proof 3 Applications 4 Converses 5 Generalizations 6 See also 7 Notes 8 References Statement[edit source] <span>Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that d ( T ( x ) , T ( y ) ) ≤ q d ( x , y ) {\displaystyle d(T(x),T(y))\leq qd(x,y)} for all x, y in X. Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furth

#### Flashcard 1738867084556

Tags
#metric-space
Question
Let (X, d) be a non-empty complete metric space with a contraction mapping T : XX. Then T admits [...]
a unique fixed-point x* in X

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x0 in X and define a sequence {xn} by xn = T(xn−1), then xn → x*.

#### Original toplevel document

Banach fixed-point theorem - Wikipedia
x ) , T ( y ) ) ≤ q d ( x , y ) {\displaystyle d(T(x),T(y))\leq qd(x,y)} for all x, y in X. <span>Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x 0 in X and define a sequence {x n } by x n = T(x n−1 ), then x n → x*. Remark 1. The following inequalities are equivalent and describe the speed of convergence: d

#### Flashcard 1738868919564

Tags
#metric-space
Question
When a complete metric space admits a contraction mapping T : XX. The fixed point for the map can be found as follows: start with [...] and define a sequence {xn} by xn = T(xn−1), then xnx* .
an arbitrary element x0 in X

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x0 in X and define a sequence {xn} by xn = T(xn−1), then xn → x*.

#### Original toplevel document

Banach fixed-point theorem - Wikipedia
x ) , T ( y ) ) ≤ q d ( x , y ) {\displaystyle d(T(x),T(y))\leq qd(x,y)} for all x, y in X. <span>Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x 0 in X and define a sequence {x n } by x n = T(x n−1 ), then x n → x*. Remark 1. The following inequalities are equivalent and describe the speed of convergence: d

#### Flashcard 1738870492428

Tags
#metric-space
Question
Using the Banach Fixed Point Theorem the fixed point x* can be found by starting with an arbitrary element x0 in X and define a sequence {xn} by [...], then xnx* .
xn = T(xn−1)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x0 in X and define a sequence {xn} by xn = T(xn−1), then xn → x*.

#### Original toplevel document

Banach fixed-point theorem - Wikipedia
x ) , T ( y ) ) ≤ q d ( x , y ) {\displaystyle d(T(x),T(y))\leq qd(x,y)} for all x, y in X. <span>Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x 0 in X and define a sequence {x n } by x n = T(x n−1 ), then x n → x*. Remark 1. The following inequalities are equivalent and describe the speed of convergence: d

#### Annotation 1739411819788

 #kalman-filter The underlying model of Kalman filter is similar to a hidden Markov model except that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions.

Kalman filter - Wikipedia
stimate in the special case that all errors are Gaussian-distributed. Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. <span>The underlying model is similar to a hidden Markov model except that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Contents [hide] 1 History 2 Overview of the calculation 3 Example application 4 Technical description and context 5 Underlying dynamical system model 6 Details 6.1 Predict
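
A minimal scalar sketch of this Gaussian latent-state model: a constant hidden state observed through Gaussian measurement noise. The readings and the noise variance below are hand-picked illustrations, not from the source.

```python
def kalman_constant(measurements, meas_var, init_est=0.0, init_var=1e6):
    # constant-state model: the predict step carries the estimate over
    # unchanged; the update step blends prior and measurement via the gain
    est, var = init_est, init_var
    history = []
    for z in measurements:
        gain = var / (var + meas_var)
        est = est + gain * (z - est)
        var = (1.0 - gain) * var
        history.append((est, var))
    return history

# noisy readings of a true value near 5.0 (hand-picked)
readings = [4.8, 5.3, 4.9, 5.2, 5.1, 4.95]
hist = kalman_constant(readings, meas_var=0.04)
final_est, final_var = hist[-1]
# with a diffuse prior the estimate tracks the running mean,
# and the posterior variance shrinks with every measurement
```

The shrinking variance is the "more accurate than any single measurement" claim in the text made concrete: after n readings the posterior variance is roughly meas_var / n.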

#### Flashcard 1739414965516

Tags
#kalman-filter
Question
In the Kalman filter, [...] variables have Gaussian distributions.
all latent and observed

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The underlying model of Kalman filter is similar to a hidden Markov model except that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions.

#### Original toplevel document

Kalman filter - Wikipedia
stimate in the special case that all errors are Gaussian-distributed. Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. <span>The underlying model is similar to a hidden Markov model except that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Contents [hide] 1 History 2 Overview of the calculation 3 Example application 4 Technical description and context 5 Underlying dynamical system model 6 Details 6.1 Predict

#### Annotation 1741781339404

 #dot-product In mathematics, the dot product or scalar product[note 1] is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number.

Dot product - Wikipedia
ia Jump to: navigation, search "Scalar product" redirects here. For the abstract scalar product, see Inner product space. For the product of a vector and a scalar, see Scalar multiplication. <span>In mathematics, the dot product or scalar product [note 1] is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used and often called inner product (or rarely projection product); see also inner product s

#### Annotation 1741785533708

 #dot-product if a and b are orthogonal, then the angle between them is 90° and a ⋅ b = 0.

Dot product - Wikipedia
‖ cos ⁡ ( θ ) , {\displaystyle \mathbf {a} \cdot \mathbf {b} =\|\mathbf {a} \|\ \|\mathbf {b} \|\cos(\theta ),} where θ is the angle between a and b. In particular, <span>if a and b are orthogonal, then the angle between them is 90° and a ⋅ b = 0. {\displaystyle \mathbf {a} \cdot \mathbf {b} =0.} At the other extreme, if they are codirectional, then the angle between them is 0° and a ⋅ b
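
Both facts (the dot product as a single number computed from two equal-length sequences, and a ⋅ b = 0 for orthogonal vectors) fit in a few lines of pure Python:

```python
import math

def dot(a, b):
    # equal-length sequences of numbers in, single number out
    assert len(a) == len(b), "sequences must have equal length"
    return sum(x * y for x, y in zip(a, b))

def angle_deg(a, b):
    # recover the angle from a.b = |a| |b| cos(theta)
    na, nb = math.sqrt(dot(a, a)), math.sqrt(dot(b, b))
    return math.degrees(math.acos(dot(a, b) / (na * nb)))

d1 = dot([1, 2, 3], [4, 5, 6])   # 4 + 10 + 18 = 32
d2 = dot([1, 0], [0, 1])         # orthogonal vectors: 0
ang = angle_deg([1, 0], [0, 1])  # 90 degrees
```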

#### Annotation 1743937998092

 #logic The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.

Mathematical logic - Wikipedia
f mathematics). Mathematical logic is a subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. [1] <span>The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems. Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. These areas share basic results on logic, particularly first-order

#### Flashcard 1744152694028

Tags
#lebesgue-integration
Question
Riemann integration does not interact well with [...]
taking limits of sequences of functions

Think of the Cantor devil's staircase. Integration is essentially the limit of sums.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
However, Riemann integration does not interact well with taking limits of sequences of functions

#### Original toplevel document

Lebesgue integration - Wikipedia
e of easily calculated areas that converge to the integral of a given function. This definition is successful in the sense that it gives the expected answer for many already-solved problems, and gives useful results for many other problems. <span>However, Riemann integration does not interact well with taking limits of sequences of functions, making such limiting processes difficult to analyze. This is important, for instance, in the study of Fourier series, Fourier transforms, and other topics. The Lebesgue integral is bet

#### Flashcard 1744165539084

Tags
#lebesgue-integration
Question
the Banach-Tarski paradox suggests that picking out [...] is an essential prerequisite.
a suitable class of measurable subsets

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
er set theory developments showed (see non-measurable set), it is actually impossible to assign a length to all subsets of ℝ in a way that preserves some natural additivity and translation invariance properties. This suggests that picking out a suitable class of measurable subsets is an essential prerequisite.

#### Original toplevel document

Lebesgue integration - Wikipedia
a useful abstraction of the notion of length of subsets of the real line—and, more generally, area and volume of subsets of Euclidean spaces. In particular, it provided a systematic answer to the question of which subsets of ℝ have a length. <span>As later set theory developments showed (see non-measurable set), it is actually impossible to assign a length to all subsets of ℝ in a way that preserves some natural additivity and translation invariance properties. This suggests that picking out a suitable class of measurable subsets is an essential prerequisite. The Riemann integral uses the notion of length explicitly. Indeed, the element of calculation for the Riemann integral is the rectangle [a, b] × [c, d], whose area is calculated to be

#### Flashcard 1744250473740

Tags
#dot-product
Question
[...] takes two equal-length sequences of numbers and returns a single number.
dot product

or scalar product

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In mathematics, the dot product or scalar product [note 1] is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number.

#### Original toplevel document

Dot product - Wikipedia
ia Jump to: navigation, search "Scalar product" redirects here. For the abstract scalar product, see Inner product space. For the product of a vector and a scalar, see Scalar multiplication. <span>In mathematics, the dot product or scalar product [note 1] is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used and often called inner product (or rarely projection product); see also inner product s

#### Flashcard 1744253357324

Tags
#dot-product
Question

if a and b are [...], then the angle between them is 90° and a ⋅ b = 0.
orthogonal

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
if a and b are orthogonal, then the angle between them is 90° and a ⋅ b = 0.

#### Original toplevel document

Dot product - Wikipedia
‖ cos ⁡ ( θ ) , {\displaystyle \mathbf {a} \cdot \mathbf {b} =\|\mathbf {a} \|\ \|\mathbf {b} \|\cos(\theta ),} where θ is the angle between a and b. In particular, <span>if a and b are orthogonal, then the angle between them is 90° and a ⋅ b = 0. {\displaystyle \mathbf {a} \cdot \mathbf {b} =0.} At the other extreme, if they are codirectional, then the angle between them is 0° and a ⋅ b

#### Flashcard 1744268561676

Tags
#calculus-of-variations
Question
In Calculus of variations functionals are mappings from [...] to the real numbers.
a set of functions

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
ead><head> Calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals, which are mappings from a set of functions to the real numbers. elementary calculus is about infinitesimally small changes in the values of functions without changes in the function itself, calculus of variations is about infinitesimally small

#### Original toplevel document

Calculus of variations - Wikipedia
l Line integral Surface integral Volume integral Jacobian Hessian matrix Specialized[hide] Fractional Malliavin Stochastic Variations Glossary of calculus[show] Glossary of calculus v t e <span>Calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals, which are mappings from a set of functions to the real numbers. [Note 1] Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–L

#### Flashcard 1744276163852

Tags
#logic
Question
The unifying themes in mathematical logic include the study of [...] and the deductive power of formal proof systems.
the expressive power of formal systems

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.

#### Original toplevel document

Mathematical logic - Wikipedia
f mathematics). Mathematical logic is a subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. [1] <span>The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems. Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. These areas share basic results on logic, particularly first-order

#### Flashcard 1744277736716

Tags
#logic
Question
The unifying themes in mathematical logic include the study of the expressive power of formal systems and [...].
the deductive power of formal proof systems

How does it relate to the inductive process of Bayesian reasoning?

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.

#### Original toplevel document

Mathematical logic - Wikipedia
f mathematics). Mathematical logic is a subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. [1] <span>The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems. Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. These areas share basic results on logic, particularly first-order

#### Flashcard 1744279309580

Tags
#logic
Question
The unifying themes in [...] include the study of the expressive power of formal systems and the deductive power of formal proof systems.
mathematical logic

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.

#### Original toplevel document

Mathematical logic - Wikipedia
f mathematics). Mathematical logic is a subfield of mathematics exploring the applications of formal logic to mathematics. It bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science. [1] <span>The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems. Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. These areas share basic results on logic, particularly first-order

#### Flashcard 1744289533196

Tags
#lebesgue-integration
Question

a measurable simple function is [...description...]

a finite linear combination of indicator functions

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A finite linear combination of indicator functions where the coefficients a k are real numbers and the sets S k are measurable, is called a measurable simple function.

#### Original toplevel document

Lebesgue integration - Wikipedia
μ = μ ( S ) . {\displaystyle \int 1_{S}\,\mathrm {d} \mu =\mu (S).} Notice that the result may be equal to +∞, unless μ is a finite measure. Simple functions: <span>A finite linear combination of indicator functions ∑ k a k 1 S k {\displaystyle \sum _{k}a_{k}1_{S_{k}}} where the coefficients a k are real numbers and the sets S k are measurable, is called a measurable simple function. We extend the integral by linearity to non-negative measurable simple functions. When the coefficients a k are non-negative, we set ∫ (
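
Since the integral extends by linearity from indicator functions (∫1_S dμ = μ(S)), a simple function can be integrated against a discrete measure in a few lines. My own sketch; `mu`, `simple_integral`, and the example weights are all made up for illustration:

```python
def mu(S, weights):
    # measure of a set S under a discrete measure: sum of point weights in S
    return sum(weights[x] for x in S)

def simple_integral(terms, weights):
    # terms: list of (a_k, S_k) pairs representing sum_k a_k * 1_{S_k};
    # by linearity, the integral is sum_k a_k * mu(S_k)
    return sum(a * mu(S, weights) for a, S in terms)

weights = {0: 0.5, 1: 0.25, 2: 0.25}    # a probability measure on {0, 1, 2}
terms = [(2.0, {0, 1}), (4.0, {2})]     # f = 2*1_{{0,1}} + 4*1_{{2}}
print(simple_integral(terms, weights))  # 2*0.75 + 4*0.25 = 2.5
```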

#### Flashcard 1744294251788

Tags
#lebesgue-integration
Question

To assign a value to [...], the only reasonable choice is to set:

the integral of the indicator function 1S of a measurable set S consistent with the given measure μ

Notice that the result may be equal to +∞ , unless μ is a finite measure.
Trick: just read the expression from left to right

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
To assign a value to the integral of the indicator function 1 S of a measurable set S consistent with the given measure μ, the only reasonable choice is to set: Notice that the result may be equal to +∞ , unless μ is a finite measure.

#### Original toplevel document

Lebesgue integration - Wikipedia
x ) {\displaystyle \int _{E}f\,\mathrm {d} \mu =\int _{E}f\left(x\right)\,\mathrm {d} \mu \left(x\right)} for measurable real-valued functions f defined on E in stages: Indicator functions: <span>To assign a value to the integral of the indicator function 1 S of a measurable set S consistent with the given measure μ, the only reasonable choice is to set: ∫ 1 S d μ = μ ( S ) . {\displaystyle \int 1_{S}\,\mathrm {d} \mu =\mu (S).} Notice that the result may be equal to +∞, unless μ is a finite measure. Simple functions: A finite linear combination of indicator functions ∑ k a

#### Flashcard 1744295824652

Tags
#lebesgue-integration
Question

To assign a value to the integral of the indicator function 1S of a measurable set S consistent with the given measure μ, the only reasonable choice is to set [...]

Notice that the result may be equal to +∞ , unless μ is a finite measure.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
To assign a value to the integral of the indicator function 1 S of a measurable set S consistent with the given measure μ, the only reasonable choice is to set: Notice that the result may be equal to +∞ , unless μ is a finite measure.

#### Original toplevel document

Lebesgue integration - Wikipedia
x ) {\displaystyle \int _{E}f\,\mathrm {d} \mu =\int _{E}f\left(x\right)\,\mathrm {d} \mu \left(x\right)} for measurable real-valued functions f defined on E in stages: Indicator functions: <span>To assign a value to the integral of the indicator function 1 S of a measurable set S consistent with the given measure μ, the only reasonable choice is to set: ∫ 1 S d μ = μ ( S ) . {\displaystyle \int 1_{S}\,\mathrm {d} \mu =\mu (S).} Notice that the result may be equal to +∞, unless μ is a finite measure. Simple functions: A finite linear combination of indicator functions ∑ k a

#### Annotation 1748729466124

 #topology a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods.

Topological space - Wikipedia
n>Topological space - Wikipedia Topological space From Wikipedia, the free encyclopedia Jump to: navigation, search In topology and related branches of mathematics, a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, c

#### Annotation 1748731301132

 #topology The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence.

Topological space - Wikipedia
, search In topology and related branches of mathematics, a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. <span>The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence. [1] Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. Being so general, topological spaces are a centra

#### Flashcard 1748733660428

Tags
#topology
Question
a [...] may be defined by a set of points, neighbourhoods, and axioms.
topological space

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods.

#### Original toplevel document

Topological space - Wikipedia
n>Topological space - Wikipedia Topological space From Wikipedia, the free encyclopedia Jump to: navigation, search In topology and related branches of mathematics, a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, c

#### Annotation 1748751224076

 #probability-theory A random variable is defined as a function that maps outcomes to numerical quantities (labels)

Random variable - Wikipedia
In the case of the coin, there are only two possible outcomes, namely heads or tails. Since one of these outcomes must occur, either the event that the coin lands heads or the event that the coin lands tails must have non-zero probability. <span>A random variable is defined as a function that maps outcomes to numerical quantities (labels), typically real numbers. In this sense, it is a procedure for assigning a numerical quantity to each physical outcome, and, contrary to its name, this procedure itself is neither random
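
A tiny sketch of this idea (mine, not from the article): the coin's outcome space mapped deterministically to the labels 1 and 0:

```python
# A random variable as a plain function from outcomes to numerical labels.
# The mapping itself is neither random nor a variable; randomness lives
# in which outcome occurs.
X = {"heads": 1, "tails": 0}

outcomes = ["heads", "tails", "heads"]
values = [X[w] for w in outcomes]
print(values)  # [1, 0, 1]
```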

#### Flashcard 1748753059084

Tags
#probability-theory
Question
A [...] is defined as a function that maps outcomes to numerical quantities (labels)
random variable

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A random variable is defined as a function that maps outcomes to numerical quantities (labels)

#### Original toplevel document

Random variable - Wikipedia
In the case of the coin, there are only two possible outcomes, namely heads or tails. Since one of these outcomes must occur, either the event that the coin lands heads or the event that the coin lands tails must have non-zero probability. <span>A random variable is defined as a function that maps outcomes to numerical quantities (labels), typically real numbers. In this sense, it is a procedure for assigning a numerical quantity to each physical outcome, and, contrary to its name, this procedure itself is neither random

#### Flashcard 1748754631948

Tags
#probability-theory
Question
A random variable is defined as a function that maps outcomes to [...]
numerical quantities (labels)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A random variable is defined as a function that maps outcomes to numerical quantities (labels)

#### Original toplevel document

Random variable - Wikipedia
In the case of the coin, there are only two possible outcomes, namely heads or tails. Since one of these outcomes must occur, either the event that the coin lands heads or the event that the coin lands tails must have non-zero probability. <span>A random variable is defined as a function that maps outcomes to numerical quantities (labels), typically real numbers. In this sense, it is a procedure for assigning a numerical quantity to each physical outcome, and, contrary to its name, this procedure itself is neither random

#### Flashcard 1748776389900

Tags
#measure-theory
Question
[...] means that the support of the measure forms a compact set
the measure has compact support

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
You often see written “the measure has compact support” to note that the support of the measure forms a compact (=closed and bounded) set

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1749100399884

Tags
#topology
Question
the structure of an inner product allows [...] to be measured.
length and angle

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured.

#### Original toplevel document

Hilbert space - Wikipedia
e state of a vibrating string can be modeled as a point in a Hilbert space. The decomposition of a vibrating string into its vibrations in distinct overtones is given by the projection of the point onto the coordinate axes in the space. <span>The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces. The earliest Hilbert spaces were studied from this point o
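
A sketch of how an inner product induces length and angle (my own, assuming the standard inner product on R^n; the names `inner`, `length`, `angle` are illustrative):

```python
import math

def inner(u, v):
    # standard inner product on R^n
    return sum(x * y for x, y in zip(u, v))

def length(u):
    # length induced by the inner product: ||u|| = sqrt(<u, u>)
    return math.sqrt(inner(u, u))

def angle(u, v):
    # angle recovered from <u, v> = ||u|| ||v|| cos(theta)
    return math.acos(inner(u, v) / (length(u) * length(v)))

u, v = (1, 0), (1, 1)
print(length(v), math.degrees(angle(u, v)))  # ~1.414, ~45 degrees
```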

#### Flashcard 1749109574924

Tags
#hilbert-space
Question
Hilbert spaces are complete: there are [...] to allow the techniques of calculus to be used.
enough limits in the space

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used.

#### Original toplevel document

Hilbert space - Wikipedia
e state of a vibrating string can be modeled as a point in a Hilbert space. The decomposition of a vibrating string into its vibrations in distinct overtones is given by the projection of the point onto the coordinate axes in the space. <span>The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces. The earliest Hilbert spaces were studied from this point o

#### Annotation 1753267178764

 #incremental-reading The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention. You will ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable or higher than those typical of traditional book reading.

ng and 10% of your time on adding most important findings to SuperMemo, your reading speed will actually decline only by some 10%, while the retention of the most important pieces will be as high as programmed in SuperMemo (up to 99%). <span>The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention. You will ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable or higher than those typical of traditional book reading. It is worth noting that the learning speed limit in high-retention learning is imposed by your memory. If one-book-per-year sounds like a major disappointment, the roots of this lay

#### Annotation 1753270062348

 #incremental-reading The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention.

#### Parent (intermediate) annotation

Open it
The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention. You will ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable or higher than those typical of traditio

#### Original toplevel document

ng and 10% of your time on adding most important findings to SuperMemo, your reading speed will actually decline only by some 10%, while the retention of the most important pieces will be as high as programmed in SuperMemo (up to 99%). <span>The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention. You will ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable or higher than those typical of traditional book reading. It is worth noting that the learning speed limit in high-retention learning is imposed by your memory. If one-book-per-year sounds like a major disappointment, the roots of this lay

#### Annotation 1753271635212

 #incremental-reading With incremental reading, you ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable to traditional book reading.

#### Parent (intermediate) annotation

Open it
The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention. You will ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable or higher than those typical of traditional book reading.

#### Original toplevel document

ng and 10% of your time on adding most important findings to SuperMemo, your reading speed will actually decline only by some 10%, while the retention of the most important pieces will be as high as programmed in SuperMemo (up to 99%). <span>The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention. You will ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable or higher than those typical of traditional book reading. It is worth noting that the learning speed limit in high-retention learning is imposed by your memory. If one-book-per-year sounds like a major disappointment, the roots of this lay

#### Flashcard 1753273208076

Tags
Question
incremental reading helps balance [...] and [...]
speed and retention

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention.

#### Original toplevel document

ng and 10% of your time on adding most important findings to SuperMemo, your reading speed will actually decline only by some 10%, while the retention of the most important pieces will be as high as programmed in SuperMemo (up to 99%). <span>The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention. You will ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable or higher than those typical of traditional book reading. It is worth noting that the learning speed limit in high-retention learning is imposed by your memory. If one-book-per-year sounds like a major disappointment, the roots of this lay

#### Flashcard 1753276353804

Tags
Question
With incremental reading, you ensure high-retention of [...]
the most important pieces of text

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
With incremental reading, you ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable to those typical of traditional book reading.

#### Original toplevel document

ng and 10% of your time on adding most important findings to SuperMemo, your reading speed will actually decline only by some 10%, while the retention of the most important pieces will be as high as programmed in SuperMemo (up to 99%). <span>The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention. You will ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable or higher than those typical of traditional book reading. It is worth noting that the learning speed limit in high-retention learning is imposed by your memory. If one-book-per-year sounds like a major disappointment, the roots of this lay

#### Flashcard 1753278713100

Tags
Question
With incremental reading, the majority of time should still be spent on reading at [...].
normal speed

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
With incremental reading, you ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable to traditional book reading.

#### Original toplevel document

ng and 10% of your time on adding most important findings to SuperMemo, your reading speed will actually decline only by some 10%, while the retention of the most important pieces will be as high as programmed in SuperMemo (up to 99%). <span>The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention. You will ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable or higher than those typical of traditional book reading. It is worth noting that the learning speed limit in high-retention learning is imposed by your memory. If one-book-per-year sounds like a major disappointment, the roots of this lay

#### Flashcard 1753280285964

Tags
Question
With incremental reading, [...] of time will be spent reading at speeds comparable to traditional book reading.
a large proportion

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
With incremental reading, you ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable to traditional book reading.

#### Original toplevel document

ng and 10% of your time on adding most important findings to SuperMemo, your reading speed will actually decline only by some 10%, while the retention of the most important pieces will be as high as programmed in SuperMemo (up to 99%). <span>The concept of incremental reading introduced in SuperMemo 2000 provides you with a precise tool for finding the optimum balance between speed and retention. You will ensure high-retention of the most important pieces of text, while a large proportion of time will be spent reading at speeds comparable or higher than those typical of traditional book reading. It is worth noting that the learning speed limit in high-retention learning is imposed by your memory. If one-book-per-year sounds like a major disappointment, the roots of this lay

#### Annotation 1753281858828

 #calculus Euler's formula establishes the fundamental relationship between the trigonometric functions and the complex exponential function.

#### Parent (intermediate) annotation

Open it
Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that for any real number x e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where e is the base of the natural logarithm, i is the imagina

#### Original toplevel document

Euler's formula - Wikipedia
s formula half-lives exponential growth and decay Defining e proof that e is irrational representations of e Lindemann–Weierstrass theorem People John Napier Leonhard Euler Related topics Schanuel's conjecture v t e <span>Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that for any real number x e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively, with the argument x given in radians. This complex exponential function is sometimes denoted cis x ("cosine plus i sine"). The formula is still valid if x is a complex number, and so some authors refer to the more
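
A quick numeric check of the formula (my own sketch, using Python's `cmath`):

```python
import cmath
import math

# Euler's formula: e^{ix} = cos x + i sin x, for x in radians
for x in (0.0, 1.0, math.pi / 3, math.pi):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12

print(cmath.exp(1j * math.pi))  # ~ -1+0j (Euler's identity)
```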

#### Annotation 1753283431692

 #calculus Euler's formula states that for any real number x, e^(ix) = cos x + i sin x, with the argument x given in radians.

#### Parent (intermediate) annotation

Open it
span> Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that for any real number x e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively, with the argument x given in radians. <span><body><html>

#### Original toplevel document

Euler's formula - Wikipedia
s formula half-lives exponential growth and decay Defining e proof that e is irrational representations of e Lindemann–Weierstrass theorem People John Napier Leonhard Euler Related topics Schanuel's conjecture v t e <span>Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that for any real number x e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively, with the argument x given in radians. This complex exponential function is sometimes denoted cis x ("cosine plus i sine"). The formula is still valid if x is a complex number, and so some authors refer to the more

#### Flashcard 1753285790988

Tags
#calculus
Question
Euler's formula establishes the fundamental relationship between the trigonometric functions and [...]
the complex exponential function

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Euler's formula establishes the fundamental relationship between the trigonometric functions and the complex exponential function.

#### Original toplevel document

Euler's formula - Wikipedia
s formula half-lives exponential growth and decay Defining e proof that e is irrational representations of e Lindemann–Weierstrass theorem People John Napier Leonhard Euler Related topics Schanuel's conjecture v t e <span>Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that for any real number x e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively, with the argument x given in radians. This complex exponential function is sometimes denoted cis x ("cosine plus i sine"). The formula is still valid if x is a complex number, and so some authors refer to the more

#### Flashcard 1753288150284

Tags
#calculus
Question

Euler's formula states that [...], with the argument x given in radians
for any real number x, e^(ix) = cos x + i sin x

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Euler's formula states that with the argument x given in radians.

#### Original toplevel document

Euler's formula - Wikipedia
s formula half-lives exponential growth and decay Defining e proof that e is irrational representations of e Lindemann–Weierstrass theorem People John Napier Leonhard Euler Related topics Schanuel's conjecture v t e <span>Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that for any real number x e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively, with the argument x given in radians. This complex exponential function is sometimes denoted cis x ("cosine plus i sine"). The formula is still valid if x is a complex number, and so some authors refer to the more

#### Annotation 1753323539724

 #topology The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined.

Open set - Wikipedia
set can be open (called the discrete topology), or no set can be open but the space itself and the empty set (the indiscrete topology). In practice, however, open sets are usually chosen to be similar to the open intervals of the real line. <span>The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness, which use notions of nearness, can be defined using these open sets. Each choice of o

#### Flashcard 1753325899020

Tags
#topology
Question
The notion of an [...] provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined.
open set

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined.

#### Original toplevel document

Open set - Wikipedia
set can be open (called the discrete topology), or no set can be open but the space itself and the empty set (the indiscrete topology). In practice, however, open sets are usually chosen to be similar to the open intervals of the real line. <span>The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness, which use notions of nearness, can be defined using these open sets. Each choice of o

#### Flashcard 1753327471884

Tags
#topology
Question
The notion of an open set provides a fundamental way to speak of [...] in a topological space, without explicitly having a concept of distance defined.
nearness of points

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined.

#### Original toplevel document

Open set - Wikipedia
set can be open (called the discrete topology), or no set can be open but the space itself and the empty set (the indiscrete topology). In practice, however, open sets are usually chosen to be similar to the open intervals of the real line. <span>The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness, which use notions of nearness, can be defined using these open sets. Each choice of o

#### Flashcard 1753329044748

Tags
#topology
Question
The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of [...] defined.
distance

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined.

#### Original toplevel document

Open set - Wikipedia
set can be open (called the discrete topology), or no set can be open but the space itself and the empty set (the indiscrete topology). In practice, however, open sets are usually chosen to be similar to the open intervals of the real line. <span>The notion of an open set provides a fundamental way to speak of nearness of points in a topological space, without explicitly having a concept of distance defined. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness, which use notions of nearness, can be defined using these open sets. Each choice of o

#### Annotation 1753334549772

 #topology The definition of a topological space relies only upon set theory

#### Parent (intermediate) annotation

Open it
The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence.

#### Original toplevel document

Topological space - Wikipedia
, search In topology and related branches of mathematics, a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. <span>The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence. [1] Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. Being so general, topological spaces are a centra

#### Flashcard 1753336122636

Tags
#topology
Question
The definition of a topological space relies only upon [...]
set theory

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The definition of a topological space relies only upon set theory

#### Original toplevel document

Topological space - Wikipedia
, search In topology and related branches of mathematics, a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. <span>The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence. [1] Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. Being so general, topological spaces are a centra

#### Flashcard 1754599394572

Tags
#topology
Question
[...] is the most general notion of a mathematical space
topological space

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence.

#### Original toplevel document

Topological space - Wikipedia
, search In topology and related branches of mathematics, a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. <span>The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence. [1] Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. Being so general, topological spaces are a centra

#### Flashcard 1754601753868

Tags
#topology
Question
topological space allows for the definition of concepts such as [...CCC...] .

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence.

#### Original toplevel document

Topological space - Wikipedia
, search In topology and related branches of mathematics, a topological space may be defined as a set of points, along with a set of neighbourhoods for each point, satisfying a set of axioms relating points and neighbourhoods. <span>The definition of a topological space relies only upon set theory and is the most general notion of a mathematical space that allows for the definition of concepts such as continuity, connectedness, and convergence. [1] Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. Being so general, topological spaces are a centra

#### Annotation 1754604113164

 #topology Each choice of open sets for a space is called a topology.

Open set - Wikipedia
pological space, without explicitly having a concept of distance defined. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness, which use notions of nearness, can be defined using these open sets. <span>Each choice of open sets for a space is called a topology. Although open sets and the topologies that they comprise are of central importance in point-set topology, they are also used as an organizational tool in other important branches of mat

#### Annotation 1754607258892

 #topology In particular, sets of the form (-ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use sets to describe points close to x.

Open set - Wikipedia
ese points approximate x to a greater degree of accuracy compared to when ε = 1. The previous discussion shows, for the case x = 0, that one may approximate x to higher and higher degrees of accuracy by defining ε to be smaller and smaller. <span>In particular, sets of the form (-ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use sets to describe points close to x. This innovative idea has far-reaching consequences; in particular, by defining different collections of sets containing 0 (distinct from the sets (-ε, ε)), one may find different result

#### Annotation 1754608831756

 #topology In general, one refers to the family of sets containing 0, used to approximate 0, as a neighborhood basis; a member of this neighborhood basis is referred to as an open set.

Open set - Wikipedia
e find that in some sense, every real number is distance 0 away from 0. It may help in this case to think of the measure as being a binary condition, all things in R are equally close to 0, while any item that is not in R is not close to 0. <span>In general, one refers to the family of sets containing 0, used to approximate 0, as a neighborhood basis; a member of this neighborhood basis is referred to as an open set. In fact, one may generalize these notions to an arbitrary set (X); rather than just the real numbers. In this case, given a point (x) of that set, one may define a collection of sets &q

#### Annotation 1754610404620

 #topology When defining nearness between points with open balls, the measure of distance becomes a binary condition

Open set - Wikipedia
ot;measuring distance", all points are close to 0 since there is only one possible degree of accuracy one may achieve in approximating 0: being a member of R. Thus, we find that in some sense, every real number is distance 0 away from 0. <span>It may help in this case to think of the measure as being a binary condition, all things in R are equally close to 0, while any item that is not in R is not close to 0. In general, one refers to the family of sets containing 0, used to approximate 0, as a neig

#### Flashcard 1754613550348

Tags
#topology
Question
When defining nearness between points with open balls, the measure of distance becomes a [...]
binary condition

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
When defining nearness between points with open balls, the measure of distance becomes a binary condition

#### Original toplevel document

Open set - Wikipedia
ot;measuring distance", all points are close to 0 since there is only one possible degree of accuracy one may achieve in approximating 0: being a member of R. Thus, we find that in some sense, every real number is distance 0 away from 0. <span>It may help in this case to think of the measure as being a binary condition, all things in R are equally close to 0, while any item that is not in R is not close to 0. In general, one refers to the family of sets containing 0, used to approximate 0, as a neig

#### Flashcard 1754615123212

Tags
#topology
Question
When defining [...] with open balls, the measure of distance becomes a binary condition
nearness between points

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
When defining nearness between points with open balls, the measure of distance becomes a binary condition

#### Original toplevel document

Open set - Wikipedia
ot;measuring distance", all points are close to 0 since there is only one possible degree of accuracy one may achieve in approximating 0: being a member of R. Thus, we find that in some sense, every real number is distance 0 away from 0. <span>It may help in this case to think of the measure as being a binary condition, all things in R are equally close to 0, while any item that is not in R is not close to 0. In general, one refers to the family of sets containing 0, used to approximate 0, as a neig

#### Annotation 1754616696076

 #topology Sets that can be constructed as the intersection of countably many open sets are denoted Gδ sets.

Open set - Wikipedia
u } the open sets. Note that infinite intersections of open sets need not be open. For example, the intersection of all intervals of the form (−1/n, 1/n), where n is a positive integer, is the set {0} which is not open in the real line. <span>Sets that can be constructed as the intersection of countably many open sets are denoted G δ sets. The topological definition of open sets generalizes the metric space definition: If one begins with a metric space and defines open sets as before, then the family of all open sets is

#### Annotation 1754618268940

 #topology infinite intersections of open sets need not be open.

Open set - Wikipedia
τ {\displaystyle \tau } is in τ {\displaystyle \tau } ) We call the sets in τ {\displaystyle \tau } the open sets. Note that <span>infinite intersections of open sets need not be open. For example, the intersection of all intervals of the form (−1/n, 1/n), where n is a positive integer, is the set {0} which is not open in the real line. Sets that can be constructed as
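The (−1/n, 1/n) example in the snippet can be checked mechanically. The sketch below (the function name in_all_intervals is just an illustrative choice) confirms that 0 lies in every such interval, while any x ≠ 0 drops out once n exceeds 1/|x|:

```python
import math

# Membership of x in the finite intersection of (-1/n, 1/n) for n = 1..N.
# The full (infinite) intersection is {0}: only 0 survives every cut.
def in_all_intervals(x: float, N: int) -> bool:
    return all(-1.0 / n < x < 1.0 / n for n in range(1, N + 1))

assert in_all_intervals(0.0, 10_000)         # 0 is in every interval
x = 0.001
n_excluding = math.ceil(1 / abs(x)) + 1      # first n with 1/n < |x|
assert not in_all_intervals(x, n_excluding)  # any x != 0 is eventually excluded
```

Since {0} contains no interval around 0, the intersection is not open, matching the example in the extract.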

#### Flashcard 1754619841804

Tags
#topology
Question
[...] of open sets need not be open.
infinite intersections

The axiom of sigma-algebra uses infinite (countable) intersections.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
infinite intersections of open sets need not be open.

#### Original toplevel document

Open set - Wikipedia
τ {\displaystyle \tau } is in τ {\displaystyle \tau } ) We call the sets in τ {\displaystyle \tau } the open sets. Note that <span>infinite intersections of open sets need not be open. For example, the intersection of all intervals of the form (−1/n, 1/n), where n is a positive integer, is the set {0} which is not open in the real line. Sets that can be constructed as

#### Flashcard 1754629278988

Tags
#topology
Question
Each choice of open sets for a space is called a [...].
topology

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Each choice of open sets for a space is called a topology.

#### Original toplevel document

Open set - Wikipedia
pological space, without explicitly having a concept of distance defined. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness, which use notions of nearness, can be defined using these open sets. <span>Each choice of open sets for a space is called a topology. Although open sets and the topologies that they comprise are of central importance in point-set topology, they are also used as an organizational tool in other important branches of mat

#### Flashcard 1754630851852

Tags
#topology
Question
Each [...] for a space is called a topology.
choice of open sets

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Each choice of open sets for a space is called a topology.

#### Original toplevel document

Open set - Wikipedia
pological space, without explicitly having a concept of distance defined. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness, which use notions of nearness, can be defined using these open sets. <span>Each choice of open sets for a space is called a topology. Although open sets and the topologies that they comprise are of central importance in point-set topology, they are also used as an organizational tool in other important branches of mat

#### Annotation 1754632424716

 #topology A complement of an open set (relative to the space that the topology is defined on) is called a closed set.

Open set - Wikipedia
l space. There are, however, topological spaces that are not metric spaces. Properties The union of any number of open sets, or infinitely many open sets, is open. [2] The intersection of a finite number of open sets is open. [2] <span>A complement of an open set (relative to the space that the topology is defined on) is called a closed set. A set may be both open and closed (a clopen set). The empty set and the full space are examples of sets that are both open and closed. [3] Uses Open sets have a fundamental im

#### Flashcard 1755466304780

Tags
#topology
Question
[...] is called a closed set.
A complement of an open set

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A complement of an open set (relative to the space that the topology is defined on) is called a closed set.

#### Original toplevel document

Open set - Wikipedia
l space. There are, however, topological spaces that are not metric spaces. Properties The union of any number of open sets, or infinitely many open sets, is open. [2] The intersection of a finite number of open sets is open. [2] <span>A complement of an open set (relative to the space that the topology is defined on) is called a closed set. A set may be both open and closed (a clopen set). The empty set and the full space are examples of sets that are both open and closed. [3] Uses Open sets have a fundamental im

#### Flashcard 1755467877644

Tags
#topology
Question
A complement of an open set is called a [...].
closed set

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A complement of an open set (relative to the space that the topology is defined on) is called a closed set.

#### Original toplevel document

Open set - Wikipedia
l space. There are, however, topological spaces that are not metric spaces. Properties The union of any number of open sets, or infinitely many open sets, is open. [2] The intersection of a finite number of open sets is open. [2] <span>A complement of an open set (relative to the space that the topology is defined on) is called a closed set. A set may be both open and closed (a clopen set). The empty set and the full space are examples of sets that are both open and closed. [3] Uses Open sets have a fundamental im

#### Annotation 1755469450508

 #topology U is open if every point in U has a neighborhood contained in U.

Open set - Wikipedia
ntained in U. Metric spaces A subset U of a metric space (M, d) is called open if, given any point x in U, there exists a real number ε > 0 such that, given any point y in M with d(x, y) < ε, y also belongs to U. Equivalently, <span>U is open if every point in U has a neighborhood contained in U. This generalizes the Euclidean space example, since Euclidean space with the Euclidean distance is a metric space. Topological spaces In general topological spaces, the open
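The neighborhood criterion is easy to state concretely for intervals on the real line. The sketch below uses a hypothetical helper, interior_radius, returning the largest ε with (x − ε, x + ε) ⊆ (a, b); the interval U = (a, b) is open precisely because this radius is positive for every x in U:

```python
# Largest eps such that the ball (x - eps, x + eps) stays inside (a, b);
# returns 0.0 when no such ball exists (x on or outside the boundary).
def interior_radius(x: float, a: float, b: float) -> float:
    return max(0.0, min(x - a, b - x))

assert interior_radius(0.5, 0.0, 1.0) > 0    # interior point of (0, 1)
assert interior_radius(0.001, 0.0, 1.0) > 0  # still interior; eps just shrinks
assert interior_radius(0.0, 0.0, 1.0) == 0   # boundary point: no such ball
```

For a half-open set like [0, 1), the point 0 fails this test, which is why that set is not open in the real line.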

#### Flashcard 1755492257036

Tags
#optimization
Question
Generally, unless both [...] are convex there may be several local minima
the objective function and the feasible region

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima

#### Original toplevel document

Mathematical optimization - Wikipedia
energy functional. A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. <span>Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima. A local minimum x* is defined as a point for which there exists some δ > 0 such that for all x where ‖ x −
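A minimal illustration of this point, assuming nothing beyond plain gradient descent (a toy method here, not any particular commercial solver): on the nonconvex function f(x) = x⁴ − 3x² + x, the same algorithm reaches different minima from different starting points, and by itself has no way to tell which one is global.

```python
# Plain gradient descent on the nonconvex f(x) = x**4 - 3*x**2 + x,
# which has a global minimum near x ≈ -1.30 and a local one near x ≈ 1.13.
def descend(x: float, lr: float = 0.01, steps: int = 5_000) -> float:
    for _ in range(steps):
        x -= lr * (4 * x**3 - 6 * x + 1)  # derivative f'(x)
    return x

left = descend(-2.0)   # lands in the basin of the global minimum
right = descend(2.0)   # lands in the basin of a merely local minimum
assert abs(left - right) > 1.0  # same method, two different "answers"
```

Distinguishing the two requires global information (e.g. comparing f at both points), which local descent alone does not provide.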

#### Flashcard 1755496189196

Tags
#optimization
Question
most algorithms for solving nonconvex problems can't distinguish between [...]
local and global optima

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A large number of algorithms proposed for solving nonconvex problems—including the majority of commercially available solvers—are not capable of making a distinction between locally optimal solutions and globally optimal solutions

#### Original toplevel document

Mathematical optimization - Wikipedia
onvex problem, if there is a local minimum that is interior (not on the edge of the set of feasible points), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima. <span>A large number of algorithms proposed for solving nonconvex problems—including the majority of commercially available solvers—are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the develo

#### Flashcard 1756368080140

Tags
#calculus
Question
[...] establishes the fundamental relationship between the trigonometric functions and the complex exponential function.
Euler's formula

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Euler's formula establishes the fundamental relationship between the trigonometric functions and the complex exponential function.

#### Original toplevel document

Euler's formula - Wikipedia
s formula half-lives exponential growth and decay Defining e proof that e is irrational representations of e Lindemann–Weierstrass theorem People John Napier Leonhard Euler Related topics Schanuel's conjecture v t e <span>Euler's formula, named after Leonhard Euler, is a mathematical formula in complex analysis that establishes the fundamental relationship between the trigonometric functions and the complex exponential function. Euler's formula states that for any real number x e i x = cos ⁡ x + i sin ⁡ x , {\displaystyle e^{ix}=\cos x+i\sin x,} where e is the base of the natural logarithm, i is the imaginary unit, and cos and sin are the trigonometric functions cosine and sine respectively, with the argument x given in radians. This complex exponential function is sometimes denoted cis x ("cosine plus i sine"). The formula is still valid if x is a complex number, and so some authors refer to the more

#### Annotation 1758189980940

 #differential-equations ODEs that are linear differential equations have exact closed-form solutions that can be added and multiplied by coefficients.

Ordinary differential equation - Wikipedia
quation containing one or more functions of one independent variable and its derivatives. The term ordinary is used in contrast with the term partial differential equation which may be with respect to more than one independent variable. [1] <span>ODEs that are linear differential equations have exact closed-form solutions that can be added and multiplied by coefficients. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, e
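The additivity of solutions can be seen on a toy linear ODE, y′ = −2y, whose exact solutions are C·e^(−2t). The sketch below integrates with a simple explicit Euler step (a hypothetical setup, not from the article); because both the ODE and the scheme are linear, a linear combination of initial conditions yields the corresponding linear combination of solutions:

```python
# Explicit Euler integration of the linear ODE y' = -2*y from y(0) = y0.
def euler(y0: float, dt: float = 1e-3, steps: int = 1000) -> float:
    y = y0
    for _ in range(steps):
        y += dt * (-2.0 * y)
    return y

a, b = 2.0, 5.0
combined = euler(a * 1.0 + b * 1.0)          # solve once from the combined start
separate = a * euler(1.0) + b * euler(1.0)   # combine two separate solutions
assert abs(combined - separate) < 1e-9       # superposition holds
```

For a nonlinear ODE such as y′ = −2y², this identity fails, which is the contrast the extract draws.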

#### Flashcard 1758192078092

Tags
#differential-equations
Question
ODEs that are linear differential equations have [...] that can be added and multiplied by coefficients.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
ODEs that are linear differential equations have exact closed-form solutions that can be added and multiplied by coefficients.

#### Original toplevel document

Ordinary differential equation - Wikipedia
quation containing one or more functions of one independent variable and its derivatives. The term ordinary is used in contrast with the term partial differential equation which may be with respect to more than one independent variable. [1] <span>ODEs that are linear differential equations have exact closed-form solutions that can be added and multiplied by coefficients. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, e

#### Flashcard 1758196534540

Tags
#differential-equations
Question
ODEs that are [...] have exact closed-form solutions that can be added and multiplied by coefficients.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
ODEs that are linear differential equations have exact closed-form solutions that can be added and multiplied by coefficients.

#### Original toplevel document

Ordinary differential equation - Wikipedia
quation containing one or more functions of one independent variable and its derivatives. The term ordinary is used in contrast with the term partial differential equation which may be with respect to more than one independent variable. [1] <span>ODEs that are linear differential equations have exact closed-form solutions that can be added and multiplied by coefficients. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, e

#### Annotation 1758198107404

 #differential-equations exact and analytic solutions of nonlinear ODEs are usually in series or integral form.

Ordinary differential equation - Wikipedia
olutions that can be added and multiplied by coefficients. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, <span>exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analyti

#### Flashcard 1758200466700

Tags
#differential-equations
Question
exact and analytic solutions of nonlinear ODEs are usually in [...] form.
series or integral

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
exact and analytic solutions of nonlinear ODEs are usually in series or integral form.

#### Original toplevel document

Ordinary differential equation - Wikipedia
olutions that can be added and multiplied by coefficients. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form: Instead, <span>exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analyti

#### Flashcard 1758278847756

Tags
#calculus-of-variations
Question
the necessary condition of functional extremum is [...description...]
functional derivative equals zero.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
the necessary condition of extremum is that the functional derivative equals zero. The weak formulation of this necessary condition is an integral against an arbitrary function δf.

#### Original toplevel document

Fundamental lemma of calculus of variations - Wikipedia
pedia Jump to: navigation, search In mathematics, specifically in the calculus of variations, a variation δf of a function f can be concentrated on an arbitrarily small interval, but not a single point. <span>Accordingly, the necessary condition of extremum (functional derivative equal zero) appears in a weak formulation (variational form) integrated with an arbitrary function δf. The fundamental lemma of the calculus of variations is typically used to transform this weak formulation into the strong formulation (differential equation), free of the integration with arbitrary function. The proof usually exploits the possibility to choose δf concentrated on an interval on which f keeps sign (positive or negative). Several versions of the lemma are in use. Basic version
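Written out for a functional J[f] = ∫ F(x, f, f′) dx (a standard setting, not spelled out in the snippet), the weak condition and the strong Euler–Lagrange form it yields via the fundamental lemma are:

```latex
\delta J = \int_a^b \left( \frac{\partial F}{\partial f}
  - \frac{d}{dx}\,\frac{\partial F}{\partial f'} \right) \delta f \, dx = 0
\quad \text{for all admissible } \delta f
\quad\Longrightarrow\quad
\frac{\partial F}{\partial f} - \frac{d}{dx}\,\frac{\partial F}{\partial f'} = 0 .
```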

#### Annotation 1758466805004

 #topology Being so general, topological spaces are a central unifying notion and appear in virtually every branch of modern mathematics.

Topological space - Wikipedia
tical space that allows for the definition of concepts such as continuity, connectedness, and convergence. [1] Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. <span>Being so general, topological spaces are a central unifying notion and appear in virtually every branch of modern mathematics. The branch of mathematics that studies topological spaces in their own right is called point-set topology or general topology. Contents [hide] 1 History 2 Definition 2.1 De

#### Flashcard 1758469688588

Tags
#topology
Question
Being so general, [...] are a central unifying notion and appear in virtually every branch of modern mathematics.
topological spaces

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Being so general, topological spaces are a central unifying notion and appear in virtually every branch of modern mathematics.

#### Original toplevel document

Topological space - Wikipedia
tical space that allows for the definition of concepts such as continuity, connectedness, and convergence. [1] Other spaces, such as manifolds and metric spaces, are specializations of topological spaces with extra structures or constraints. <span>Being so general, topological spaces are a central unifying notion and appear in virtually every branch of modern mathematics. The branch of mathematics that studies topological spaces in their own right is called point-set topology or general topology. Contents [hide] 1 History 2 Definition 2.1 De

#### Annotation 1758472572172

 #topology In topology, a branch of mathematics, a topological manifold is a topological space (which may also be a separated space) which locally resembles real n-dimensional space in a sense defined below.

Topological manifold - Wikipedia
Topological manifold - Wikipedia Topological manifold From Wikipedia, the free encyclopedia Jump to: navigation, search In topology, a branch of mathematics, a topological manifold is a topological space (which may also be a separated space) which locally resembles real n-dimensional space in a sense defined below. Topological manifolds form an important class of topological spaces with applications throughout mathematics. A manifold can mean a topological manifold, or more frequently, a topolog

#### Flashcard 1758474931468

Tags
#topology
Question
a [...] is a topological space which locally resembles real n-dimensional space in some sense
topological manifold

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In topology, a branch of mathematics, a topological manifold is a topological space (which may also be a separated space) which locally resembles real n-dimensional space in a sense defined below.

#### Original toplevel document

Topological manifold - Wikipedia
Topological manifold - Wikipedia Topological manifold From Wikipedia, the free encyclopedia Jump to: navigation, search In topology, a branch of mathematics, a topological manifold is a topological space (which may also be a separated space) which locally resembles real n-dimensional space in a sense defined below. Topological manifolds form an important class of topological spaces with applications throughout mathematics. A manifold can mean a topological manifold, or more frequently, a topolog

#### Annotation 1758491184396

 #topology A topological space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ. The elements of τ are called open sets and the collection τ is called a topology on X.

Topological space - Wikipedia
three-point set {1,2,3}. The bottom-left example is not a topology because the union of {2} and {3} [i.e. {2,3}] is missing; the bottom-right example is not a topology because the intersection of {1,2} and {2,3} [i.e. {2}], is missing. <span>A topological space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ. The elements of τ are called open sets and the collection τ is called a topology on X. Examples[edit source] Given X = {1, 2, 3, 4}, the collection τ = {{}, {1, 2, 3, 4}} of only the two subsets of X required by the axioms forms a topology of X, the trivial topology (
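
The three axioms can be checked mechanically for a finite set. Below is a minimal sketch (the helper name `is_topology` is ours, not from the article) that verifies the two non-examples from the excerpt above; for a finite collection, pairwise closure under union and intersection suffices by induction.

```python
from itertools import combinations

def is_topology(X, tau):
    """Check the three topology axioms for a finite set X and a
    candidate collection tau of frozensets (hypothetical helper)."""
    X = frozenset(X)
    tau = set(tau)
    # Axiom 1: the empty set and X itself belong to tau.
    if frozenset() not in tau or X not in tau:
        return False
    # Axioms 2 and 3: closure under union and finite intersection.
    # For a finite collection, checking all pairs suffices (by induction).
    for a, b in combinations(tau, 2):
        if a | b not in tau or a & b not in tau:
            return False
    return True

# The two non-examples from the excerpt, on X = {1, 2, 3}:
X = {1, 2, 3}
no_union = [frozenset(), frozenset(X), frozenset({2}), frozenset({3})]        # {2} ∪ {3} missing
no_inter = [frozenset(), frozenset(X), frozenset({1, 2}), frozenset({2, 3})]  # {1,2} ∩ {2,3} missing
print(is_topology(X, no_union))                        # False
print(is_topology(X, no_inter))                        # False
print(is_topology(X, [frozenset(), frozenset(X)]))     # True: the trivial topology
```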

#### Annotation 1758493281548

 #topology A topological space is an ordered pair (X, τ), where X is a set and τ is a topology of X

#### Parent (intermediate) annotation

Open it
A topological space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ.

#### Original toplevel document

Topological space - Wikipedia
three-point set {1,2,3}. The bottom-left example is not a topology because the union of {2} and {3} [i.e. {2,3}] is missing; the bottom-right example is not a topology because the intersection of {1,2} and {2,3} [i.e. {2}], is missing. <span>A topological space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ. The elements of τ are called open sets and the collection τ is called a topology on X. Examples[edit source] Given X = {1, 2, 3, 4}, the collection τ = {{}, {1, 2, 3, 4}} of only the two subsets of X required by the axioms forms a topology of X, the trivial topology (

#### Flashcard 1758495640844

Tags
#topology
Question
A [...] is an ordered pair (X, τ), where X is a set and τ is a topology of X
topological space

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A topological space is an ordered pair (X, τ), where X is a set and τ is a topology of X

#### Original toplevel document

Topological space - Wikipedia
three-point set {1,2,3}. The bottom-left example is not a topology because the union of {2} and {3} [i.e. {2,3}] is missing; the bottom-right example is not a topology because the intersection of {1,2} and {2,3} [i.e. {2}], is missing. <span>A topological space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ. The elements of τ are called open sets and the collection τ is called a topology on X. Examples[edit source] Given X = {1, 2, 3, 4}, the collection τ = {{}, {1, 2, 3, 4}} of only the two subsets of X required by the axioms forms a topology of X, the trivial topology (

#### Flashcard 1758497213708

Tags
#topology
Question
The elements of [...] are called open sets
A topology

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
ion of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ. <span>The elements of τ are called open sets and the collection τ is called a topology on X.

#### Original toplevel document

Topological space - Wikipedia
three-point set {1,2,3}. The bottom-left example is not a topology because the union of {2} and {3} [i.e. {2,3}] is missing; the bottom-right example is not a topology because the intersection of {1,2} and {2,3} [i.e. {2}], is missing. <span>A topological space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ. The elements of τ are called open sets and the collection τ is called a topology on X. Examples[edit source] Given X = {1, 2, 3, 4}, the collection τ = {{}, {1, 2, 3, 4}} of only the two subsets of X required by the axioms forms a topology of X, the trivial topology (

#### Flashcard 1758499835148

Tags
#topology
Question
A [...] is a collection of subsets of X satisfying certain axioms (inclusion, infinite union, finite intersection).
topology

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A topological space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ.

#### Original toplevel document

Topological space - Wikipedia
three-point set {1,2,3}. The bottom-left example is not a topology because the union of {2} and {3} [i.e. {2,3}] is missing; the bottom-right example is not a topology because the intersection of {1,2} and {2,3} [i.e. {2}], is missing. <span>A topological space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ. The elements of τ are called open sets and the collection τ is called a topology on X. Examples[edit source] Given X = {1, 2, 3, 4}, the collection τ = {{}, {1, 2, 3, 4}} of only the two subsets of X required by the axioms forms a topology of X, the trivial topology (

#### Annotation 1758504291596

 #topological-space A topological space X is said to be disconnected if it is the union of two disjoint nonempty open sets.

Connected space - Wikipedia
ected spaces 2 Examples 3 Path connectedness 4 Arc connectedness 5 Local connectedness 6 Set operations 7 Theorems 8 Graphs 9 Stronger forms of connectedness 10 See also 11 References 12 Further reading Formal definition[edit source] <span>A topological space X is said to be disconnected if it is the union of two disjoint nonempty open sets. Otherwise, X is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with it

#### Flashcard 1758506913036

Tags
#topological-space
Question
A topological space X is said to be [...] if it is the union of two disjoint nonempty open sets.
disconnected

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A topological space X is said to be disconnected if it is the union of two disjoint nonempty open sets.

#### Original toplevel document

Connected space - Wikipedia
ected spaces 2 Examples 3 Path connectedness 4 Arc connectedness 5 Local connectedness 6 Set operations 7 Theorems 8 Graphs 9 Stronger forms of connectedness 10 See also 11 References 12 Further reading Formal definition[edit source] <span>A topological space X is said to be disconnected if it is the union of two disjoint nonempty open sets. Otherwise, X is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty set (with it

#### Flashcard 1759925898508

Tags
#topology
Question
A topology must satisfy axioms of [...12...]
inclusion, closure under infinite union and finite intersection

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A topological space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ.

#### Original toplevel document

Topological space - Wikipedia
three-point set {1,2,3}. The bottom-left example is not a topology because the union of {2} and {3} [i.e. {2,3}] is missing; the bottom-right example is not a topology because the intersection of {1,2} and {2,3} [i.e. {2}], is missing. <span>A topological space is an ordered pair (X, τ), where X is a set and τ is a collection of subsets of X, satisfying the following axioms: [7] The empty set and X itself belong to τ. Any (finite or infinite) union of members of τ still belongs to τ. The intersection of any finite number of members of τ still belongs to τ. The elements of τ are called open sets and the collection τ is called a topology on X. Examples[edit source] Given X = {1, 2, 3, 4}, the collection τ = {{}, {1, 2, 3, 4}} of only the two subsets of X required by the axioms forms a topology of X, the trivial topology (

#### Annotation 1760008736012

 #stochastics In mathematics, the Kronecker delta is a function of two variables that equals 1 if the variables are equal, and 0 otherwise.

Kronecker delta - Wikipedia
pedia Kronecker delta From Wikipedia, the free encyclopedia Jump to: navigation, search Not to be confused with the Dirac delta function, nor with the Kronecker symbol. <span>In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise: δ i j = {
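
The definition translates directly into code; as a minimal sketch (the function name is ours):

```python
import numpy as np

def kronecker_delta(i, j):
    """Kronecker delta: 1 if the two variables are equal, 0 otherwise."""
    return 1 if i == j else 0

print(kronecker_delta(2, 2))  # 1
print(kronecker_delta(2, 3))  # 0

# Arranged as a matrix over indices i, j, the deltas form the identity matrix.
n = 3
D = np.array([[kronecker_delta(i, j) for j in range(n)] for i in range(n)])
print(np.array_equal(D, np.eye(n, dtype=int)))  # True
```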

#### Flashcard 1760011619596

Tags
#stochastics
Question

the Kronecker delta is a function of two variables that [...]

equals 1 if the variables are equal, and 0 otherwise

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In mathematics, the Kronecker delta is a function of two variables that equals 1 if the variables are equal, and 0 otherwise:

#### Original toplevel document

Kronecker delta - Wikipedia
pedia Kronecker delta From Wikipedia, the free encyclopedia Jump to: navigation, search Not to be confused with the Dirac delta function, nor with the Kronecker symbol. <span>In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise: δ i j = {

#### Flashcard 1760013192460

Tags
#stochastics
Question

the [...] is a function of two variables that equals 1 if the variables are equal, and 0 otherwise:

Kronecker delta

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In mathematics, the Kronecker delta is a function of two variables that equals 1 if the variables are equal, and 0 otherwise:

#### Original toplevel document

Kronecker delta - Wikipedia
pedia Kronecker delta From Wikipedia, the free encyclopedia Jump to: navigation, search Not to be confused with the Dirac delta function, nor with the Kronecker symbol. <span>In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise: δ i j = {

#### Flashcard 1767389662476

Tags
#topology
Question
A complement of an open set is always relative to [...]
a certain topology in a certain space

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A complement of an open set (relative to the space that the topology is defined on) is called a closed set.

#### Original toplevel document

Open set - Wikipedia
l space. There are, however, topological spaces that are not metric spaces. Properties The union of any number of open sets, or infinitely many open sets, is open. [2] The intersection of a finite number of open sets is open. [2] <span>A complement of an open set (relative to the space that the topology is defined on) is called a closed set. A set may be both open and closed (a clopen set). The empty set and the full space are examples of sets that are both open and closed. [3] Uses Open sets have a fundamental im

#### Flashcard 1782267383052

Tags
#kalman-filter
Question
Kalman filtering is known as [...]
linear quadratic estimation (LQE)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend

#### Original toplevel document

Kalman filter - Wikipedia
into account; P k ∣ k − 1 {\displaystyle P_{k\mid k-1}} is the corresponding uncertainty. <span>Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory. The Kalman filter has numerous applications in technology. A common application is for guidanc
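
As an illustrative sketch only (the random-walk model and the noise parameters q and r below are our assumptions, not from the article), a scalar Kalman filter smoothing noisy measurements of a constant shows the predict/update cycle:

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter sketch: random-walk state with
    process-noise variance q, measurement-noise variance r."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: uncertainty grows by process noise
        k = p / (p + r)           # Kalman gain: weight given to the new measurement
        x = x + k * (z - x)       # update: pull estimate toward measurement
        p = (1 - k) * p           # update: uncertainty shrinks after measuring
        estimates.append(x)
    return estimates

rng = np.random.default_rng(0)
truth = 1.0
zs = truth + rng.normal(0.0, 0.5, size=200)   # noisy measurements of a constant
est = kalman_1d(zs)
print(len(est), round(est[-1], 3))
```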

#### Flashcard 1791689886988

Tags
#singular-value-decomposition
Question
singular-value decomposition (SVD) generalises [...] of a positive semidefinite normal matrix to any matrix
the eigendecomposition

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In linear algebra, the singular-value decomposition (SVD) generalises the eigendecomposition of a positive semidefinite normal matrix (for example, a symmetric matrix with positive eigenvalues) to any matrix via an extension of the polar deco

#### Original toplevel document

Singular-value decomposition - Wikipedia
nto three simple transformations: an initial rotation V ∗ , a scaling Σ along the coordinate axes, and a final rotation U. The lengths σ 1 and σ 2 of the semi-axes of the ellipse are the singular values of M, namely Σ 1,1 and Σ 2,2 . <span>In linear algebra, the singular-value decomposition (SVD) is a factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a positive semidefinite normal matrix (for example, a symmetric matrix with positive eigenvalues) to any m × n {\displaystyle m\times n} matrix via an extension of the polar decomposition. It has many useful applications in signal processing and statistics. Formally, the singular-value decomposition of an m × n {\d
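
A quick numerical check of this relationship, assuming NumPy: for a symmetric positive definite matrix, the singular values of the SVD coincide with the eigenvalues of the eigendecomposition.

```python
import numpy as np

# SVD factors any matrix as M = U Σ V*; for a symmetric positive
# semidefinite matrix it coincides with the eigendecomposition.
M = np.array([[3.0, 1.0], [1.0, 3.0]])    # symmetric, positive definite
U, s, Vt = np.linalg.svd(M)
w = np.linalg.eigvalsh(M)                 # eigenvalues, ascending order
print(np.allclose(U @ np.diag(s) @ Vt, M))   # reconstruction holds
print(np.allclose(np.sort(s), w))            # singular values = eigenvalues here
```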

#### Annotation 1791694343436

 #matrix-decomposition a complex square matrix A is normal if A∗A = AA∗, where A∗ is the conjugate transpose of A

Normal matrix - Wikipedia
Normal matrix - Wikipedia Normal matrix From Wikipedia, the free encyclopedia Jump to: navigation, search In mathematics, a complex square matrix A is normal if A ∗ A = A A ∗ {\displaystyle A^{*}A=AA^{*}} where A ∗ is the conjugate transpose of A. That is, a matrix is normal if it commutes with its conjugate transpose. A real square matrix A satisfies A ∗ = A T , and is therefo
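
The defining check is a one-liner in NumPy; a minimal sketch (the helper name is ours):

```python
import numpy as np

def is_normal(A, tol=1e-10):
    """A is normal iff it commutes with its conjugate transpose: A*A == AA*."""
    Ah = A.conj().T
    return np.allclose(Ah @ A, A @ Ah, atol=tol)

# A real symmetric matrix is normal; a generic upper-triangular one is not.
S = np.array([[2.0, 1.0], [1.0, 3.0]])
T = np.array([[1.0, 1.0], [0.0, 1.0]])
print(is_normal(S))  # True
print(is_normal(T))  # False
```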

#### Annotation 1791695916300

 #matrix-decomposition #similar-matrix A matrix is normal if and only if it is unitarily similar to a diagonal matrix, and therefore any matrix A satisfying the equation A∗A = AA∗ is diagonalizable.

Normal matrix - Wikipedia
displaystyle A^{*}A=AA^{*}} where A ∗ is the conjugate transpose of A. That is, a matrix is normal if it commutes with its conjugate transpose. A real square matrix A satisfies A ∗ = A T , and is therefore normal if A T A = AA T . <span>A matrix is normal if and only if it is unitarily similar to a diagonal matrix, and therefore any matrix A satisfying the equation A ∗ A = AA ∗ is diagonalizable. The concept of normal matrices can be extended to normal operators on infinite dimensional normed spaces and to normal elements in C*-algebras. As in the matrix case, normality means
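
This can be illustrated numerically: for a normal matrix with distinct eigenvalues, the normalized eigenvector matrix returned by `np.linalg.eig` is unitary and diagonalizes it (a sketch, assuming NumPy):

```python
import numpy as np

# A real skew-symmetric matrix is normal; its eigenvectors (for distinct
# eigenvalues) are orthogonal, so the eigenvector matrix U is unitary
# and A = U Λ U*, i.e. A is unitarily similar to a diagonal matrix.
A = np.array([[0.0, -1.0], [1.0, 0.0]])
w, U = np.linalg.eig(A)                              # eigenvalues ±i
print(np.allclose(U.conj().T @ U, np.eye(2)))        # U is unitary
print(np.allclose(U @ np.diag(w) @ U.conj().T, A))   # A = U Λ U*
```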

#### Flashcard 1791698275596

Tags
#matrix-decomposition
Question

a complex square matrix A is normal if [...]

A∗A = AA∗

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
a complex square matrix A is normal if A∗A = AA∗

#### Original toplevel document

Normal matrix - Wikipedia
Normal matrix - Wikipedia Normal matrix From Wikipedia, the free encyclopedia Jump to: navigation, search In mathematics, a complex square matrix A is normal if A ∗ A = A A ∗ {\displaystyle A^{*}A=AA^{*}} where A ∗ is the conjugate transpose of A. That is, a matrix is normal if it commutes with its conjugate transpose. A real square matrix A satisfies A ∗ = A T , and is therefo

#### Flashcard 1791700634892

Tags
#matrix-decomposition
Question

a complex square matrix A is [...] if A∗A = AA∗

normal

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
a complex square matrix A is normal if A∗A = AA∗

#### Original toplevel document

Normal matrix - Wikipedia
Normal matrix - Wikipedia Normal matrix From Wikipedia, the free encyclopedia Jump to: navigation, search In mathematics, a complex square matrix A is normal if A ∗ A = A A ∗ {\displaystyle A^{*}A=AA^{*}} where A ∗ is the conjugate transpose of A. That is, a matrix is normal if it commutes with its conjugate transpose. A real square matrix A satisfies A ∗ = A T , and is therefo

#### Annotation 1791704829196

 #matrix a complex square matrix U is unitary if its conjugate transpose U∗ is also its inverse

Unitary matrix - Wikipedia
over the real number field, see orthogonal matrix. For the restriction on the allowed evolution of quantum systems that ensures the sum of probabilities of all possible outcomes of any event always equals 1, see unitarity. In mathematics, <span>a complex square matrix U is unitary if its conjugate transpose U ∗ is also its inverse—that is, if U ∗ U = U U ∗
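
A minimal NumPy check of the definition (the helper name is ours); any real rotation matrix is orthogonal, hence unitary:

```python
import numpy as np

def is_unitary(U, tol=1e-10):
    """U is unitary iff its conjugate transpose is its inverse: U*U == I."""
    n = U.shape[0]
    return np.allclose(U.conj().T @ U, np.eye(n), atol=tol)

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # real rotation matrix
print(is_unitary(R))                                    # True
print(is_unitary(np.array([[1.0, 1.0], [0.0, 1.0]])))   # False
```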

#### Flashcard 1791706402060

Tags
#matrix
Question
a complex square matrix U is unitary if [...description...]
its conjugate transpose U∗ is also its inverse

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
a complex square matrix U is unitary if its conjugate transpose U ∗ is also its inverse

#### Original toplevel document

Unitary matrix - Wikipedia
over the real number field, see orthogonal matrix. For the restriction on the allowed evolution of quantum systems that ensures the sum of probabilities of all possible outcomes of any event always equals 1, see unitarity. In mathematics, <span>a complex square matrix U is unitary if its conjugate transpose U ∗ is also its inverse—that is, if U ∗ U = U U ∗

#### Flashcard 1791707974924

Tags
#matrix
Question
a complex square matrix U is [...] if its conjugate transpose U∗ is also its inverse
unitary

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
a complex square matrix U is unitary if its conjugate transpose U ∗ is also its inverse

#### Original toplevel document

Unitary matrix - Wikipedia
over the real number field, see orthogonal matrix. For the restriction on the allowed evolution of quantum systems that ensures the sum of probabilities of all possible outcomes of any event always equals 1, see unitarity. In mathematics, <span>a complex square matrix U is unitary if its conjugate transpose U ∗ is also its inverse—that is, if U ∗ U = U U ∗

#### Annotation 1791716363532

 #similar-matrix two n-by-n matrices A and B are similar if B = P⁻¹AP for some invertible n-by-n matrix P. Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix.[1][2]

Matrix similarity - Wikipedia
From Wikipedia, the free encyclopedia Jump to: navigation, search For other uses, see Similarity (geometry) and Similarity transformation (disambiguation). Not to be confused with similarity matrix. <span>In linear algebra, two n-by-n matrices A and B are called similar if B = P − 1 A P {\displaystyle B=P^{-1}AP} for some invertible n-by-n matrix P. Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix. [1] [2] A transformation A ↦ P −1 AP is called a similarity transformation or conjugation of the matrix A. In the general linear group, similarity is therefore the same as conjugacy, and simi
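
A small numerical illustration, assuming NumPy (the matrices A and P below are arbitrary choices): similar matrices share their eigenvalues, since they represent the same operator in different bases.

```python
import numpy as np

# B = P^{-1} A P is similar to A; P plays the role of the change-of-basis matrix.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
P = np.array([[1.0, 1.0], [1.0, 2.0]])   # any invertible matrix works
B = np.linalg.inv(P) @ A @ P

# Similar matrices have the same eigenvalues (here 2 and 3).
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(B))))   # True
```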

#### Annotation 1791717936396

 #similar-matrix #spectral-theorem In the definition of similarity, if the matrix P can be chosen to be a permutation matrix then A and B are permutation-similar; if P can be chosen to be a unitary matrix then A and B are unitarily equivalent. The spectral theorem says that every normal matrix is unitarily equivalent to some diagonal matrix. Specht's theorem states that two matrices are unitarily equivalent if and only if they satisfy certain trace equalities.

Matrix similarity - Wikipedia
ices over L. This is so because the rational canonical form over K is also the rational canonical form over L. This means that one may use Jordan forms that only exist over a larger field to determine whether the given matrices are similar. <span>In the definition of similarity, if the matrix P can be chosen to be a permutation matrix then A and B are permutation-similar; if P can be chosen to be a unitary matrix then A and B are unitarily equivalent. The spectral theorem says that every normal matrix is unitarily equivalent to some diagonal matrix. Specht's theorem states that two matrices are unitarily equivalent if and only if they satisfy certain trace equalities. See also[edit source] Canonical forms Matrix congruence Matrix equivalence Notes[edit source] Jump up ^ Beauregard & Fraleigh (1973, pp. 240–243) Jump up ^ Bronson (1970

#### Flashcard 1791721082124

Tags
#similar-matrix
Question

two n-by-n matrices A and B are similar if [...] for some invertible n-by-n matrix P.

B = P⁻¹AP

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
two n-by-n matrices A and B are similar if B = P⁻¹AP for some invertible n-by-n matrix P. Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix. [1] [2]

#### Original toplevel document

Matrix similarity - Wikipedia
From Wikipedia, the free encyclopedia Jump to: navigation, search For other uses, see Similarity (geometry) and Similarity transformation (disambiguation). Not to be confused with similarity matrix. <span>In linear algebra, two n-by-n matrices A and B are called similar if B = P − 1 A P {\displaystyle B=P^{-1}AP} for some invertible n-by-n matrix P. Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix. [1] [2] A transformation A ↦ P −1 AP is called a similarity transformation or conjugation of the matrix A. In the general linear group, similarity is therefore the same as conjugacy, and simi

#### Flashcard 1791723441420

Tags
#similar-matrix
Question

two n-by-n matrices A and B are [...] if B = P⁻¹AP for some invertible n-by-n matrix P.

similar

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
two n-by-n matrices A and B are similar if B = P⁻¹AP for some invertible n-by-n matrix P. Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix. [1] [2]

#### Original toplevel document

Matrix similarity - Wikipedia
From Wikipedia, the free encyclopedia Jump to: navigation, search For other uses, see Similarity (geometry) and Similarity transformation (disambiguation). Not to be confused with similarity matrix. <span>In linear algebra, two n-by-n matrices A and B are called similar if B = P − 1 A P {\displaystyle B=P^{-1}AP} for some invertible n-by-n matrix P. Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix. [1] [2] A transformation A ↦ P −1 AP is called a similarity transformation or conjugation of the matrix A. In the general linear group, similarity is therefore the same as conjugacy, and simi

#### Flashcard 1791725276428

Tags
#similar-matrix
Question

Similar matrices represent the same linear operator under [...]

(possibly) different bases

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
two n-by-n matrices A and B are similar if B = P⁻¹AP for some invertible n-by-n matrix P. Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix. [1] [2]

#### Original toplevel document

Matrix similarity - Wikipedia
From Wikipedia, the free encyclopedia Jump to: navigation, search For other uses, see Similarity (geometry) and Similarity transformation (disambiguation). Not to be confused with similarity matrix. <span>In linear algebra, two n-by-n matrices A and B are called similar if B = P − 1 A P {\displaystyle B=P^{-1}AP} for some invertible n-by-n matrix P. Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix. [1] [2] A transformation A ↦ P −1 AP is called a similarity transformation or conjugation of the matrix A. In the general linear group, similarity is therefore the same as conjugacy, and simi

#### Flashcard 1791728422156

Tags
#similar-matrix
Question

Similar matrices represent the same [...] under (possibly) different bases

linear operator

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
two n-by-n matrices A and B are similar if B = P⁻¹AP for some invertible n-by-n matrix P. Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix. [1] [2]

#### Original toplevel document

Matrix similarity - Wikipedia
From Wikipedia, the free encyclopedia Jump to: navigation, search For other uses, see Similarity (geometry) and Similarity transformation (disambiguation). Not to be confused with similarity matrix. <span>In linear algebra, two n-by-n matrices A and B are called similar if B = P − 1 A P {\displaystyle B=P^{-1}AP} for some invertible n-by-n matrix P. Similar matrices represent the same linear operator under two (possibly) different bases, with P being the change of basis matrix. [1] [2] A transformation A ↦ P −1 AP is called a similarity transformation or conjugation of the matrix A. In the general linear group, similarity is therefore the same as conjugacy, and simi

#### Annotation 1791737072908

 #similar-matrix #spectral-theorem The spectral theorem says that every normal matrix is unitarily equivalent to some diagonal matrix.

#### Parent (intermediate) annotation

Open it
In the definition of similarity, if the matrix P can be chosen to be a permutation matrix then A and B are permutation-similar; if P can be chosen to be a unitary matrix then A and B are unitarily equivalent. The spectral theorem says that every normal matrix is unitarily equivalent to some diagonal matrix. Specht's theorem states that two matrices are unitarily equivalent if and only if they satisfy certain trace equalities.

#### Original toplevel document

Matrix similarity - Wikipedia
ices over L. This is so because the rational canonical form over K is also the rational canonical form over L. This means that one may use Jordan forms that only exist over a larger field to determine whether the given matrices are similar. <span>In the definition of similarity, if the matrix P can be chosen to be a permutation matrix then A and B are permutation-similar; if P can be chosen to be a unitary matrix then A and B are unitarily equivalent. The spectral theorem says that every normal matrix is unitarily equivalent to some diagonal matrix. Specht's theorem states that two matrices are unitarily equivalent if and only if they satisfy certain trace equalities. See also[edit source] Canonical forms Matrix congruence Matrix equivalence Notes[edit source] Jump up ^ Beauregard & Fraleigh (1973, pp. 240–243) Jump up ^ Bronson (1970

#### Flashcard 1791738907916

Tags
#similar-matrix #spectral-theorem
Question
The [...] says that every normal matrix is unitarily equivalent to some diagonal matrix.