Edited, memorised or added to reading queue

on 09-Jan-2018 (Tue)


#matrix-inversion

For \(A \in \mathrm{M}(m,n;K)\), a pseudoinverse of \(A\) is defined as a matrix \(A^+ \in \mathrm{M}(n,m;K)\) satisfying all of the following four criteria:

  1. \(AA^+A = A\) (\(AA^+\) need not be the general identity matrix, but it maps all column vectors of A to themselves);
  2. \(A^+AA^+ = A^+\) (\(A^+\) is a weak inverse for the multiplicative semigroup);
  3. \((AA^+)^* = AA^+\) (\(AA^+\) is Hermitian); and
  4. \((A^+A)^* = A^+A\) (\(A^+A\) is also Hermitian).
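These four criteria are easy to check numerically. Below is a minimal NumPy sketch (the matrix A is an arbitrary example chosen here, and the conjugate transpose plays the role of the * operation):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])       # an arbitrary 3x2 example matrix
    Ap = np.linalg.pinv(A)           # its Moore-Penrose pseudoinverse

    # The four Penrose conditions:
    assert np.allclose(A @ Ap @ A, A)                # 1. A A+ A = A
    assert np.allclose(Ap @ A @ Ap, Ap)              # 2. A+ A A+ = A+
    assert np.allclose((A @ Ap).conj().T, A @ Ap)    # 3. A A+ is Hermitian
    assert np.allclose((Ap @ A).conj().T, Ap @ A)    # 4. A+ A is Hermitian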

Moore-Penrose Pseudo-inverse

\(A^+\) exists for any matrix \(A\), but when the latter has full rank, \(A^+\) can be expressed as a simple algebraic formula.

In particular, when \(A\) has linearly independent columns (and thus matrix \(A^*A\) is invertible), \(A^+\) can be computed as:

\(A^+ = (A^*A)^{-1}A^*\)

...
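When the columns of A are linearly independent, this formula can be evaluated directly and cross-checked against the general pseudoinverse; a small NumPy sketch (the random test matrix is an illustrative assumption, not from the source):

    import numpy as np

    A = np.random.randn(5, 3)    # tall matrix: columns independent almost surely
    A_plus = np.linalg.inv(A.conj().T @ A) @ A.conj().T   # A+ = (A* A)^{-1} A*

    assert np.allclose(A_plus, np.linalg.pinv(A))  # agrees with the pseudoinverse
    assert np.allclose(A_plus @ A, np.eye(3))      # left inverse: A+ A = I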

Moore–Penrose inverse - Wikipedia




Pseudo-datapoint-based approximation methods for DGPs trade model complexity for a lower computational complexity of \(O(NLM^2)\), where N is the number of datapoints, L is the number of layers, and M is the number of pseudo datapoints. This complexity scales quadratically in M, whereas the dependence on the number of layers L is only linear. Therefore, it can be cheaper to increase the representation power of the model by adding extra layers rather than by adding more pseudo datapoints.
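To make the trade-off concrete, here is a toy cost comparison based only on the \(O(NLM^2)\) operation count (the numbers are illustrative, not from the paper):

    def dgp_cost(N, L, M):
        # O(N * L * M^2) cost model for pseudo-datapoint DGP approximations.
        return N * L * M ** 2

    base = dgp_cost(N=10000, L=2, M=100)
    print(dgp_cost(N=10000, L=4, M=100) / base)   # doubling layers L: 2x cost
    print(dgp_cost(N=10000, L=2, M=200) / base)   # doubling pseudo points M: 4x cost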




#singular-value-decomposition
SVD as change of coordinates

The geometric content of the SVD theorem can thus be summarized as follows: for every linear map \(T : K^n \to K^m\) one can find orthonormal bases of \(K^n\) and \(K^m\) such that T maps the i-th basis vector of \(K^n\) to a non-negative multiple of the i-th basis vector of \(K^m\), and sends the left-over basis vectors to zero. With respect to these bases, the map T is therefore represented by a diagonal matrix with non-negative real diagonal entries.
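A small NumPy sketch of this statement (the random matrix is an arbitrary example): the columns of V form the orthonormal basis of \(K^n\), the columns of U the orthonormal basis of \(K^m\), the i-th V-column is sent to σ_i times the i-th U-column, and the left-over columns are sent to zero.

    import numpy as np

    A = np.random.randn(3, 5)       # a linear map T : R^5 -> R^3
    U, s, Vh = np.linalg.svd(A)     # full SVD: A = U @ Sigma @ Vh
    V = Vh.conj().T                 # columns of V: orthonormal basis of R^5

    for i in range(5):
        if i < len(s):              # first min(m, n) basis vectors
            assert np.allclose(A @ V[:, i], s[i] * U[:, i])
        else:                       # left-over basis vectors are sent to zero
            assert np.allclose(A @ V[:, i], 0)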

Singular-value decomposition - Wikipedia




#matrix-inversion
A computationally simple and accurate way to compute the pseudoinverse is by using the singular value decomposition.[1][9][15] If \(A = U\Sigma V^*\) is the singular value decomposition of A, then \(A^+ = V\Sigma^+ U^*\). For a rectangular diagonal matrix such as \(\Sigma\), we get the pseudoinverse by taking the reciprocal of each non-zero element on the diagonal, leaving the zeros in place, and then transposing the matrix. In numerical computation, only elements larger than some small tolerance are taken to be nonzero, and the others are replaced by zeros. For example, in the MATLAB, GNU Octave, or NumPy function pinv, the tolerance is taken to be t = ε⋅max(m,n)⋅max(Σ), where ε is the machine epsilon.
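That recipe translates directly into a few lines of NumPy; a sketch using the quoted tolerance rule (np.linalg.pinv is the production routine, used here only as a cross-check):

    import numpy as np

    def pinv_svd(A):
        # Pseudoinverse via SVD with tolerance t = eps * max(m, n) * max(sigma).
        U, s, Vh = np.linalg.svd(A, full_matrices=False)
        tol = np.finfo(A.dtype).eps * max(A.shape) * s.max()
        s_plus = np.zeros_like(s)
        s_plus[s > tol] = 1.0 / s[s > tol]     # reciprocal of non-negligible sigmas
        return Vh.conj().T @ (s_plus[:, None] * U.conj().T)   # V Sigma+ U*

    A = np.random.randn(5, 3) @ np.random.randn(3, 4)   # rank-deficient 5x4 matrix
    assert np.allclose(pinv_svd(A), np.linalg.pinv(A))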

Moore–Penrose inverse - Wikipedia




#matrix-inversion

Moore-Penrose Pseudo-inverse

\(A^+\) exists for any matrix \(A\), but when the latter has full rank, \(A^+\) can be expressed as a simple algebraic formula.

In particular, when \(A\) has linearly independent columns (and thus matrix \(A^*A\) is invertible), \(A^+\) can be computed as \(A^+ = (A^*A)^{-1}A^*\).

This particular pseudoinverse constitutes a left inverse, since, in this case, \(A^+A = I\).




Original toplevel document

Moore–Penrose inverse - Wikipedia




Flashcard 1731444477196

Tags
#matrix-inversion
Question

when A has [...] the Moore-Penrose inverse is a left inverse

Answer
linearly independent columns



Original toplevel document

Moore–Penrose inverse - Wikipedia







Flashcard 1731448147212

Tags
#matrix-inversion
Question

The left Moore-Penrose Pseudo-inverse is [...]

Answer

\(A^+ = (A^*A)^{-1}A^*\)

This is the one for linear models




Original toplevel document

Moore–Penrose inverse - Wikipedia







Flashcard 1731451292940

Tags
#singular-value-decomposition
Question
geometrically SVD finds [...] for every linear map \(T : K^n \to K^m\)
Answer
orthonormal bases of \(K^n\) and \(K^m\)



Original toplevel document

Singular-value decomposition - Wikipedia







Flashcard 1731453127948

Tags
#singular-value-decomposition
Question
Geometrically SVD finds orthonormal bases of \(K^n\) and \(K^m\) for every linear map \(T : K^n \to K^m\) such that T maps the i-th basis vector of \(K^n\) to a non-negative multiple of the i-th basis vector of \(K^m\), and sends the left-over basis vectors to [...].
Answer
zero



Original toplevel document

Singular-value decomposition - Wikipedia







Flashcard 1731454700812

Tags
#singular-value-decomposition
Question
With SVD geometrically every linear map \(T : K^n \to K^m\) is represented by a diagonal matrix with [...] entries.
Answer
non-negative real diagonal



Original toplevel document

Singular-value decomposition - Wikipedia







#matrix
In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P such that \(P^{-1}AP\) is a diagonal matrix.
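A quick NumPy illustration of the definition (the matrix is an arbitrary diagonalizable example; its eigenvector matrix serves as P):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])       # eigenvalues 5 and 2, so A is diagonalizable
    eigvals, P = np.linalg.eig(A)    # columns of P are eigenvectors of A
    D = np.linalg.inv(P) @ A @ P     # the similarity transform P^{-1} A P

    assert np.allclose(D, np.diag(eigvals))   # D is diagonal with the eigenvalues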

Diagonalizable matrix - Wikipedia




Flashcard 1731460205836

Tags
#matrix
Question
a square matrix A is called diagonalizable if it is similar to [...]
Answer
a diagonal matrix


Original toplevel document

Diagonalizable matrix - Wikipedia







#english
Novelist Adam Langer skewers the publishing trade — and some of its recent trends — while digging toward something deeper.

Book review: 'The Thieves of Manhattan' by Adam Langer - latimes




Thus does Ian, a committed realist who has yet to write or, for that matter, live an adventure of his own, become embroiled in a whole series of plots.

Book review: 'The Thieves of Manhattan' by Adam Langer - latimes




#logic
According to A History of Formal Logic (1961) by the distinguished J M Bocheński, the golden periods for logic were the ancient Greek period, the medieval scholastic period, and the mathematical period of the 19th and 20th centuries.

The rise and fall and rise of logic | Aeon Essays




#reinforcement-learning

Mountain Car, a standard testing domain in Reinforcement Learning, is a problem in which an under-powered car must drive up a steep hill. Since gravity is stronger than the car's engine, even at full throttle, the car cannot simply accelerate up the steep slope. The car is situated in a valley and must learn to leverage potential energy by driving up the opposite hill before the car is able to make it to the goal at the top of the rightmost hill. The domain has been used as a test bed in various Reinforcement Learning papers.
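A minimal interaction loop with this domain, as a sketch assuming the OpenAI Gym package and its MountainCar-v0 environment (classic pre-0.26 Gym API); a random policy illustrates why learning is needed, since it essentially never reaches the goal:

    import gym

    env = gym.make("MountainCar-v0")
    obs = env.reset()                      # observation = (position, velocity)
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()           # random policy: left/none/right
        obs, reward, done, info = env.step(action)   # reward is -1 per time step
        total_reward += reward
    print(obs, total_reward)               # episode ends at the goal or after 200 steps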


Mountain car problem - Wikipedia




#spanish
Verbs: The Conditional Simple

Usage:

  1. To ask politely: ¿Podrías pasarme ese plato, por favor? (Could you pass me that plate, please?).
  2. To express wishes: ¡Me encantaría ir de viaje a Australia! (I would love to go on a trip to Australia!).
  3. To suggest: Creo que deberías ir al médico a verte ese dolor de espalda (I think you should go to see the doctor for a check-up on that back of yours).
  4. To express a hypothesis or probability: Podemos posponer la salida unas horas, pero llegaríamos bastante tarde (We can postpone our trip for some hours, but we would arrive quite late).
  5. To express uncertainty in the past: No sabía si estarías en la oficina, por eso no te llamé (I didn’t know whether you’d be at the office. That’s why I didn’t call you).
  6. To refer to the future from a moment in the past: María me dijo que estaría en casa para las 11, pero no ha aparecido aún (María told me she’d be at home by 11, but she hasn’t turned up yet).




#spanish
Verbs: The Conditional Simple

Usage:

  1. To ask politely: ¿Podrías pasarme ese plato, por favor? (Could you pass me that plate, please?).





Flashcard 1731479866636

Tags
#spanish
Question
use conditional simple to [...]:

¿Podrías pasarme ese plato, por favor?
Answer
ask politely










#spanish
  1. To refer to the future from a moment in the past: María me dijo que estaría en casa para las 11, pero no ha aparecido aún (María told me she’d be at home by 11, but she hasn’t turned up yet).





Flashcard 1731484323084

Tags
#spanish
Question
use conditional simple to [...]:
María me dijo que estaría en casa para las 11, pero no ha aparecido aún.
Answer
refer to the future from the past










Flashcard 1731486158092

Tags
#spanish
Question
como [...] en el acta/informe
as stated or recorded in the minutes/report
Answer
consta







Flashcard 1731488255244

Tags
#spanish
Question
▸ ¿te falta mucho? — no, ya casi [...]
do you have much to do? — no, I've nearly finished
Answer
acabo







Flashcard 1731490090252

Tags
#spanish
Question
el tren [...] su salida a las 10.50
the train will depart at 10:50
Answer
efectuará







#reinforcement-learning
Mountain Car, a standard testing domain in Reinforcement Learning, is a problem in which an under-powered car must drive up a steep hill.



Original toplevel document

Mountain car problem - Wikipedia




Flashcard 1731506867468

Tags
#reinforcement-learning
Question
Mountain Car, a standard testing domain in Reinforcement Learning, is a problem in which an [...] must drive up a steep hill.
Answer
under-powered car



Original toplevel document

Mountain car problem - Wikipedia







Flashcard 1731508440332

Tags
#logic
Question
According to A History of Formal Logic (1961) by the distinguished J M Bocheński, the golden periods for logic were [...], the medieval scholastic period, and the mathematical period of the 19th and 20th centuries.
Answer
the ancient Greek period



Original toplevel document

The rise and fall and rise of logic | Aeon Essays







Flashcard 1731510013196

Tags
#logic
Question
According to A History of Formal Logic (1961) by the distinguished J M Bocheński, the golden periods for logic were the ancient Greek period, [...], and the mathematical period of the 19th and 20th centuries.
Answer
the medieval scholastic period



Original toplevel document

The rise and fall and rise of logic | Aeon Essays







Flashcard 1731511586060

Tags
#logic
Question
According to A History of Formal Logic (1961) by the distinguished J M Bocheński, the golden periods for logic were the ancient Greek period, the medieval scholastic period, and [...] of the 19th and 20th centuries.
Answer
the mathematical period



Original toplevel document

The rise and fall and rise of logic | Aeon Essays







Flashcard 1731513158924

Question
Thus does Ian, a committed realist who has yet to write or, for that matter, live an adventure of his own, become [...] in a whole series of plots.

involve (someone) deeply in an argument, conflict, or difficult situation
Answer
embroiled



Original toplevel document

Book review: 'The Thieves of Manhattan' by Adam Langer - latimes







Flashcard 1731515518220

Question
Novelist Adam Langer [...] the publishing trade — and some of its recent trends — while digging toward something deeper.

fasten together or pierce with
informal subject to sharp criticism or critical analysis
Answer
skewers



Original toplevel document

Book review: 'The Thieves of Manhattan' by Adam Langer - latimes







#matrix-inversion
A computationally simple and accurate way to compute the pseudoinverse is by using the singular value decomposition.[1][9][15] If \(A = U\Sigma V^*\) is the singular value decomposition of A, then \(A^+ = V\Sigma^+ U^*\).



Original toplevel document

Moore–Penrose inverse - Wikipedia




Flashcard 1731520498956

Tags
#matrix-inversion
Question
If \(A = U\Sigma V^*\) is the singular value decomposition of A, then the pseudoinverse of A is [...]
Answer
\(A^+ = V\Sigma^+ U^*\)



Original toplevel document

Moore–Penrose inverse - Wikipedia







Pseudo-datapoint-based approximation methods for DGPs trade model complexity for a lower computational complexity of \(O(NLM^2)\), where N is the number of datapoints, L is the number of layers, and M is the number of pseudo datapoints.







Flashcard 1731525217548

Tags
#deep-gaussian-process
Question
Pseudo-datapoint-based approximation methods for DGPs have a computational complexity of [...]
Answer
\(O(NLM^2)\)

where N is the number of datapoints, L is the number of layers, and M is the number of pseudo datapoints.










Flashcard 1731526790412

Tags
#deep-gaussian-process
Question
DGPs can perform [...] or dimensionality compression or expansion
Answer
input warping










Flashcard 1731528363276

Tags
#deep-gaussian-process
Question
DGPs can perform input warping or [...]
Answer
dimensionality compression or expansion










Flashcard 1731529936140

Tags
#deep-gaussian-process
Question
DGPs can automatically learn to [...] that works well for the data at hand.
Answer
construct a kernel










Flashcard 1731531509004

Tags
#deep-gaussian-process
Question
The new method uses an [...] procedure and a novel and efficient extension of the probabilistic backpropagation algorithm for learning.
Answer
approximate Expectation Propagation










Flashcard 1731533081868

Question
Deep Gaussian processes (DGPs) are [...] of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.
Answer
multi-layer hierarchical generalisations










Flashcard 1731535703308

Tags
#variational-inference
Question
Variational Bayesian methods are a family of techniques for approximating [...] arising in Bayesian inference and machine learning.
Answer
intractable integrals

In Bayesian inference this manifests as calculating marginal posteriors



Original toplevel document

Variational Bayesian methods - Wikipedia







Flashcard 1731550383372

Question
Artículo 114.- Ningún Senador o Representante, desde el día de su elección hasta el de su cese, podrá ser acusado criminalmente, ni aun por delitos comunes que no sean de los detallados en el artículo 93, sino ante su respectiva Cámara, la cual, por dos tercios de votos del total de sus componentes, resolverá si hay lugar a la formación de causa, y, en caso afirmativo, lo declarará suspendido en sus funciones y quedará a disposición del Tribunal competente.
Answer
[default - edit me]

