Edited, memorised or added to reading queue

on 07-Mar-2017 (Tue)


Flashcard 1429070548236

Tags
#sister-miriam-joseph #trivium
Question
Valuable goods are those which are not only desired for their own sake but which [...] For instance, [...] are valuable goods.
Answer
increase the intrinsic worth of their possessor.

knowledge, virtue, and health.








Flashcard 1442931412236

Tags
#estructura-interna-de-las-palabras #formantes-morfológicos #gramatica-española #la #morfología #tulio
Question

To establish the internal structure of words, morphology is concerned with:

a. identifying [...]

b. determining the possible variations these may present;

c. describing the processes involved;

d. recognizing how words are organized.

Answer
the morphological formatives;


Original toplevel document

The internal structure of the word
1. The morphological formatives. A word has internal structure when it contains more than one morphological formative. A morphological formative, or morpheme, is a minimal unit consisting of a phonetic form and a meaning. Compare the following words: gota, gotas, gotita, gotera, cuentagotas. Gota is the only one of these words that consists of a single formative. It therefore lacks internal structure: it is a simple word. All the other words have internal structure. Formatives that can appear as independent words are free forms. The others, those that must attach to other morphemes, are bound forms. Cuentagotas contains two formatives, each of which can appear as an independent word: it is a compound word. Gotas, gotita and gotera also contain two formatives, but one of them (-s, -ita, -era) can never be an independent word. Such bound forms are called affixes. Some affixes follow the base (gota), as in our examples: these are the suffixes. Other affixes precede it: in-útil, des-contento, a-político: these are the prefixes. Words that contain an affix are called complex words. Within the inventory of recognized formatives, we distinguish two classes:
a. Some are lexical formatives: they have lexical meaning, defined in the dictionary: gota, cuenta. They group into open classes. They belong to a particular word class: nouns (gota), adjectives (útil), adverbs (ayer), verbs (cuenta). They can be:
- simple words (gota, útil, ayer);
- the base to which affixes attach in complex words (got-, politic-);
- part of a compound word (cuenta, gotas).
b. Others are grammatical formatives: they have grammatical, not lexical, meaning. They group into closed classes. They can be:
- independent words: prepositions (a, de, por), conjunctions (que, si);
- affixes in derived words (-s, -ero, in-, des-);
- less frequently, formatives of compounds (aun-que, por-que, si-no).
Among the non-simple words considered so far, each contained only two formatives. In others the same type of formative recurs:
- suffixes: region-al-izar, util-iza-ble;
- prefixes: des-com-poner, ex-pro-soviético;
or formatives of different types may combine with one another:
- prefix and suffix: des-leal-tad, em-pobr-ecer;
- compound word and suffix: rionegr-ino, narcotrafic-ante.
In the combination of prefixation and suffixation, two cases can be distinguished, illustrated by our examples. In deslealtad, applying each of the affixes yields a well-formed word: applying only the prefix gives the adjective desleal; applying only the suffix gives the noun lealtad. In empobrecer, by contrast, applying only one affix yields no existing word: *empobre, *pobrecer. Prefix and suffix apply simultaneously, constituting a single, discontinuous, morphological formative added on both sides of the lexical base. This second case is called parasynthesis. To establish the internal structure of words, morphology is concerned with: a. identifying the morphological formatives; b. determining the possible variations these may present; c. describing the processes involved; d. recognizing how words are organized.
2. Identification of the morphological formatives. Let us now compare the following words: sol, sol-ar, sol-azo, quita-sol, gira-sol, solter-o, solaz. In the







#deeplearning #neuralnetworks
The covariance matrix of a random vector x ∈ R^n is an n × n matrix, such that Cov(x)_{i,j} = Cov(x_i, x_j). (3.14) The diagonal elements of the covariance give the variance: Cov(x_i, x_i) = Var(x_i)
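
A minimal sketch in Python (assuming NumPy; the sample size and dimension are illustrative) of Eq. 3.14: estimate the covariance matrix from samples and check that its diagonal recovers the per-component variances.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 3))      # 10,000 samples of a random vector x in R^3
C = np.cov(X, rowvar=False)          # n x n matrix with C[i, j] = Cov(x_i, x_j)

# The diagonal elements give the variances: Cov(x_i, x_i) = Var(x_i).
assert np.allclose(np.diag(C), X.var(axis=0, ddof=1))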




#deeplearning #neuralnetworks
The Bernoulli distribution is a distribution over a single binary random variable. It is controlled by a single parameter φ ∈ [0, 1], which gives the probability of the random variable being equal to 1. It has the following properties:
P(x = 1) = φ
P(x = 0) = 1 − φ
P(x = x) = φ^x (1 − φ)^(1−x)
E_x[x] = φ
Var(x) = φ(1 − φ)
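
A minimal sketch in Python (assuming NumPy; φ = 0.3 is an arbitrary choice) checking the closed-form mean and variance against simulated Bernoulli draws.

import numpy as np

phi = 0.3
rng = np.random.default_rng(0)
x = (rng.random(100000) < phi).astype(float)   # Bernoulli(phi) samples

print(x.mean())   # ~ 0.3  = phi               (E[x] = phi)
print(x.var())    # ~ 0.21 = phi * (1 - phi)   (Var(x) = phi(1 - phi))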




#deeplearning #neuralnetworks
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function: p(x) = δ(x − µ). The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1.
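
A minimal numerical sketch in Python (assuming NumPy; the width 1e-3 and µ = 2 are arbitrary): the Dirac delta can be pictured as the limit of a Gaussian whose width shrinks to zero, so a very narrow Gaussian centred at µ is nearly zero everywhere except at µ yet still integrates to 1.

import numpy as np

mu, width = 2.0, 1e-3
xs = np.linspace(mu - 1.0, mu + 1.0, 2000001)
pdf = np.exp(-0.5 * ((xs - mu) / width) ** 2) / (width * np.sqrt(2 * np.pi))

dx = xs[1] - xs[0]
print((pdf * dx).sum())    # ~ 1.0: all the mass clusters around mu
print(pdf[0], pdf[-1])     # ~ 0.0 away from mu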




#deeplearning #neuralnetworks
Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models. One of these functions is the logistic sigmoid: σ(x) = 1/(1 + exp(−x))
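
A minimal sketch in Python (assuming NumPy) of the logistic sigmoid, checking that its outputs lie in (0, 1) and the standard identity σ(−x) = 1 − σ(x).

import numpy as np

def sigmoid(x):
    # logistic sigmoid: 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

xs = np.linspace(-5.0, 5.0, 11)
assert np.all((sigmoid(xs) > 0) & (sigmoid(xs) < 1))   # outputs in (0, 1)
assert np.allclose(sigmoid(-xs), 1.0 - sigmoid(xs))    # sigma(-x) = 1 - sigma(x)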




#deeplearning #neuralnetworks
Another commonly encountered function is the softplus function (Dugas et al., 2001): ζ(x) = log(1 + exp(x)). The softplus function can be useful for producing the β or σ parameter of a normal distribution because its range is (0, ∞)
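
A minimal sketch in Python (assuming NumPy): np.logaddexp(0, x) is a numerically stable way to evaluate log(1 + exp(x)), and the output is positive everywhere, which is what makes softplus suitable for producing a scale parameter.

import numpy as np

def softplus(x):
    # zeta(x) = log(1 + exp(x)), computed stably as log(exp(0) + exp(x))
    return np.logaddexp(0.0, x)

xs = np.linspace(-10.0, 10.0, 21)
assert np.all(softplus(xs) > 0)    # range is (0, infinity)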




#deeplearning #neuralnetworks
Just as x can be recovered from its positive part and negative part via the identity x^+ − x^− = x, it is also possible to recover x using the same relationship between ζ(x) and ζ(−x)
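
A minimal sketch in Python (assuming NumPy) checking both recovery identities: with x^+ = max(0, x) and x^− = max(0, −x), x^+ − x^− = x, and likewise ζ(x) − ζ(−x) = x for the softplus ζ.

import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)    # zeta(x) = log(1 + exp(x))

xs = np.linspace(-4.0, 4.0, 9)
assert np.allclose(np.maximum(0, xs) - np.maximum(0, -xs), xs)   # x+ - x- = x
assert np.allclose(softplus(xs) - softplus(-xs), xs)             # zeta(x) - zeta(-x) = x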




#deeplearning #neuralnetworks
Suppose we have two random variables, x and y, such that y = g(x), where g is an invertible, continuous, differentiable transformation. One might expect that p_y(y) = p_x(g⁻¹(y)). This is actually not the case.
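
A minimal sketch in Python (assuming NumPy; the transformation y = x² on x ~ Uniform(0, 1) is an illustrative choice) of why the naive guess fails: the correct density carries a Jacobian factor, p_y(y) = p_x(g⁻¹(y)) |d g⁻¹(y)/dy| = 1/(2√y) here, while the naive p_y(y) = p_x(g⁻¹(y)) = 1 assigns the wrong probability mass.

import numpy as np

rng = np.random.default_rng(0)
y = rng.random(1000000) ** 2    # y = g(x) = x^2 with x ~ Uniform(0, 1)

print((y < 0.25).mean())        # ~ 0.5, the true P(y < 0.25), since x < 0.5 half the time
print(0.25)                     # naive density 1 integrated over [0, 0.25]: wrong
print(np.sqrt(0.25))            # corrected density 1/(2*sqrt(y)) integrated over [0, 0.25]: 0.5, matching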




#deeplearning #neuralnetworks
In order to satisfy all three of these properties, we define the self-information of an event x = x to be I(x) = −log P(x)
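
A minimal sketch in Python (assuming NumPy) of self-information in nats: the less probable the event, the larger −log P(x).

import numpy as np

def self_information(p):
    # I(x) = -log P(x), in nats (natural log)
    return -np.log(p)

print(self_information(0.5))    # ~ 0.693 nats: a fair coin flip
print(self_information(0.01))   # ~ 4.605 nats: an unlikely event is more informative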




Flashcard 1484231150860

Tags
#biochem #biology #cell
Question
Most proteins are composed of a series of protein domains, in which different regions of the polypeptide chain fold independently to form compact structures. Such multidomain proteins are believed to have originated from the [process], creating a new gene.
Answer
accidental joining of the DNA sequences that encode each domain








Flashcard 1484315036940

Tags
#deeplearning #neuralnetworks
Question
A vector x and a vector y are [...] to each other if xᵀy = 0.
Answer
orthogonal
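
A minimal sketch in Python (assuming NumPy; the vectors are illustrative) checking orthogonality via the dot product xᵀy.

import numpy as np

x = np.array([1.0, 2.0, 0.0])
y = np.array([-2.0, 1.0, 3.0])
print(x @ y)    # 0.0, so x and y are orthogonal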








Flashcard 1484402593036

Tags
#bayes #programming #r #statistics
Question
In other words, the normalizer for the beta distribution is the [equation]
Answer
beta function \(B(a,b) = \int d\theta \space \theta^{a-1}(1-\theta)^{b-1}\)
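
A minimal sketch in Python (assuming SciPy; a = 2, b = 3 is an arbitrary choice) checking that B(a, b) equals the integral of θ^(a−1)(1 − θ)^(b−1) over [0, 1].

from scipy.special import beta
from scipy.integrate import quad

a, b = 2.0, 3.0
integral, _ = quad(lambda t: t**(a - 1) * (1 - t)**(b - 1), 0.0, 1.0)
print(integral)     # ~ 0.08333 = 1/12
print(beta(a, b))   # ~ 0.08333: B(a, b) is exactly this integral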








Flashcard 1484429856012

Tags
#bayes #programming #r #statistics
Question
The standard deviation of the beta distribution is [...] . Notice that the standard deviation gets smaller when the concentration κ = a + b gets larger.
Answer
\(\sqrt{μ(1 − μ)/(a + b +1)}\)
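
A minimal sketch in Python (assuming SciPy; µ = a/(a + b) as in the source) comparing the closed form √(µ(1 − µ)/(a + b + 1)) with scipy.stats.beta.std, and showing the shrinkage as κ = a + b grows.

from math import sqrt
from scipy.stats import beta

for a, b in [(2, 2), (20, 20)]:   # same mean mu = 0.5, increasing kappa = a + b
    mu = a / (a + b)
    print(sqrt(mu * (1 - mu) / (a + b + 1)), beta(a, b).std())
# The two columns agree, and the standard deviation shrinks as kappa grows.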








Bacopa Monnieri is an extract from the Brahmi plant. According to WebMD, Bacopa is used for a wide variety of purposes, including as a supplemental Alzheimer’s treatment and way to reduce anxiety.

How to Make Your Own Smart Drugs
tion. Research has also determined that this supplement can provide significant memory improvements in Alzheimer’s and vascular dementia patients. There are also generous amounts in the adaptogenic herb complex TianChi. 2. Bacopa Monnieri Bacopa Monnieri is an extract from the Brahmi plant. According to WebMD, Bacopa is used for a wide variety of purposes, including as a supplemental Alzheimer’s treatment and way to reduce anxiety. Evidence suggests that this natural nootropic is effective at improving memory and hand-eye coordination. There have also been some studies that link Bacopa with a reduction in anxiety,




Flashcard 1486207978764

Question
Bacopa Monnieri is an extract from the Brahmi plant. According to WebMD, Bacopa is used for a wide variety of purposes, including as a supplemental Alzheimer’s treatment and way to reduce anxiety.
Answer
[default - edit me]


Original toplevel document

How to Make Your Own Smart Drugs
tion. Research has also determined that this supplement can provide significant memory improvements in Alzheimer’s and vascular dementia patients. There are also generous amounts in the adaptogenic herb complex TianChi. 2. Bacopa Monnieri Bacopa Monnieri is an extract from the Brahmi plant. According to WebMD, Bacopa is used for a wide variety of purposes, including as a supplemental Alzheimer’s treatment and way to reduce anxiety. Evidence suggests that this natural nootropic is effective at improving memory and hand-eye coordination. There have also been some studies that link Bacopa with a reduction in anxiety,







Flashcard 1486210600204

Question
For example, Piracetam was one of the first lab created compounds specifically designed to enhance cognitive performance, and although it is a synthesized chemical (with chemical name 2-oxo-1-pyrrolidine acetamide) it is generally regarded as being safe.
Answer
[default - edit me]

How to Make Your Own Smart Drugs
ts together to achieve some pretty cool results. Happy blendin'. ————————- Synthetic vs. Natural Nootropics There are numerous synthetic smart drugs that are utilized nowadays by people from all walks of life, from CEO's to soccer moms. For example, Piracetam was one of the first lab created compounds specifically designed to enhance cognitive performance, and although it is a synthesized chemical (with chemical name 2-oxo-1-pyrrolidine acetamide) it is generally regarded as being safe. The vast majority of people can take this supplement without needing to worry about suffering from any major side effects. However, there are also many notable natural and herbal nootro







Lion’s Mane – 500 mg, once per day
Gingko Biloba – 240 mg, once per day
Bacopa Monnieri – 100 mg, twice per day

How to Make Your Own Smart Drugs
nent that you would take individually is not typically a wise choice due to the way that each supplement blends together. For this stack, most folks use the following daily combination, and you can find most of this stuff in bulk on Amazon. Lion’s Mane – 500 mg, once per day Gingko Biloba – 240 mg, once per day Bacopa Monnieri – 100 mg, twice per day After 12 weeks, if you are not experiencing positive results, you may need to adjust the dosages in your stack. Start with small increments such as increasing each dose of the Bacopa




Flashcard 1486213483788

Question
Lion’s Mane – 500 mg, once per day
Gingko Biloba – 240 mg, once per day
[...] – 100 mg, twice per day
Answer
Bacopa Monnieri


Original toplevel document

How to Make Your Own Smart Drugs
nent that you would take individually is not typically a wise choice due to the way that each supplement blends together. For this stack, most folks use the following daily combination, and you can find most of this stuff in bulk on Amazon. Lion’s Mane – 500 mg, once per day Gingko Biloba – 240 mg, once per day Bacopa Monnieri – 100 mg, twice per day After 12 weeks, if you are not experiencing positive results, you may need to adjust the dosages in your stack. Start with small increments such as increasing each dose of the Bacopa







Flashcard 1486214532364

Question
Amazon.
Answer
Amazon.

How to Make Your Own Smart Drugs
each component that you would take individually is not typically a wise choice due to the way that each supplement blends together. For this stack, most folks use the following daily combination, and you can find most of this stuff in bulk on Amazon. Lion’s Mane – 500 mg, once per day Gingko Biloba – 240 mg, once per day Bacopa Monnieri – 100 mg, twice per day After 12 weeks, if you are not experiencing positive results, you may







Flashcard 1486217415948

Tags
#deeplearning #neuralnetworks
Question
Other measures such as [...] normalize the contribution of each variable in order to measure only how much the variables are related, rather than also being affected by the scale of the separate variables
Answer
correlation
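
A minimal sketch in Python (assuming NumPy; the data are illustrative) of the point: rescaling a variable changes its covariance with another variable but leaves the correlation unchanged.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100000)
y = x + rng.normal(size=100000)

for scale in (1.0, 100.0):
    # covariance grows with the scale; correlation stays ~ 0.707
    print(np.cov(x, scale * y)[0, 1], np.corrcoef(x, scale * y)[0, 1])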








Flashcard 1486218988812

Tags
#deeplearning #neuralnetworks
Question
Other measures such as correlation [...] in order to measure only how much the variables are related, rather than also being affected by the scale of the separate variables
Answer
normalize the contribution of each variable








Flashcard 1486220561676

Tags
#deeplearning #neuralnetworks
Question
Other measures such as correlation normalize the contribution of each variable in order to [...]
Answer
measure only how much the variables are related, rather than also being affected by the scale of the separate variables








regional economic forum




APEC consists of 21 member-economies




Flashcard 1486269844748

Tags
#has-images


Question
Answer

A "bus" architecture, the simplest structure...









Flashcard 1486273252620

Tags
#has-images


Question

Answer
Distributed memory, where each process has access to the memory..








Flashcard 1486276660492

Tags
#has-images


Question

Answer
A switch is used as a selector. If a process wants to access memory, it must request it from the switch.
It is then the switch's job to route the request to the requested memory. This is better than the distributed-memory structure, but the switching time must be taken into account.








Flashcard 1486280068364

Tags
#has-images


Question

Answer
Switch-based access, but with a direct connection to its own RAM.








Flashcard 1486283476236

Tags
#has-images


Question


These models are usable only provided that [...]

Answer
the switching time is low








Flashcard 1486285311244

Tags
#has-images
Question
Crossbar switch
Answer








Flashcard 1486291078412

Tags
#has-images


Question
Omega network
Answer









Flashcard 1486292389132

Tags
#has-images
Question
CrossPoint switch


Answer



Two switches may not be closed on the same row or column (like rooks in chess).







Flashcard 1486303137036

Tags
#has-images


Question
Mesh
Answer









Flashcard 1486306544908

Tags
#has-images


Question
Hypercube
Answer









Flashcard 1486383090956

Tags
#exam-fails #fra-introduction
Question
If a company's operating cycle lasts 2 years, which timeframe should be used to categorize current assets?
Answer
C. Two years







Flashcard 1486384925964

Tags
#exam-fails #fra-introduction
Question
At the beginning of the year, Chock Company had $50,000 in assets and $20,000 in liabilities. At the end of the year, the company had $80,000 in assets and $40,000 in liabilities. If no investments were made in the business and dividends of $2,000 were declared and paid during the year, net income for the year must have been ______.

A. $8,000

B. $10,000

C. $12,000
Answer
Correct Answer: C

Net income for the year was $12,000. Net income or loss (revenues less expenses) and dividends affect stockholders' equity. Stockholders' equity would have been $30,000 at the beginning of the year ($50,000 - $20,000) and $40,000 at the end of the year ($80,000 - $40,000). The change in stockholders' equity over the year was a $10,000 increase. If $2,000 was declared and paid in dividends, net income must have been $12,000 ($30,000 + $12,000 - $2,000 = $40,000).







Flashcard 1486390955276

Tags
#deeplearning #neuralnetworks
Question
The [...] matrix of a random vector x ∈ R^n is an n × n matrix, such that Cov(x)_{i,j} = Cov(x_i, x_j). (3.14) The diagonal elements of the covariance give the variance: Cov(x_i, x_i) = Var(x_i)
Answer
covariance








Flashcard 1486393314572

Tags
#deeplearning #neuralnetworks
Question
The covariance matrix of a random vector x ∈ R^n is an n × n matrix, such that [...]. (3.14) The diagonal elements of the covariance give the variance: Cov(x_i, x_i) = Var(x_i)
Answer
Cov(x)_{i,j} = Cov(x_i, x_j)








Flashcard 1486394887436

Tags
#deeplearning #neuralnetworks
Question
The covariance matrix of a random vector x ∈ R^n is an n × n matrix, such that Cov(x)_{i,j} = Cov(x_i, x_j). (3.14) The diagonal elements of the covariance give [...]
Answer
the variance: Cov(x_i, x_i) = Var(x_i)








Flashcard 1486396460300

Tags
#deeplearning #neuralnetworks
Question
The Bernoulli distribution is [definition]. It is controlled by a single parameter φ ∈ [0, 1], which gives the probability of the random variable being equal to 1. It has the following properties: P(x = 1) = φ (3.16); P(x = 0) = 1 − φ (3.17); P(x = x) = φ^x (1 − φ)^(1−x) (3.18); E_x[x] = φ (3.19); Var(x) = φ(1 − φ)
Answer
a distribution over a single binary random variable








Flashcard 1486398033164

Tags
#deeplearning #neuralnetworks
Question
The Bernoulli distribution is a distribution over a single binary random variable. It is controlled by a single parameter φ ∈ [0, 1], which gives the [...] It has the following properties: P(x = 1) = φ (3.16); P(x = 0) = 1 − φ (3.17); P(x = x) = φ^x (1 − φ)^(1−x) (3.18); E_x[x] = φ (3.19); Var(x) = φ(1 − φ)
Answer
probability of the random variable being equal to 1.








Flashcard 1486400392460

Tags
#deeplearning #neuralnetworks
Question
What is the expectation of the Bernoulli distribution?
Answer
E_x[x] = φ








Flashcard 1486402227468

Tags
#deeplearning #neuralnetworks
Question
What is the variance of the Bernoulli distribution?
Answer
Var(x) = φ(1 − φ)








Flashcard 1486404062476

Tags
#deeplearning #neuralnetworks
Question
In some cases, we wish to [...] This can be accomplished by defining a PDF using the Dirac delta function: p(x) = δ(x − µ). (3.27) The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1.
Answer
specify that all of the mass in a probability distribution clusters around a single point.








Flashcard 1486405635340

Tags
#deeplearning #neuralnetworks
Question
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the [...] function: p(x) = δ(x − µ)
Answer
Dirac delta








Flashcard 1486407208204

Tags
#deeplearning #neuralnetworks
Question
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function: p(x) = δ(x − µ). (3.27) The Dirac delta function is defined such that [...]
Answer
it is zero-valued everywhere except 0, yet integrates to 1.








Flashcard 1486409567500

Tags
#deeplearning #neuralnetworks
Question
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function: [...] The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1.
Answer
p(x) = δ(x − µ)








Flashcard 1486412713228

Tags
#deeplearning #neuralnetworks
Question
mathematical object called a [...] that is defined in terms of its properties when integrated
Answer
generalized function








Flashcard 1486414286092

Tags
#deeplearning #neuralnetworks
Question
Another important perspective on the [...] distribution is that it is the probability density that maximizes the likelihood of the training data
Answer
empirical








Flashcard 1486415858956

Tags
#deeplearning #neuralnetworks
Question
Another important perspective on the empirical distribution is that it is the probability density that [...]
Answer
maximizes the likelihood of the training data








Flashcard 1486417431820

Tags
#deeplearning #neuralnetworks
Question
A [...] variable is a random variable that we cannot observe directly.
Answer
latent








Flashcard 1486419004684

Tags
#deeplearning #neuralnetworks
Question
A latent variable is a [...]
Answer
random variable that we cannot observe directly.








Flashcard 1486421363980

Tags
#deeplearning #neuralnetworks
Question
Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models. One of these functions is the logistic sigmoid: [...]
Answer
σ(x) = 1/(1 + exp(−x))








Flashcard 1486422936844

Tags
#deeplearning #neuralnetworks
Question
Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models. One of these functions is the [...]: σ(x) = 1/(1 + exp(−x))
Answer
logistic sigmoid








Flashcard 1486424509708

Tags
#deeplearning #neuralnetworks
Question
[...] matrix, meaning it can control the variance separately along each axis-aligned direction.
Answer
diagonal covariance








Flashcard 1486426082572

Tags
#deeplearning #neuralnetworks
Question
diagonal covariance matrix, meaning it can [...]
Answer
control the variance separately along each axis-aligned direction.








Flashcard 1486428441868

Tags
#deeplearning #neuralnetworks
Question
Another commonly encountered function is the softplus function (Dugas et al., 2001): [...] The softplus function can be useful for producing the β or σ parameter of a normal distribution because its range is (0, ∞)
Answer
ζ(x) = log(1 + exp(x))








Flashcard 1486430014732

Tags
#deeplearning #neuralnetworks
Question
Another commonly encountered function is the [...] (Dugas et al., 2001): ζ(x) = log(1 + exp(x))
Answer
softplus function








Flashcard 1486431587596

Tags
#deeplearning #neuralnetworks
Question
Another commonly encountered function is the softplus function (Dugas et al., 2001): ζ(x) = log(1 + exp(x)) The softplus function can be useful for producing the [...] parameter of a normal distribution because its range is (0, ∞)
Answer
β or σ








Flashcard 1486433160460

Tags
#deeplearning #neuralnetworks
Question
Another commonly encountered function is the softplus function (Dugas et al., 2001): ζ(x) = log(1 + exp(x)) The softplus function can be useful for producing the β or σ parameter of a normal distribution because [...]
Answer
its range is (0 , ∞ )








Flashcard 1486435519756

Tags
#deeplearning #neuralnetworks
Question
Just as x can be recovered from its positive part and negative part via the identity [...], it is also possible to recover x using the same relationship between ζ(x) and ζ(−x)
Answer
x^+ − x^− = x








Flashcard 1486437092620

Tags
#deeplearning #neuralnetworks
Question
Just as x can be recovered from its positive part and negative part via the identity x^+ − x^− = x, it is also possible to recover x using the same relationship between [...]
Answer
ζ(x) and ζ(−x)








Flashcard 1486439451916

Tags
#deeplearning #neuralnetworks
Question
Suppose we have two random variables, x and y, such that y = g(x), where g is an invertible, continuous, differentiable transformation. One might expect that [...] This is actually not the case.
Answer
p_y(y) = p_x(g⁻¹(y)).








Flashcard 1486441024780

Tags
#deeplearning #neuralnetworks
Question
Suppose we have two random variables, x and y, such that y = g(x), where g is an invertible, continuous, differentiable transformation. One might expect that p_y(y) = p_x(g⁻¹(y)). This is [...]
Answer
actually not the case.








Flashcard 1486442597644

Tags
#deeplearning #neuralnetworks
Question
The basic intuition behind information theory is [...]
Answer
that learning that an unlikely event has occurred is more informative than learning that a likely event has occurred








Flashcard 1486444956940

Tags
#deeplearning #neuralnetworks
Question
In order to satisfy all three of these properties, we define the [...] of an event x = x to be I(x) = −log P(x)
Answer
self-information








Flashcard 1486446529804

Tags
#deeplearning #neuralnetworks
Question
In order to satisfy all three of these properties, we define the self-information of an event x = x to be [...]
Answer
I(x) = −log P(x)
