# on 07-Mar-2017 (Tue)

#### Flashcard 1429070548236

Tags
#sister-miriam-joseph #trivium
Question
Valuable goods are those which are not only desired for their own sake but which [...] For instance, [...] are valuable goods.
increase the intrinsic worth of their possessor.

knowledge, virtue, and health.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Valuable goods are those which are not only desired for their own sake but which increase the intrinsic worth of their possessor. For instance, knowledge, virtue, and health are valuable goods.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1442931412236

Tags
#estructura-interna-de-las-palabras #formantes-morfológicos #gramatica-española #la #morfología #tulio
Question

To establish the internal structure of words, morphology is concerned with:

a. identifying [...]

b. determining the possible variations these may present;

d. recognizing the organization of words.

the morphological formants;

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
To establish the internal structure of words, morphology is concerned with: a. identifying the morphological formants; b. determining the possible variations these may present; c. describing the processes involved; d. recognizing the organization of words

#### Original toplevel document

La estructura interna de la palabra

#### Annotation 1481969372428

 #deeplearning #neuralnetworks The covariance matrix of a random vector x ∈ Rⁿ is an n × n matrix, such that Cov(x)_{i,j} = Cov(x_i, x_j). The diagonal elements of the covariance give the variance: Cov(x_i, x_i) = Var(x_i)

#### pdf

cannot see any pdfs
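
A quick numerical sketch of the definition above (my illustration, not the book's; assumes NumPy): np.cov estimates the covariance matrix from samples, and its diagonal matches the per-component variances.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100_000, 3))   # samples of a random vector in R^3
x[:, 1] += 0.8 * x[:, 0]            # correlate components 0 and 1

cov = np.cov(x, rowvar=False)       # the n x n matrix Cov(x)_{i,j} = Cov(x_i, x_j)
# Diagonal elements give the variance: Cov(x_i, x_i) = Var(x_i)
assert np.allclose(np.diag(cov), x.var(axis=0, ddof=1))
print(cov.round(2))
```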

#### Annotation 1481970945292

 #deeplearning #neuralnetworks The Bernoulli distribution is a distribution over a single binary random variable. It is controlled by a single parameter φ ∈ [0, 1], which gives the probability of the random variable being equal to 1. It has the following properties: P(x = 1) = φ; P(x = 0) = 1 − φ; P(x = x) = φ^x (1 − φ)^(1 − x); E[x] = φ; Var(x) = φ(1 − φ)

#### pdf

cannot see any pdfs
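
A small simulation check of those properties (my sketch; φ = 0.3 is an arbitrary illustrative value):

```python
import numpy as np

phi = 0.3
rng = np.random.default_rng(0)
x = rng.binomial(n=1, p=phi, size=1_000_000)  # Bernoulli = binomial with n = 1

print(x.mean())   # ~0.3,  matching E[x] = phi
print(x.var())    # ~0.21, matching Var(x) = phi * (1 - phi)
# pmf: P(x = k) = phi^k * (1 - phi)^(1 - k) for k in {0, 1}
for k in (0, 1):
    print(k, phi**k * (1 - phi)**(1 - k), (x == k).mean())
```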

#### Annotation 1481990343948

 #deeplearning #neuralnetworks In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function, δ(x): p(x) = δ(x − µ). The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1.

#### pdf

cannot see any pdfs
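
One way to picture the Dirac delta is as the limit of densities that grow ever narrower while still integrating to 1. A sketch of that limit (my construction, using a shrinking normal PDF from SciPy; µ = 2.0 and the widths are illustrative):

```python
import numpy as np
from scipy.stats import norm

mu = 2.0
xs = np.linspace(-10, 10, 200_001)
for width in (1.0, 0.1, 0.01):
    p = norm(loc=mu, scale=width).pdf(xs)   # narrows toward delta(x - mu)
    print(width, np.trapz(p, xs), p.max())  # area stays ~1; peak at mu blows up
```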

#### Annotation 1482008431884

 #deeplearning #neuralnetworks Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models. One of these functions is the logistic sigmoid: σ(x) = 1/(1 + exp(−x))

#### pdf

cannot see any pdfs
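
The formula transcribed directly into NumPy (my sketch, not a numerically hardened implementation):

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: maps the real line into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))                    # 0.5
print(sigmoid(np.array([-5.0, 5.0])))  # tails approach 0 and 1
```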

#### Annotation 1482018131212

 #deeplearning #neuralnetworks Another commonly encountered function is the softplus function (Dugas et al., 2001): ζ(x) = log(1 + exp(x)). The softplus function can be useful for producing the β or σ parameter of a normal distribution because its range is (0, ∞)

#### pdf

cannot see any pdfs
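
A direct transcription (my sketch; the textbook form, not an overflow-safe one). Because the output is always strictly positive, it is a convenient way to turn an unconstrained parameter into a valid standard deviation:

```python
import numpy as np

def softplus(x):
    """zeta(x) = log(1 + exp(x)); smooth, with range (0, inf)."""
    return np.log1p(np.exp(x))

raw = np.array([-3.0, 0.0, 3.0])  # unconstrained parameters
sigma = softplus(raw)             # valid standard deviations, all > 0
print(sigma)
```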

#### Annotation 1482025995532

 #deeplearning #neuralnetworks Just as x can be recovered from its positive part and negative part via the identity x⁺ − x⁻ = x, it is also possible to recover x using the same relationship between ζ(x) and ζ(−x)

#### pdf

cannot see any pdfs
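
A numerical check of that claim (my sketch, reusing the softplus definition from the previous entry): ζ(x) − ζ(−x) = x, mirroring x⁺ − x⁻ = x.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

x = np.linspace(-4.0, 4.0, 9)
# softplus(x) plays the role of x+, softplus(-x) the role of x-
assert np.allclose(softplus(x) - softplus(-x), x)
```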

#### Annotation 1482032549132

 #deeplearning #neuralnetworks Suppose we have two random variables, x and y, such that y = g(x), where g is an invertible, continuous, differentiable transformation. One might expect that p_y(y) = p_x(g⁻¹(y)). This is actually not the case.

#### pdf

cannot see any pdfs
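
A concrete counterexample (my construction, not the book's): take x uniform on [0, 1] and y = g(x) = x/2. The naive guess p_y(y) = p_x(g⁻¹(y)) would give density 1, but the true density on [0, 1/2] is 2; the missing factor is the Jacobian |d g⁻¹(y)/dy|.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1_000_000)
y = x / 2                                  # g(x) = x/2, so g^{-1}(y) = 2y

hist, _ = np.histogram(y, bins=50, range=(0.0, 0.5), density=True)
print(hist.mean())   # ~2.0 = p_x(g^{-1}(y)) * |d g^{-1}/dy|, not 1.0
```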

#### Annotation 1482037267724

 #deeplearning #neuralnetworks In order to satisfy all three of these properties, we define the self-information of an event x = x to be I(x) = −log P(x)

#### pdf

cannot see any pdfs
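
The definition in code (my sketch; the natural log gives units of nats):

```python
import math

def self_information(p):
    """I(x) = -log P(x): the less likely the event, the more informative it is."""
    return -math.log(p)

print(self_information(0.5))    # ~0.693 nats (a fair coin flip)
print(self_information(0.01))   # ~4.6 nats (a rare event)
```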

#### Flashcard 1484231150860

Tags
#biochem #biology #cell
Question
Most proteins are composed of a series of protein domains, in which different regions of the polypeptide chain fold independently to form compact structures. Such multidomain proteins are believed to have originated from the [process], creating a new gene.
accidental joining of the DNA sequences that encode each domain

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Most proteins are composed of a series of protein domains, in which different regions of the polypeptide chain fold independently to form compact structures. Such multidomain proteins are believed to have originated from the accidental joining of the DNA sequences that encode each domain, creating a new gene.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1484315036940

Tags
#deeplearning #neuralnetworks
Question
A vector x and a vector y are [...] to each other if xᵀy = 0.
orthogonal

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A vector x and a vector y are orthogonal to each other if xᵀy = 0.

#### Original toplevel document (pdf)

cannot see any pdfs
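
A two-line check of the definition (the vectors are chosen for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0])
y = np.array([-2.0, 1.0, 5.0])
print(x @ y)   # 0.0, so x and y are orthogonal
```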

#### Flashcard 1484402593036

Tags
#bayes #programming #r #statistics
Question
In other words, the normalizer for the beta distribution is the [equation]
beta function $$B(a,b) = \int d\theta \space \theta^{a-1}(1-\theta)^{b-1}$$

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In other words, the normalizer for the beta distribution is the beta function $$B(a,b) = \int d\theta \space \theta^{a-1}(1-\theta)^{b-1}$$

#### Original toplevel document (pdf)

cannot see any pdfs
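
This can be verified numerically (my sketch): integrate θ^(a−1)(1−θ)^(b−1) over [0, 1] and compare against scipy.special.beta, which evaluates B(a, b) in closed form; a and b are illustrative values.

```python
from scipy.integrate import quad
from scipy.special import beta

a, b = 2.0, 3.0
integral, _ = quad(lambda t: t**(a - 1) * (1 - t)**(b - 1), 0.0, 1.0)
print(integral, beta(a, b))   # both ~0.08333 = 1/12
```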

#### Flashcard 1484429856012

Tags
#bayes #programming #r #statistics
Question
The standard deviation of the beta distribution is [...] . Notice that the standard deviation gets smaller when the concentration κ = a + b gets larger.
$$\sqrt{μ(1 − μ)/(a + b +1)}$$

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The standard deviation of the beta distribution is $$\sqrt{\mu(1-\mu)/(a+b+1)}$$. Notice that the standard deviation gets smaller when the concentration κ = a + b gets larger.

#### Original toplevel document (pdf)

cannot see any pdfs
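
Checking the formula against scipy.stats.beta (my sketch), with µ = a/(a + b); raising κ = a + b while holding µ fixed shrinks the standard deviation, as the card says:

```python
from math import sqrt
from scipy.stats import beta

for a, b in [(2.0, 6.0), (8.0, 24.0)]:   # same mu = 0.25, larger kappa = a + b
    mu = a / (a + b)
    formula = sqrt(mu * (1 - mu) / (a + b + 1))
    print(formula, beta(a, b).std())     # agree; smaller for larger kappa
```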

#### Annotation 1486204570892

 Bacopa Monnieri is an extract from the Brahmi plant. According to WebMD, Bacopa is used for a wide variety of purposes, including as a supplemental Alzheimer’s treatment and way to reduce anxiety.

How to Make Your Own Smart Drugs
tion. Research has also determined that this supplement can provide significant memory improvements in Alzheimer’s and vascular dementia patients. There are also generous amounts in the adaptogenic herb complex TianChi. 2. Bacopa Monnieri Bacopa Monnieri is an extract from the Brahmi plant. According to WebMD, Bacopa is used for a wide variety of purposes, including as a supplemental Alzheimer’s treatment and way to reduce anxiety. Evidence suggests that this natural nootropic is effective at improving memory and hand-eye coordination. There have also been some studies that link Bacopa with a reduction in anxiety,

#### Flashcard 1486207978764

Question
Bacopa Monnieri is an extract from the Brahmi plant. According to WebMD, Bacopa is used for a wide variety of purposes, including as a supplemental Alzheimer’s treatment and way to reduce anxiety.
[default - edit me]

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Bacopa Monnieri is an extract from the Brahmi plant. According to WebMD, Bacopa is used for a wide variety of purposes, including as a supplemental Alzheimer’s treatment and way to reduce anxiety.

#### Original toplevel document

How to Make Your Own Smart Drugs
tion. Research has also determined that this supplement can provide significant memory improvements in Alzheimer’s and vascular dementia patients. There are also generous amounts in the adaptogenic herb complex TianChi. 2. Bacopa Monnieri Bacopa Monnieri is an extract from the Brahmi plant. According to WebMD, Bacopa is used for a wide variety of purposes, including as a supplemental Alzheimer’s treatment and way to reduce anxiety. Evidence suggests that this natural nootropic is effective at improving memory and hand-eye coordination. There have also been some studies that link Bacopa with a reduction in anxiety,

#### Flashcard 1486210600204

Question
For example, Piracetam was one of the first lab-created compounds specifically designed to enhance cognitive performance, and although it is a synthesized chemical (with chemical name 2-oxo-1-pyrrolidine acetamide) it is generally regarded as being safe.
[default - edit me]

status measured difficulty not learned 37% [default] 0
How to Make Your Own Smart Drugs
ts together to achieve some pretty cool results. Happy blendin'. Synthetic vs. Natural Nootropics There are numerous synthetic smart drugs that are utilized nowadays by people from all walks of life, from CEO's to soccer moms. For example, Piracetam was one of the first lab-created compounds specifically designed to enhance cognitive performance, and although it is a synthesized chemical (with chemical name 2-oxo-1-pyrrolidine acetamide) it is generally regarded as being safe. The vast majority of people can take this supplement without needing to worry about suffering from any major side effects. However, there are also many notable natural and herbal nootro

#### Annotation 1486211910924

 Lion’s Mane – 500 mg, once per day; Ginkgo Biloba – 240 mg, once per day; Bacopa Monnieri – 100 mg, twice per day

How to Make Your Own Smart Drugs
nent that you would take individually is not typically a wise choice due to the way that each supplement blends together. For this stack, most folks use the following daily combination, and you can find most of this stuff in bulk on Amazon. Lion’s Mane – 500 mg, once per day Ginkgo Biloba – 240 mg, once per day Bacopa Monnieri – 100 mg, twice per day After 12 weeks, if you are not experiencing positive results, you may need to adjust the dosages in your stack. Start with small increments such as increasing each dose of the Bacopa

#### Flashcard 1486213483788

Question
Lion’s Mane – 500 mg, once per day; Ginkgo Biloba – 240 mg, once per day; [...] – 100 mg, twice per day
Bacopa Monnieri

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Lion’s Mane – 500 mg, once per day; Ginkgo Biloba – 240 mg, once per day; Bacopa Monnieri – 100 mg, twice per day

#### Original toplevel document

How to Make Your Own Smart Drugs
nent that you would take individually is not typically a wise choice due to the way that each supplement blends together. For this stack, most folks use the following daily combination, and you can find most of this stuff in bulk on Amazon. Lion’s Mane – 500 mg, once per day Ginkgo Biloba – 240 mg, once per day Bacopa Monnieri – 100 mg, twice per day After 12 weeks, if you are not experiencing positive results, you may need to adjust the dosages in your stack. Start with small increments such as increasing each dose of the Bacopa

#### Flashcard 1486214532364

Question
Amazon.
Amazon.

status measured difficulty not learned 37% [default] 0
How to Make Your Own Smart Drugs
each component that you would take individually is not typically a wise choice due to the way that each supplement blends together. For this stack, most folks use the following daily combination, and you can find most of this stuff in bulk on Amazon. Lion’s Mane – 500 mg, once per day Ginkgo Biloba – 240 mg, once per day Bacopa Monnieri – 100 mg, twice per day After 12 weeks, if you are not experiencing positive results, you may

#### Flashcard 1486217415948

Tags
#deeplearning #neuralnetworks
Question
Other measures such as [...] normalize the contribution of each variable in order to measure only how much the variables are related, rather than also being affected by the scale of the separate variables
correlation

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Other measures such as correlation normalize the contribution of each variable in order to measure only how much the variables are related, rather than also being affected by the scale of the separate variables

#### Original toplevel document (pdf)

cannot see any pdfs
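
A quick illustration of the difference (my sketch on synthetic data): rescaling one variable changes the covariance but leaves the correlation untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=100_000)
b = a + 0.5 * rng.normal(size=100_000)

for scale in (1.0, 100.0):
    print(np.cov(a, b * scale)[0, 1],       # covariance scales with the variable
          np.corrcoef(a, b * scale)[0, 1])  # correlation stays ~0.89
```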

#### Flashcard 1486218988812

Tags
#deeplearning #neuralnetworks
Question
Other measures such as correlation [...] in order to measure only how much the variables are related, rather than also being affected by the scale of the separate variables
normalize the contribution of each variable

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Other measures such as correlation normalize the contribution of each variable in order to measure only how much the variables are related, rather than also being affected by the scale of the separate variables

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486220561676

Tags
#deeplearning #neuralnetworks
Question
Other measures such as correlation normalize the contribution of each variable in order to [...]
measure only how much the variables are related, rather than also being affected by the scale of the separate variables

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Other measures such as correlation normalize the contribution of each variable in order to measure only how much the variables are related, rather than also being affected by the scale of the separate variables

#### Original toplevel document (pdf)

cannot see any pdfs

#### Annotation 1486242057484

 regional economic forum

#### pdf

cannot see any pdfs

#### Annotation 1486243630348

 APEC consists of 21 member-economies

#### pdf

cannot see any pdfs

#### Flashcard 1486269844748

Tags
#has-images

Question

"Bus" architecture, the simplest structure...

status measured difficulty not learned 37% [default] 0

#### pdf

cannot see any pdfs

#### Flashcard 1486273252620

Tags
#has-images

Question

Distributed memory, where each process has access to the memory...

status measured difficulty not learned 37% [default] 0

#### pdf

cannot see any pdfs

#### Flashcard 1486276660492

Tags
#has-images

Question

A switch is used as a selector: if a process wants to access memory, it must request it from the switch.
It is then the switch's job to route the request to the requested memory. This is better than the distributed-memory structure, but the switching time must be taken into account.

status measured difficulty not learned 37% [default] 0

#### pdf

cannot see any pdfs

#### Flashcard 1486280068364

Tags
#has-images

Question

Switch-based access, but with a direct connection to its own RAM.

status measured difficulty not learned 37% [default] 0

#### pdf

cannot see any pdfs

#### Flashcard 1486283476236

Tags
#has-images

Question

These models are usable only on the condition that [...]

the switching time is low

status measured difficulty not learned 37% [default] 0

#### pdf

cannot see any pdfs

#### Flashcard 1486285311244

Tags
#has-images
Question
Crossbar switch

status measured difficulty not learned 37% [default] 0

#### pdf

cannot see any pdfs

#### Flashcard 1486291078412

Tags
#has-images

Question
Omega network

status measured difficulty not learned 37% [default] 0

#### pdf

cannot see any pdfs

#### Flashcard 1486292389132

Tags
#has-images
Question
CrossPoint switch

Two switches cannot be closed on the same row or column (like rooks in chess)

status measured difficulty not learned 37% [default] 0

#### Flashcard 1486303137036

Tags
#has-images

Question
Mesh

status measured difficulty not learned 37% [default] 0

#### pdf

cannot see any pdfs

#### Flashcard 1486306544908

Tags
#has-images

Question
Hypercube

status measured difficulty not learned 37% [default] 0

#### pdf

cannot see any pdfs

#### Flashcard 1486383090956

Tags
#exam-fails #fra-introduction
Question
If a company's operating cycle lasts 2 years, which timeframe should be used to categorize current assets?
C. Two years

status measured difficulty not learned 37% [default] 0

#### Flashcard 1486384925964

Tags
#exam-fails #fra-introduction
Question
At the beginning of the year, Chock Company had $50,000 in assets and $20,000 in liabilities. At the end of the year, the company had $80,000 in assets and $40,000 in liabilities. If, during the year, no investments were made in the business and dividends of $2,000 were declared and paid during the year, net income for the year must have been ______.

A. $8,000

B. $10,000

C. $12,000

Net income for the year was $12,000. Net income or loss (revenues less expenses) and dividends affect stockholders' equity. Stockholders' equity would have been $30,000 at the beginning of the year ($50,000 − $20,000) and $40,000 at the end of the year ($80,000 − $40,000). The change in stockholders' equity over the year was a $10,000 increase. If $2,000 was declared and paid in dividends, net income must have been $12,000 ($30,000 + $12,000 − $2,000 = $40,000).

status measured difficulty not learned 37% [default] 0
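
The arithmetic behind that answer, as a sketch: equity = assets − liabilities, and ending equity = beginning equity + net income − dividends.

```python
begin_equity = 50_000 - 20_000   # $30,000
end_equity = 80_000 - 40_000     # $40,000
dividends = 2_000

net_income = end_equity - begin_equity + dividends
print(net_income)                # 12,000 -> choice C
```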

#### Flashcard 1486390955276

Tags
#deeplearning #neuralnetworks
Question
The [...] matrix of a random vector x ∈ Rⁿ is an n × n matrix, such that Cov(x)_{i,j} = Cov(x_i, x_j). The diagonal elements of the covariance give the variance: Cov(x_i, x_i) = Var(x_i)
covariance

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The covariance matrix of a random vector x ∈ Rⁿ is an n × n matrix, such that Cov(x)_{i,j} = Cov(x_i, x_j). The diagonal elements of the covariance give the variance: Cov(x_i, x_i) = Var(x_i)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486393314572

Tags
#deeplearning #neuralnetworks
Question
The covariance matrix of a random vector x ∈ Rⁿ is an n × n matrix, such that [...] The diagonal elements of the covariance give the variance: Cov(x_i, x_i) = Var(x_i)
Cov(x)_{i,j} = Cov(x_i, x_j)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The covariance matrix of a random vector x ∈ Rⁿ is an n × n matrix, such that Cov(x)_{i,j} = Cov(x_i, x_j). The diagonal elements of the covariance give the variance: Cov(x_i, x_i) = Var(x_i)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486394887436

Tags
#deeplearning #neuralnetworks
Question
The covariance matrix of a random vector x ∈ Rⁿ is an n × n matrix, such that Cov(x)_{i,j} = Cov(x_i, x_j). The diagonal elements of the covariance give [...]
the variance: Cov(x_i, x_i) = Var(x_i)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The covariance matrix of a random vector x ∈ Rⁿ is an n × n matrix, such that Cov(x)_{i,j} = Cov(x_i, x_j). The diagonal elements of the covariance give the variance: Cov(x_i, x_i) = Var(x_i)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486396460300

Tags
#deeplearning #neuralnetworks
Question
The Bernoulli distribution is [definition]. It is controlled by a single parameter φ ∈ [0, 1], which gives the probability of the random variable being equal to 1. It has the following properties: P(x = 1) = φ; P(x = 0) = 1 − φ; P(x = x) = φ^x (1 − φ)^(1 − x); E[x] = φ; Var(x) = φ(1 − φ)
a distribution over a single binary random variable

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The Bernoulli distribution is a distribution over a single binary random variable. It is controlled by a single parameter φ ∈ [0, 1], which gives the probability of the random variable being equal to 1. It has the following properties: P(x = 1) = φ

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486398033164

Tags
#deeplearning #neuralnetworks
Question
The Bernoulli distribution is a distribution over a single binary random variable. It is controlled by a single parameter φ ∈ [0, 1], which gives the [...] It has the following properties: P(x = 1) = φ; P(x = 0) = 1 − φ; P(x = x) = φ^x (1 − φ)^(1 − x); E[x] = φ; Var(x) = φ(1 − φ)
probability of the random variable being equal to 1.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The Bernoulli distribution is a distribution over a single binary random variable. It is controlled by a single parameter φ ∈ [0, 1], which gives the probability of the random variable being equal to 1. It has the following properties: P(x = 1) = φ; P(x = 0) = 1 − φ; P(x = x) = φ^x (1 − φ)^(1 − x); E[x] = φ; Var(x) = φ(1 − φ)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486400392460

Tags
#deeplearning #neuralnetworks
Question
What is the expectation of the Bernoulli distribution?
E[x] = φ

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
is controlled by a single parameter φ ∈ [0, 1], which gives the probability of the random variable being equal to 1. It has the following properties: P(x = 1) = φ; P(x = 0) = 1 − φ; P(x = x) = φ^x (1 − φ)^(1 − x); E[x] = φ; Var(x) = φ(1 − φ)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486402227468

Tags
#deeplearning #neuralnetworks
Question
What is the variance of the Bernoulli distribution?
Var(x) = φ(1 − φ)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
ingle parameter φ ∈ [0, 1], which gives the probability of the random variable being equal to 1. It has the following properties: P(x = 1) = φ; P(x = 0) = 1 − φ; P(x = x) = φ^x (1 − φ)^(1 − x); E[x] = φ; Var(x) = φ(1 − φ)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486404062476

Tags
#deeplearning #neuralnetworks
Question
In some cases, we wish to [...] This can be accomplished by defining a PDF using the Dirac delta function, δ(x): p(x) = δ(x − µ). The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1.
specify that all of the mass in a probability distribution clusters around a single point.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function, δ(x): p(x) = δ(x − µ). The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486405635340

Tags
#deeplearning #neuralnetworks
Question
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the [...] function: p(x) = δ(x − µ)
Dirac delta

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function, δ(x): p(x) = δ(x − µ). The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486407208204

Tags
#deeplearning #neuralnetworks
Question
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function, δ(x): p(x) = δ(x − µ). The Dirac delta function is defined such that [...]
it is zero-valued everywhere except 0, yet integrates to 1.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
ll of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function, δ(x): p(x) = δ(x − µ). The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486409567500

Tags
#deeplearning #neuralnetworks
Question
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function: [...] The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1.
p(x) = δ(x − µ)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function: p(x) = δ(x − µ). The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486412713228

Tags
#deeplearning #neuralnetworks
Question
mathematical object called a [...] that is defined in terms of its properties when integrated
generalized function

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
mathematical object called a generalized function that is defined in terms of its properties when integrated

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486414286092

Tags
#deeplearning #neuralnetworks
Question
Another important perspective on the [...] distribution is that it is the probability density that maximizes the likelihood of the training data
empirical

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Another important perspective on the empirical distribution is that it is the probability density that maximizes the likelihood of the training data

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486415858956

Tags
#deeplearning #neuralnetworks
Question
Another important perspective on the empirical distribution is that it is the probability density that [...]
maximizes the likelihood of the training data

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Another important perspective on the empirical distribution is that it is the probability density that maximizes the likelihood of the training data

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486417431820

Tags
#deeplearning #neuralnetworks
Question
A [...] variable is a random variable that we cannot observe directly.
latent

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A latent variable is a random variable that we cannot observe directly.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486419004684

Tags
#deeplearning #neuralnetworks
Question
A latent variable is a [...]
random variable that we cannot observe directly.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
A latent variable is a random variable that we cannot observe directly.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486421363980

Tags
#deeplearning #neuralnetworks
Question
Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models. One of these functions is the logistic sigmoid: [...]
σ(x) = 1/(1 + exp(−x))

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models. One of these functions is the logistic sigmoid: σ(x) = 1/(1 + exp(−x))

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486422936844

Tags
#deeplearning #neuralnetworks
Question
Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models. One of these functions is the [...]: σ(x) = 1/(1 + exp(−x))
logistic sigmoid

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models. One of these functions is the logistic sigmoid: σ(x) = 1/(1 + exp(−x))

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486424509708

Tags
#deeplearning #neuralnetworks
Question
[...] matrix, meaning it can control the variance separately along each axis-aligned direction.
diagonal covariance

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
diagonal covariance matrix, meaning it can control the variance separately along each axis-aligned direction.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486426082572

Tags
#deeplearning #neuralnetworks
Question
diagonal covariance matrix, meaning it can [...]
control the variance separately along each axis-aligned direction.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
diagonal covariance matrix, meaning it can control the variance separately along each axis-aligned direction.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486428441868

Tags
#deeplearning #neuralnetworks
Question
Another commonly encountered function is the softplus function (Dugas et al., 2001): [...] The softplus function can be useful for producing the β or σ parameter of a normal distribution because its range is (0, ∞)
ζ(x) = log(1 + exp(x))

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Another commonly encountered function is the softplus function (Dugas et al., 2001): ζ(x) = log(1 + exp(x)). The softplus function can be useful for producing the β or σ parameter of a normal distribution because its range is (0, ∞)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486430014732

Tags
#deeplearning #neuralnetworks
Question
Another commonly encountered function is the [...] (Dugas et al., 2001): ζ(x) = log(1 + exp(x))
softplus function

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Another commonly encountered function is the softplus function (Dugas et al., 2001): ζ(x) = log(1 + exp(x)). The softplus function can be useful for producing the β or σ parameter of a normal distribution because its range is (0, ∞)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486431587596

Tags
#deeplearning #neuralnetworks
Question
Another commonly encountered function is the softplus function (Dugas et al., 2001): ζ(x) = log(1 + exp(x)). The softplus function can be useful for producing the [...] parameter of a normal distribution because its range is (0, ∞)
β or σ

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Another commonly encountered function is the softplus function (Dugas et al., 2001): ζ(x) = log(1 + exp(x)). The softplus function can be useful for producing the β or σ parameter of a normal distribution because its range is (0, ∞)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486433160460

Tags
#deeplearning #neuralnetworks
Question
Another commonly encountered function is the softplus function (Dugas et al., 2001): ζ(x) = log(1 + exp(x)). The softplus function can be useful for producing the β or σ parameter of a normal distribution because [...]
its range is (0, ∞)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Another commonly encountered function is the softplus function (Dugas et al., 2001): ζ(x) = log(1 + exp(x)). The softplus function can be useful for producing the β or σ parameter of a normal distribution because its range is (0, ∞)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486435519756

Tags
#deeplearning #neuralnetworks
Question
Just as x can be recovered from its positive part and negative part via the identity [...], it is also possible to recover x using the same relationship between ζ(x) and ζ(−x)
x⁺ − x⁻ = x

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Just as x can be recovered from its positive part and negative part via the identity x⁺ − x⁻ = x, it is also possible to recover x using the same relationship between ζ(x) and ζ(−x)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486437092620

Tags
#deeplearning #neuralnetworks
Question
Just as x can be recovered from its positive part and negative part via the identity x⁺ − x⁻ = x, it is also possible to recover x using the same relationship between [...]
ζ(x) and ζ(−x)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Just as x can be recovered from its positive part and negative part via the identity x⁺ − x⁻ = x, it is also possible to recover x using the same relationship between ζ(x) and ζ(−x)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486439451916

Tags
#deeplearning #neuralnetworks
Question
Suppose we have two random variables, x and y, such that y = g(x), where g is an invertible, continuous, differentiable transformation. One might expect that [...] This is actually not the case.
p_y(y) = p_x(g⁻¹(y))

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Suppose we have two random variables, x and y, such that y = g(x), where g is an invertible, continuous, differentiable transformation. One might expect that p_y(y) = p_x(g⁻¹(y)). This is actually not the case.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486441024780

Tags
#deeplearning #neuralnetworks
Question
Suppose we have two random variables, x and y, such that y = g(x), where g is an invertible, continuous, differentiable transformation. One might expect that p_y(y) = p_x(g⁻¹(y)). This is [...]
actually not the case.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
Suppose we have two random variables, x and y, such that y = g(x), where g is an invertible, continuous, differentiable transformation. One might expect that p_y(y) = p_x(g⁻¹(y)). This is actually not the case.

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486442597644

Tags
#deeplearning #neuralnetworks
Question
The basic intuition behind information theory is [...]
that learning that an unlikely event has occurred is more informative than learning that a likely event has occurred

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The basic intuition behind information theory is that learning that an unlikely event has occurred is more informative than learning that a likely event has occurred

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486444956940

Tags
#deeplearning #neuralnetworks
Question
In order to satisfy all three of these properties, we define the [...] of an event x = x to be I(x) = −log P(x)
self-information

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In order to satisfy all three of these properties, we define the self-information of an event x = x to be I(x) = −log P(x)

#### Original toplevel document (pdf)

cannot see any pdfs

#### Flashcard 1486446529804

Tags
#deeplearning #neuralnetworks
Question
In order to satisfy all three of these properties, we define the self-information of an event x = x to be [...]