# on 30-Jan-2018 (Tue)

#### Flashcard 1732720069900

Question
the domain of definition of a function is [...] for which the function is defined.
the set of "input" values

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In mathematics, and more specifically in naive set theory, the domain of definition (or simply the domain) of a function is the set of "input" or argument values for which the function is defined.

#### Original toplevel document

Domain of a function - Wikipedia
[Figure: illustration showing f, a function from the pink domain X to the blue codomain Y; the yellow oval inside Y is the image of f. Both the image and the codomain are sometimes called the range of f.] In mathematics, and more specifically in naive set theory, the domain of definition (or simply the domain) of a function is the set of "input" or argument values for which the function is defined. That is, the function provides an "output" or value for each member of the domain. Conversely, the set of values the function takes on as output is termed the image of the function.
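The definition above can be illustrated with a tiny sketch (the example function is my choice, not from the article): over the reals, the square-root function is defined only for non-negative inputs, so its domain of definition is {x : x ≥ 0}.

```python
import math

def in_domain_sqrt(x):
    """True iff x lies in the domain of definition of the real square root."""
    try:
        math.sqrt(x)          # defined only for x >= 0 over the reals
        return True
    except ValueError:        # math.sqrt raises ValueError for negative inputs
        return False

# 4.0 is in the domain; -1.0 is not, so sqrt provides no "output" for it.
ok, bad = in_domain_sqrt(4.0), in_domain_sqrt(-1.0)
```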

#### Flashcard 1739356507404

Tags
#linear-algebra #matrix-decomposition
Question
The converse of the Cholesky decomposition holds trivially: if A can be written as LL* for some invertible L, then A is [...]
Hermitian and positive definite.

(L may be lower triangular or otherwise.)

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite.

#### Original toplevel document

Cholesky decomposition - Wikipedia
Statement: The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form A = LL*, where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition. If the matrix A is Hermitian and positive semi-definite, then it still has a decomposition of the form A = LL* if the diagonal entries of L are allowed to be zero. When A has real entries, L has real entries as well, and the factorization may be written A = LL^T. The Cholesky decomposition is unique when A is positive definite; there is only one lower triangular matrix L with strictly positive diagonal entries such that A = LL*. However, the decomposition need not be unique when A is positive semidefinite. The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite. LDL decomposition: A closely related variant of the classical Cholesky decomposition is the LDL decomposition.
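A numerical sanity check of the converse, sketched in pure Python (the matrix `L0` is an arbitrary example I chose): starting from an invertible lower-triangular L0, the product A = L0 L0^T comes out symmetric, and a textbook Cholesky factorization of A succeeds, confirming positive definiteness; by uniqueness it recovers L0.

```python
import math

def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def cholesky(A):
    """Return lower-triangular L with A = L L^T; raise if A is not positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0:
                    raise ValueError("matrix is not positive definite")
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

L0 = [[2.0, 0.0, 0.0],
      [1.0, 3.0, 0.0],
      [-1.0, 0.5, 1.5]]          # invertible: nonzero diagonal entries
A = matmul(L0, transpose(L0))    # A = L0 L0^T

# A is symmetric (real Hermitian), and Cholesky succeeds, so A is positive definite.
assert all(abs(A[i][j] - A[j][i]) < 1e-12 for i in range(3) for j in range(3))
L1 = cholesky(A)
```

The `cholesky` routine raises `ValueError` as soon as a non-positive pivot appears, which is the standard way an attempted Cholesky factorization doubles as a positive-definiteness test.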

#### Flashcard 1739928505612

Tags
#forward-backward-algorithm #hmm
Question

In the first pass, the forward–backward algorithm computes [...].

a set of forward probabilities: the distribution over hidden states given the observations up to that point.

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
The forward–backward algorithm: In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all k ∈ {1, …, t}, the probability of ending up in any particular state given the first k observations in the sequence, i.e. P(X_k | o_{1:k}). In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point k, i.e. P(o_{k+1:t} | X_k). These two sets of probability distributions can then be combined to obtain the distribution over states given the entire observation sequence.

#### Original toplevel document

Forward–backward algorithm - Wikipedia
Overview: In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all k ∈ {1, …, t}, the probability of ending up in any particular state given the first k observations in the sequence, i.e. P(X_k | o_{1:k}). In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point k, i.e. P(o_{k+1:t} | X_k). These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence: P(X_k | o_{1:t}) = P(X_k | o_{1:k}, o_{k+1:t}) ∝ P(o_{k+1:t} | X_k) P(X_k | o_{1:k}). The last step follows from an application of Bayes' rule and the conditional independence of o_{k+1:t} and o_{1:k} given X_k. As outlined above, the algorithm involves three steps: computing forward probabilities, computing backward probabilities, and computing smoothed values.
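The two passes can be sketched on a toy 2-state HMM in plain Python (the transition and emission numbers below are invented for illustration): the first pass builds the normalized forward probabilities P(X_k | o_{1:k}), the second builds the backward probabilities, and their product gives the smoothed distribution P(X_k | o_{1:t}).

```python
def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def forward_backward(obs, A, B, pi):
    n, T = len(pi), len(obs)
    # First pass: forward probabilities P(X_k | o_{1:k}).
    fwd = [normalize([pi[i] * B[i][obs[0]] for i in range(n)])]
    for t in range(1, T):
        fwd.append(normalize([
            B[i][obs[t]] * sum(fwd[-1][j] * A[j][i] for j in range(n))
            for i in range(n)]))
    # Second pass: backward probabilities P(o_{k+1:t} | X_k), up to scaling.
    bwd = [[1.0] * n for _ in range(T)]
    for t in range(T - 2, -1, -1):
        bwd[t] = normalize([
            sum(A[i][j] * B[j][obs[t + 1]] * bwd[t + 1][j] for j in range(n))
            for i in range(n)])
    # Combine: P(X_k | o_{1:t}) proportional to forward * backward.
    return [normalize([fwd[t][i] * bwd[t][i] for i in range(n)])
            for t in range(T)]

# Two hidden states (e.g. Rain/Sun) and two observation symbols.
A  = [[0.7, 0.3], [0.3, 0.7]]   # transition matrix
B  = [[0.9, 0.1], [0.2, 0.8]]   # emission matrix
pi = [0.5, 0.5]                 # initial distribution
smoothed = forward_backward([0, 0, 1], A, B, pi)
```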

#### Annotation 1741235293452

 #poisson-process #stochastics If a Poisson point process has a parameter of the form Λ = νλ, where ν is Lebesgue measure and λ is a constant, then the point process is called a homogeneous or stationary Poisson point process.

Poisson point process - Wikipedia
For all the different settings of the Poisson point process, the two key properties of the Poisson distribution and complete independence play an important role. Homogeneous Poisson point process: If a Poisson point process has a parameter of the form Λ = νλ, where ν is Lebesgue measure, which assigns length, area, or volume to sets, and λ is a constant, then the point process is called a homogeneous or stationary Poisson point process. The parameter, called rate or intensity, is related to the expected (or average) number of Poisson points existing in some bounded region.
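A homogeneous Poisson point process on a bounded window can be simulated directly from this definition (a stdlib-only sketch; the window and rate below are arbitrary choices of mine): the number of points is Poisson with mean λ times the Lebesgue measure (here, area) of the window, and, given the count, the points are i.i.d. uniform on the window.

```python
import math
import random

def poisson_sample(mean, rng):
    """Knuth's method: count uniforms until their product drops below e^{-mean}."""
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def homogeneous_ppp(rate, width, height, rng):
    """Sample a homogeneous Poisson point process with intensity `rate`
    on the rectangle [0, width] x [0, height]."""
    n = poisson_sample(rate * width * height, rng)   # count ~ Poisson(lambda * area)
    # Given the count, the points are i.i.d. uniform on the window.
    return [(rng.uniform(0.0, width), rng.uniform(0.0, height)) for _ in range(n)]

rng = random.Random(0)
pts = homogeneous_ppp(rate=5.0, width=2.0, height=1.0, rng=rng)   # mean count = 10
```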

#### Flashcard 1741238177036

Tags
#poisson-process #stochastics
Question
a homogeneous Poisson point process has a parameter of the form [...], where ν is Lebesgue measure and λ is a constant

Λ = νλ

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
If a Poisson point process has a parameter of the form Λ = νλ, where ν is Lebesgue measure and λ is a constant, then the point process is called a homogeneous or stationary Poisson point process.

#### Original toplevel document

Poisson point process - Wikipedia
For all the different settings of the Poisson point process, the two key properties of the Poisson distribution and complete independence play an important role. Homogeneous Poisson point process: If a Poisson point process has a parameter of the form Λ = νλ, where ν is Lebesgue measure, which assigns length, area, or volume to sets, and λ is a constant, then the point process is called a homogeneous or stationary Poisson point process. The parameter, called rate or intensity, is related to the expected (or average) number of Poisson points existing in some bounded region.

#### Flashcard 1741240536332

Tags
#poisson-process #stochastics
Question
In a homogeneous Poisson point process with Λ = νλ, ν is [...] and λ is [...]
Lebesgue measure, a constant

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
If a Poisson point process has a parameter of the form Λ = νλ, where ν is Lebesgue measure and λ is a constant, then the point process is called a homogeneous or stationary Poisson point process.

#### Original toplevel document

Poisson point process - Wikipedia
For all the different settings of the Poisson point process, the two key properties of the Poisson distribution and complete independence play an important role. Homogeneous Poisson point process: If a Poisson point process has a parameter of the form Λ = νλ, where ν is Lebesgue measure, which assigns length, area, or volume to sets, and λ is a constant, then the point process is called a homogeneous or stationary Poisson point process. The parameter, called rate or intensity, is related to the expected (or average) number of Poisson points existing in some bounded region.

#### Annotation 1741247614220

 #probability In mathematics, a degenerate distribution is a probability distribution in a space (discrete or continuous) with support only on a space of lower dimension.

Degenerate distribution - Wikipedia
In mathematics, a degenerate distribution is a probability distribution in a space (discrete or continuous) with support only on a space of lower dimension. If the degenerate distribution is univariate (involving only a single random variable) it is a deterministic distribution and takes only a single value. Examples include a two-headed coin.
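A minimal sketch of the idea (the construction is mine, not from the article): a distribution on R^2 whose mass all sits on the 1-dimensional line y = 2x is degenerate, since its support has lower dimension than the ambient space.

```python
import random

rng = random.Random(0)
# X is uniform on [-1, 1]; Y is forced to 2X, so every sample point lies
# exactly on the line y = 2x, a 1-dimensional subset of R^2.
samples = [(x, 2.0 * x) for x in (rng.uniform(-1.0, 1.0) for _ in range(1000))]

# Univariate special case: a deterministic distribution takes a single value,
# like the outcome of flipping a two-headed coin.
two_headed_coin = ["H" for _ in range(10)]
```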

#### Flashcard 1741249187084

Tags
#probability
Question

a degenerate distribution in a space has support only on [...]
a space of lower dimension

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In mathematics, a degenerate distribution is a probability distribution in a space (discrete or continuous) with support only on a space of lower dimension.

#### Original toplevel document

Degenerate distribution - Wikipedia
In mathematics, a degenerate distribution is a probability distribution in a space (discrete or continuous) with support only on a space of lower dimension. If the degenerate distribution is univariate (involving only a single random variable) it is a deterministic distribution and takes only a single value. Examples include a two-headed coin.

#### Flashcard 1741250759948

Tags
#probability
Question
a [...] distribution in a space has support only on a space of lower dimension.
degenerate

status measured difficulty not learned 37% [default] 0

#### Parent (intermediate) annotation

Open it
In mathematics, a degenerate distribution is a probability distribution in a space (discrete or continuous) with support only on a space of lower dimension.

#### Original toplevel document

Degenerate distribution - Wikipedia
In mathematics, a degenerate distribution is a probability distribution in a space (discrete or continuous) with support only on a space of lower dimension. If the degenerate distribution is univariate (involving only a single random variable) it is a deterministic distribution and takes only a single value. Examples include a two-headed coin.

#### Annotation 1741268847884

 #measure-theory Random variables are measurable functions from the probability space to $$\mathbb{R}^n$$. Measurable functions are functions between two measurable spaces for which preimages of measurable sets are measurable. Probability spaces are measurable spaces equipped with probability measures. Probability measures are positive measures with $$\mathbb{P}(\Omega) = 1$$. A measurable space is an arbitrary set equipped with a sigma-algebra. A sigma-algebra is a collection of subsets that are designated as measurable. Measurable means we can consistently assign a value to each subset in the sigma-algebra.

#### pdf

cannot see any pdfs
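The chain of definitions above can be made concrete on a finite example (everything below is an illustration I constructed): two coin flips as the sample space, the power set as the sigma-algebra, the uniform probability measure, and the number of heads as a measurable function.

```python
from itertools import combinations

Omega = frozenset(["HH", "HT", "TH", "TT"])      # sample space: two coin flips

def powerset(S):
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

F = powerset(Omega)                              # sigma-algebra: all 16 subsets

def P(A):                                        # uniform probability measure
    return len(A) / len(Omega)

def X(w):                                        # random variable: number of heads
    return w.count("H")

def preimage(B):                                 # X^{-1}(B), a subset of Omega
    return frozenset(w for w in Omega if X(w) in B)

# P is a probability measure (P(Omega) = 1) and X is measurable: every
# preimage lands in F (trivially here, since F is the full power set).
assert P(Omega) == 1.0 and preimage({1}) in F
```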

#### Annotation 1741300305164

 All forms of memory provide an array of bytes. Each byte has its own address. Interaction is achieved through a sequence of load or store instructions to specific memory addresses. The load instruction moves a byte or word from main memory to an internal register within the CPU, whereas the store instruction moves the content of a register to main memory. Aside from explicit loads and stores, the CPU automatically loads instructions from main memory for execution.

#### pdf

cannot see any pdfs
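The byte-addressed load/store model above can be mimicked in a few lines (a toy sketch, not real hardware): memory is an array of bytes indexed by address, `load` copies a byte into a "register", and `store` writes a register's content back to memory.

```python
memory = bytearray(16)          # 16 bytes of memory, addresses 0..15

def load(addr):
    """register <- memory[addr]"""
    return memory[addr]

def store(addr, value):
    """memory[addr] <- register (masked to one byte)"""
    memory[addr] = value & 0xFF

store(0x0A, 0x42)               # store the value 0x42 at address 10
reg = load(0x0A)                # load it back into a register
```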

#### Annotation 1741301878028

 A typical instruction–execution cycle, as executed on a system with a von Neumann architecture, first fetches an instruction from memory and stores that instruction in the instruction register. The instruction is then decoded and may cause operands to be fetched from memory and stored in some internal register. After the instruction on the operands has been executed, the result may be stored back in memory.

#### pdf

cannot see any pdfs
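The cycle can be sketched as a toy von Neumann machine (the instruction set, encoding, and addresses below are invented for illustration): each iteration fetches into the instruction register, decodes, possibly fetches an operand from memory, executes, and may store a result back to memory.

```python
# Program and data share one memory (the von Neumann property).
memory = {0: ("LOAD", 100), 1: ("ADD", 101), 2: ("STORE", 102), 3: ("HALT", None),
          100: 7, 101: 5}
pc, ir, acc = 0, None, 0            # program counter, instruction register, accumulator

while True:
    ir = memory[pc]                 # fetch into the instruction register
    pc += 1
    op, operand = ir                # decode
    if op == "LOAD":                # execute: operand fetched from memory
        acc = memory[operand]
    elif op == "ADD":
        acc += memory[operand]
    elif op == "STORE":             # result stored back in memory
        memory[operand] = acc
    elif op == "HALT":
        break
```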

#### Annotation 1741303450892

 Ideally, we want the programs and data to reside in main memory permanently. This arrangement usually is not possible for the following two reasons:

 1. Main memory is usually too small to store all needed programs and data permanently.
 2. Main memory is a volatile storage device that loses its contents when power is turned off or otherwise lost.

 Thus, most computer systems provide secondary storage as an extension of main memory. The main requirement for secondary storage is that it be able to hold large quantities of data permanently. The most common secondary-storage device is a magnetic disk, which provides storage for both programs and data. Most programs (system and application) are stored on a disk until they are loaded into memory. Many programs then use the disk as both the source and the destination of their processing. Hence, the proper management of disk storage is of central importance to a computer system.

#### pdf

cannot see any pdfs

#### Annotation 1741305023756

 As mentioned earlier, volatile storage loses its contents when the power to the device is removed.

#### pdf

cannot see any pdfs

#### Annotation 1741306596620

 Solid-state disks have several variants but in general are faster than magnetic disks and are nonvolatile. One type of solid-state disk stores data in a large DRAM array during normal operation but also contains a hidden magnetic hard disk and a battery for backup power. If external power is interrupted, this solid-state disk's controller copies the data from RAM to the magnetic disk. When external power is restored, the controller copies the data back into RAM. Another form of solid-state disk is flash memory, which is popular in cameras and personal digital assistants (PDAs), in robots, and increasingly for storage on general-purpose computers. Flash memory is slower than DRAM but needs no power to retain its contents. Another form of nonvolatile storage is NVRAM, which is DRAM with battery backup power. This memory can be as fast as DRAM and (as long as the battery lasts) is nonvolatile.

#### pdf

cannot see any pdfs

#### Annotation 1741309742348

 A general-purpose computer system consists of CPUs and multiple device controllers that are connected through a common bus. Each device controller is in charge of a specific type of device. Depending on the controller, more than one device may be attached.

#### pdf

cannot see any pdfs

#### Annotation 1741311315212

 A device controller maintains some local buffer storage and a set of special-purpose registers. The device controller is responsible for moving the data between the peripheral devices that it controls and its local buffer storage. Typically, operating systems have a device driver for each device controller. This device driver understands the device controller and provides the rest of the operating system with a uniform interface to the device. To start an I/O operation, the device driver loads the appropriate registers within the device controller. The device controller, in turn, examines the contents of these registers to determine what action to take (such as "read a character from the keyboard"). The controller starts the transfer of data from the device to its local buffer. Once the transfer of data is complete, the device controller informs the device driver via an interrupt that it has finished its operation. The device driver then returns control to the operating system, possibly returning the data or a pointer to the data if the operation was a read. For other operations, the device driver returns status information.

 This form of interrupt-driven I/O is fine for moving small amounts of data but can produce high overhead when used for bulk data movement such as disk I/O. To solve this problem, direct memory access (DMA) is used. After setting up buffers, pointers, and counters for the I/O device, the device controller transfers an entire block of data directly to or from its own buffer storage to memory, with no intervention by the CPU. Only one interrupt is generated per block, to tell the device driver that the operation has completed, rather than the one interrupt per byte generated for low-speed devices. While the device controller is performing these operations, the CPU is available to accomplish other work.

#### pdf

cannot see any pdfs
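The interrupt-count contrast above can be made concrete with a back-of-the-envelope sketch (the block size and transfer size are assumptions of mine, not from the text): per-byte interrupt-driven I/O raises one interrupt per byte, while DMA raises only one interrupt per block.

```python
BLOCK_SIZE = 4096                       # bytes per block (assumed)

def interrupts_pio(n_bytes):
    """Interrupt-driven I/O: one interrupt per byte transferred."""
    return n_bytes

def interrupts_dma(n_bytes):
    """DMA: one interrupt per block (ceiling division)."""
    return -(-n_bytes // BLOCK_SIZE)

n = 1_000_000                           # a ~1 MB disk transfer
pio = interrupts_pio(n)                 # a million interrupts
dma = interrupts_dma(n)                 # a few hundred interrupts
```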

#### Annotation 1741313150220

 Until recently, most computer systems used a single processor. On a single-processor system, there is one main CPU capable of executing a general-purpose instruction set, including instructions from user processes. Almost all single-processor systems have other special-purpose processors as well. They may come in the form of device-specific processors, such as disk, keyboard, and graphics controllers; or, on mainframes, they may come in the form of more general-purpose processors, such as I/O processors that move data rapidly among the components of the system. All of these special-purpose processors run a limited instruction set and do not run user processes. Sometimes, they are managed by the operating system, in that the operating system sends them information about their next task and monitors their status. For example, a disk-controller microprocessor receives a sequence of requests from the main CPU and implements its own disk queue and scheduling algorithm. This arrangement relieves the main CPU of the overhead of disk scheduling. PCs contain a microprocessor in the keyboard to convert the keystrokes into codes to be sent to the CPU. In other systems or circumstances, special-purpose processors are low-level components built into the hardware. The operating system cannot communicate with these processors; they do their jobs autonomously. The use of special-purpose microprocessors is common and does not turn a single-processor system into a multiprocessor system.

#### pdf

cannot see any pdfs

#### Annotation 1741315509516

 Multiprocessor systems have three main advantages:

 1. Increased throughput. By increasing the number of processors, we expect to get more work done in less time. The speed-up ratio with N processors is not N, however; rather, it is less than N. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors. Similarly, N programmers working closely together do not produce N times the amount of work a single programmer would produce.
 2. Economy of scale. Multiprocessor systems can cost less than equivalent multiple single-processor systems, because they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of data, it is cheaper to store those data on one disk and to have all the processors share them than to have many computers with local disks and many copies of the data.
 3. Increased reliability. If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. If we have ten processors and one fails, then each of the remaining nine processors can pick up a share of the work of the failed processor. Thus, the entire system runs only 10 percent slower, rather than failing altogether.

#### pdf

cannot see any pdfs
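Point 1 can be sketched numerically (the fixed-overhead model below is an assumption of mine in the spirit of Amdahl's law, not from the text): with any nonzero serial/overhead fraction, the speed-up with N processors stays strictly below N once N > 1.

```python
def speedup(n_procs, overhead_fraction=0.1):
    """Speed-up relative to one processor when a fixed fraction of the work
    (coordination overhead, contention) cannot be parallelised."""
    return 1.0 / (overhead_fraction + (1.0 - overhead_fraction) / n_procs)

ratios = {n: speedup(n) for n in (1, 2, 4, 10)}
# Every speed-up ratio is strictly below n once n > 1.
assert all(ratios[n] < n for n in (2, 4, 10))
```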

#### Annotation 1741343296780

 Call Money / Notice Money: used by banks to borrow money without collateral from other banks, e.g. to maintain the CRR.
 - Call money market: funds are transacted on an overnight basis; notice money market: between 2 and 14 days.
 - An over-the-counter (OTC) market: no brokers. Highly liquid.
 - Participants: all scheduled commercial banks (excluding RRBs), cooperative banks other than land development banks, and primary dealers.
 - Actions like banks subscribing to large issues of government securities, or an increase in the CRR or the repo rate, reduce liquidity and increase the call rate.
 - Call rate: the interest rate paid on call loans.
 - NSE Mumbai Inter-Bank Bid Rate (MIBID) and NSE Mumbai Inter-Bank Offer Rate (MIBOR) for overnight money markets: in MIBID, borrower banks quote an interest rate; in MIBOR, lender banks quote a rate.
 - Term market: a market where debt maturities are between 3 months and 1 year.