Edited, memorised or added to reading list

on 29-Jan-2018 (Mon)


Flashcard 1730911014156

[unknown IMAGE 1739364109580]
Tags
#has-images
Question
The posterior distribution (of the objective function) is used to construct the [...]
Answer
acquisition function


status: not learned · measured difficulty: 37% [default] · repetition number in this series: 0

Parent (intermediate) annotation

Open it
The posterior distribution (of the objective function), in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines what the next query point should be.

Original toplevel document

Bayesian optimization - Wikipedia
History: The term is generally attributed to Jonas Mockus and is coined in his work from a series of publications on global optimization in the 1970s and 1980s. [2] [3] [4] Strategy: Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a prior over it. The prior captures our beliefs about the behaviour of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines what the next query point should be. Examples: Examples of acquisition functions include probability of improvement, expected improvement, Bayesian expected losses, upper confidence bounds (UCB), Thompson s…







Flashcard 1739352313100

Tags
#linear-algebra #matrix-decomposition
Question

The Cholesky decomposition only works properly for [...] matrices



Parent (intermediate) annotation

Open it
The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form A = LL*, where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L.

Original toplevel document

Cholesky decomposition - Wikipedia
Statement: The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form A = LL*, where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition. [2] If the matrix A is Hermitian and positive semi-definite, then it still has a decomposition of the form A = LL* if the diagonal entries of L are allowed to be zero. [3] When A has real entries, L has real entries as well, and the factorization may be written A = LL^T. [4] The Cholesky decomposition is unique when A is positive definite; there is only one lower triangular matrix L with strictly positive diagonal entries such that A = LL*. However, the decomposition need not be unique when A is positive semidefinite. The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite. LDL decomposition: A closely related variant of the classical Cholesky decomposition is the LDL decomposition, A = …
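As a quick numerical check of the statement, NumPy's `cholesky` recovers a lower triangular L with positive diagonal from a positive-definite matrix and rejects an indefinite one (the matrices below are illustrative examples, not from the text):

```python
import numpy as np

# A real symmetric positive-definite matrix (illustrative example).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)          # lower triangular, positive diagonal
assert np.allclose(L @ L.T, A)     # A = L L^T for real matrices
assert np.all(np.diag(L) > 0)

# For a matrix that is not positive definite, the factorization fails.
B = np.array([[1.0, 2.0],
              [2.0, 1.0]])         # eigenvalues 3 and -1
try:
    np.linalg.cholesky(B)
except np.linalg.LinAlgError:
    print("not positive definite")
```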







#linear-algebra #matrix-decomposition
The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite.
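The converse can be illustrated numerically: for an arbitrary invertible M, triangular or not, the product M M* is Hermitian with strictly positive eigenvalues (the random matrix here is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
# A generic complex matrix: dense, not triangular, and (for this seed) invertible.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

A = M @ M.conj().T                        # A = M M*

assert np.allclose(A, A.conj().T)         # A is Hermitian
assert np.all(np.linalg.eigvalsh(A) > 0)  # A is positive definite
```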

status: not read


Parent (intermediate) annotation

Open it
…The Cholesky decomposition is unique when A is positive definite; there is only one lower triangular matrix L with strictly positive diagonal entries such that A = LL*. However, the decomposition need not be unique when A is positive semidefinite. The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite.

Original toplevel document

Cholesky decomposition - Wikipedia




Flashcard 1739356507404

Tags
#linear-algebra #matrix-decomposition
Question
Cholesky decomposability implies that if A can be written as LL* for some invertible L then A is [...]
Answer
Hermitian and positive definite.

L can be lower triangular or otherwise.



Parent (intermediate) annotation

Open it
The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite.

Original toplevel document

Cholesky decomposition - Wikipedia







#linear-algebra #matrix-decomposition
If the matrix A is Hermitian and positive semi-definite, then it still has a decomposition of the form A = LL* if the diagonal entries of L are allowed to be zero.
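A small numerical illustration of the semi-definite case (the matrix is an illustrative example): `np.linalg.cholesky` insists on strict positive definiteness, but a factor L with a zero diagonal entry can still be written down directly:

```python
import numpy as np

# A real symmetric positive semi-definite (singular) matrix.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])

# np.linalg.cholesky rejects A because it is singular, yet a decomposition
# A = L L^T exists once L is allowed a zero on its diagonal:
L = np.array([[1.0, 0.0],
              [1.0, 0.0]])

assert np.allclose(L @ L.T, A)
assert L[1, 1] == 0.0          # zero diagonal entry, as the statement allows
```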



Parent (intermediate) annotation

Open it
…real and positive diagonal entries, and L* denotes the conjugate transpose of L. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition. [2] If the matrix A is Hermitian and positive semi-definite, then it still has a decomposition of the form A = LL* if the diagonal entries of L are allowed to be zero. [3] When A has real entries, L has real entries as well, and the factorization may be written A = LL^T. [4] The Cholesky decomposition is unique when A is pos…

Original toplevel document

Cholesky decomposition - Wikipedia




Flashcard 1739359653132

Tags
#linear-algebra #matrix-decomposition
Question
If the matrix A is [...], then it still has a decomposition of the form A = LL* if the diagonal entries of L are allowed to be zero.
Answer
Hermitian and positive semi-definite



Parent (intermediate) annotation

Open it
If the matrix A is Hermitian and positive semi-definite, then it still has a decomposition of the form A = LL* if the diagonal entries of L are allowed to be zero.

Original toplevel document

Cholesky decomposition - Wikipedia







Flashcard 1739361225996

Tags
#linear-algebra #matrix-decomposition
Question
If the matrix A is Hermitian and positive semi-definite, then it still has a decomposition of the form A = LL* if [...].
Answer
the diagonal entries of L are allowed to be zero



Parent (intermediate) annotation

Open it
If the matrix A is Hermitian and positive semi-definite, then it still has a decomposition of the form A = LL* if the diagonal entries of L are allowed to be zero.

Original toplevel document

Cholesky decomposition - Wikipedia







[unknown IMAGE 1739364109580]
#bayesian-optimisation #has-images
A simple illustration of Bayesian optimisation in one dimension.
  1. The goal is to maximise some true unknown function f (not shown).
  2. Information about this function is gained by making observations (circles, top panels), which are evaluations of the function at specific x values.
  3. These observations are used to infer a posterior distribution over the function values (shown as mean, blue line, and standard deviations, blue shaded area) representing the distribution of possible functions; note that uncertainty grows away from the observations.
  4. Based on this distribution over functions, an acquisition function is computed (green shaded area, bottom panels), which represents the gain from evaluating the unknown function f at different x values; note that the acquisition function is high where the posterior over f has both high mean and large uncertainty.
  5. Different acquisition functions can be used such as “expected improvement” or “information-gain”.
  6. The peak of the acquisition function (red line) is the best next point to evaluate, and is therefore chosen for evaluation (red dot, new observation).
The left and right panels show an example of what could happen after three and four function evaluations, respectively.
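The loop illustrated above can be sketched with a minimal Gaussian-process posterior and an upper-confidence-bound acquisition function (the RBF kernel, its length scale, and the kappa trade-off parameter are illustrative assumptions, not taken from the text):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.3):
    # Squared-exponential covariance between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    # Posterior mean and standard deviation over f at x_query,
    # given observations (circles in the figure).
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_query, x_obs)
    Kss = rbf_kernel(x_query, x_query)
    mean = Ks @ np.linalg.solve(K, y_obs)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def next_query_point(x_obs, y_obs, x_grid, kappa=2.0):
    # UCB acquisition: high where the posterior has both high mean
    # and large uncertainty; its peak is the next point to evaluate.
    mean, std = gp_posterior(x_obs, y_obs, x_grid)
    return x_grid[np.argmax(mean + kappa * std)]

# Three observations of an unknown function, then choose the fourth query.
x_obs = np.array([0.1, 0.5, 0.9])
y_obs = np.sin(3 * x_obs)
x_grid = np.linspace(0, 1, 201)
x_next = next_query_point(x_obs, y_obs, x_grid)
```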


Original toplevel document (pdf)




[unknown IMAGE 1739364109580]
#bayesian-optimisation #has-images
the acquisition function is high where the posterior over f has both high mean and large uncertainty.



Parent (intermediate) annotation

Open it
…grows away from the observations. Based on this distribution over functions, an acquisition function is computed (green shaded area, bottom panels), which represents the gain from evaluating the unknown function f at different x values; note that the acquisition function is high where the posterior over f has both high mean and large uncertainty. Different acquisition functions can be used such as “expected improvement” or “information-gain”. The peak of the acquisition function (red line) is the best next point to evaluate, and…

Original toplevel document (pdf)





Flashcard 1739407625484

[unknown IMAGE 1739364109580]
Tags
#bayesian-optimisation #has-images
Question
the acquisition function is high where the posterior over f has both [...] and [...].
Answer
both high mean and large uncertainty



Parent (intermediate) annotation

Open it
the acquisition function is high where the posterior over f has both high mean and large uncertainty.

Original toplevel document (pdf)








#kalman-filter
The underlying model of the Kalman filter is similar to a hidden Markov model, except that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions.
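A minimal scalar sketch of this model (random-walk latent state observed in Gaussian noise; the variances and data below are illustrative assumptions, not from the text):

```python
import numpy as np

def kalman_1d(observations, process_var=1e-4, obs_var=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: continuous Gaussian latent state, Gaussian observations.
    (Model choice and variances are illustrative assumptions.)"""
    x, p = x0, p0              # posterior mean and variance of the latent state
    estimates = []
    for z in observations:
        # Predict: propagate the Gaussian state estimate forward in time.
        p = p + process_var
        # Update: fuse the Gaussian prediction with the Gaussian observation.
        k = p / (p + obs_var)  # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Noisy observations of a constant true value (illustrative data).
rng = np.random.default_rng(1)
true_value = 1.0
zs = true_value + rng.normal(scale=0.5, size=200)
xs = kalman_1d(zs)             # estimates converge towards true_value
```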


Kalman filter - Wikipedia
…estimate in the special case that all errors are Gaussian-distributed. Extensions and generalizations to the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter, which work on nonlinear systems. The underlying model is similar to a hidden Markov model except that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions.




Flashcard 1739414965516

Tags
#kalman-filter
Question
In a Kalman filter, [...] variables have Gaussian distributions.
Answer
all latent and observed



Parent (intermediate) annotation

Open it
The underlying model of the Kalman filter is similar to a hidden Markov model, except that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions.

Original toplevel document

Kalman filter - Wikipedia







#forward-backward-algorithm #hmm

The forward-backward algorithm

  1. In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all k ∈ {1, …, t}, the probability of ending up in any particular state given the first k observations in the sequence, i.e. P(X_k | o_1:k).
  2. In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point k, i.e. P(o_k+1:t | X_k).
  3. These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence: P(X_k | o_1:t) = P(X_k | o_1:k, o_k+1:t) ∝ P(o_k+1:t | X_k) P(X_k | o_1:k).

The last step follows from an application of Bayes' rule and the conditional independence of o_k+1:t and o_1:k given X_k.

It remains to be seen, of course, how the forward and backward passes are actually calculated.
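Both passes, and their combination via Bayes' rule, can be sketched for a discrete HMM as follows (the two-state transition and emission matrices in the usage example are illustrative assumptions, not from the text):

```python
import numpy as np

def forward_backward(obs, T, E, pi):
    """Posterior marginals P(X_k | o_1:t) for a discrete HMM.
    T[i, j] = P(X_{k+1}=j | X_k=i); E[i, o] = P(o | X=i); pi = P(X_1)."""
    n, S = len(obs), T.shape[0]
    # Forward pass: alpha[k] = P(X_k | o_1:k), normalised at each step.
    alpha = np.zeros((n, S))
    alpha[0] = pi * E[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for k in range(1, n):
        alpha[k] = (alpha[k - 1] @ T) * E[:, obs[k]]
        alpha[k] /= alpha[k].sum()
    # Backward pass: beta[k] ∝ P(o_k+1:t | X_k).
    beta = np.ones((n, S))
    for k in range(n - 2, -1, -1):
        beta[k] = T @ (E[:, obs[k + 1]] * beta[k + 1])
        beta[k] /= beta[k].sum()
    # Combine via Bayes' rule: P(X_k | o_1:t) ∝ P(o_k+1:t | X_k) P(X_k | o_1:k).
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# Two hidden states, two observation symbols (illustrative numbers).
T = np.array([[0.7, 0.3], [0.3, 0.7]])
E = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
gamma = forward_backward([0, 0, 1, 0, 0], T, E, pi)  # one row per time step
```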


Forward–backward algorithm - Wikipedia
Overview: In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all k ∈ {1, …, t}, the probability of ending up in any particular state given the first k observations in the sequence, i.e. P(X_k | o_1:k). In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point k, i.e. P(o_k+1:t | X_k). These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence: P(X_k | o_1:t) = P(X_k | o_1:k, o_k+1:t) ∝ P(o_k+1:t | X_k) P(X_k | o_1:k). The last step follows from an application of Bayes' rule and the conditional independence of o_k+1:t and o_1:k given X_k. As outlined above, the algorithm involves three steps: computing forward probabilities, computing backward probabilities, computing smoothed values. The forward and backward steps m…




Flashcard 1739928505612

Tags
#forward-backward-algorithm #hmm
Question

In the first pass, the forward–backward algorithm computes [...]

Answer
a set of forward probabilities P(X_k | o_1:k), the distribution over hidden states given the observations up to that point.



Parent (intermediate) annotation

Open it
The forward-backward algorithm In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all k ∈ {1, …, t}, the probability of ending up in any particular state given the first k observations in the sequence, i.e. P(X_k | o_1:k). In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point k, i.e. P(o_k+1:t | X_k). Thes…

Original toplevel document

Forward–backward algorithm - Wikipedia







Flashcard 1739930864908

Tags
#forward-backward-algorithm #hmm
Question

In the second pass, the forward-backward algorithm computes [...]

Answer
a set of backward probabilities P(o_k+1:t | X_k), the probability of observing the remaining observations given any starting point k.



Parent (intermediate) annotation

Open it
…forward probabilities which provide, for all k ∈ {1, …, t}, the probability of ending up in any particular state given the first k observations in the sequence, i.e. P(X_k | o_1:k). In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point k, i.e. P(o_k+1:t | X_k). These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence: …

Original toplevel document

Forward–backward algorithm - Wikipedia







Flashcard 1739933224204

Tags
#forward-backward-algorithm #hmm
Question

In the forward-backward algorithm, the forward and backward probability distributions are combined to obtain [...]

Answer
the distribution over states at any specific point in time given the entire observation sequence



Parent (intermediate) annotation

Open it
…pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point k, i.e. P(o_k+1:t | X_k). These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence: P(X_k | o_1:t) ∝ P(o_k+1:t | X_k) P(X_k | o_1:k). The last step follows from an application of Bayes' rule and the conditional independence of o_k+1:t and o_1:k given X_k. It remains to be seen, of course, how the forwa…

Original toplevel document

Forward–backward algorithm - Wikipedia







Flashcard 1739934797068

Tags
#forward-backward-algorithm #hmm
Question

In the forward-backward algorithm, the formula for posterior marginals is [...]

Answer

\(P(X_k \mid o_{1:t}) = P(X_k \mid o_{1:k}, o_{k+1:t}) \propto P(o_{k+1:t} \mid X_k)\,P(X_k \mid o_{1:k})\)
statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
bserving the remaining observations given any starting point \(k\), i.e. \(P(o_{k+1:t} \mid X_k)\). These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence: \(P(X_k \mid o_{1:t}) \propto P(o_{k+1:t} \mid X_k)\,P(X_k \mid o_{1:k})\). The last step follows from an application of Bayes' rule and the conditional independence of \(o_{k+1:t}\) and \(o_{1:k}\) given \(X_k\). It remains to be seen, of course, how the forward…

Original toplevel document

Forward–backward algorithm - Wikipedia
cific instance of this class. Overview: In the first pass, the forward–backward algorithm computes a set of forward probabilities which provide, for all \(k \in \{1, \dots, t\}\), the probability of ending up in any particular state given the first \(k\) observations in the sequence, i.e. \(P(X_k \mid o_{1:k})\). In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point \(k\), i.e. \(P(o_{k+1:t} \mid X_k)\). These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence: \(P(X_k \mid o_{1:t}) = P(X_k \mid o_{1:k}, o_{k+1:t}) \propto P(o_{k+1:t} \mid X_k)\,P(X_k \mid o_{1:k})\). The last step follows from an application of Bayes' rule and the conditional independence of \(o_{k+1:t}\) and \(o_{1:k}\) given \(X_k\). As outlined above, the algorithm involves three steps: computing forward probabilities, computing backward probabilities, and computing smoothed values. The forward and backward steps m…







#poisson-process #stochastics
For a collection of disjoint and bounded subregions of the underlying space, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

Poisson point process - Wikipedia
sed to define the Poisson distribution. If a Poisson point process is defined on some underlying space, then the number of points in a bounded region of this space will be a Poisson random variable. [45] Complete independence: For a collection of disjoint and bounded subregions of the underlying space, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others. This property is known under several names such as complete randomness, complete independence, [21] or independent scattering [46] [47] and is common to all Poisson point processes.
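A quick simulation illustrates complete independence. The sketch below (the intensity, the choice of halves, and the sample size are all arbitrary) draws a homogeneous Poisson process on the unit square and checks that the counts in the two disjoint halves have means proportional to each half's area and near-zero covariance:

```python
import math
import random

def poisson_count(lam, rng):
    # Knuth's method: draw N ~ Poisson(lam).
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def poisson_process_unit_square(intensity, rng):
    """Homogeneous Poisson point process on [0,1]^2: the total number of
    points is Poisson(intensity * area), points are placed uniformly."""
    n = poisson_count(intensity, rng)   # area of [0,1]^2 is 1
    return [(rng.random(), rng.random()) for _ in range(n)]

rng = random.Random(0)
lam = 50.0
left, right = [], []                    # counts in two disjoint halves
for _ in range(2000):
    pts = poisson_process_unit_square(lam, rng)
    left.append(sum(1 for x, _ in pts if x < 0.5))
    right.append(sum(1 for x, _ in pts if x >= 0.5))

n = len(left)
mean_left = sum(left) / n               # should be near lam * 0.5 = 25
mean_right = sum(right) / n
# Complete independence implies zero covariance between the two counts.
cov = sum((l - mean_left) * (r - mean_right)
          for l, r in zip(left, right)) / n
```

Each half has area 0.5, so both sample means come out near 25, and the sample covariance between the disjoint-region counts hovers near zero, as the independent-scattering property predicts.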




Flashcard 1739942661388

Tags
#poisson-process #stochastics
Question
For a collection of [...] subregions of the underlying space, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others.
Answer
disjoint and bounded



Parent (intermediate) annotation

Open it
For a collection of disjoint and bounded subregions of the underlying space, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others. </h

Original toplevel document

Poisson point process - Wikipedia
sed to define the Poisson distribution. If a Poisson point process is defined on some underlying space, then the number of points in a bounded region of this space will be a Poisson random variable. [45] Complete independence: For a collection of disjoint and bounded subregions of the underlying space, the number of points of a Poisson point process in each bounded subregion will be completely independent of all the others. This property is known under several names such as complete randomness, complete independence, [21] or independent scattering [46] [47] and is common to all Poisson point processes.







This exposition will have at its core a sequence of proofs designed to establish theorems. We shall distinguish among the theorems some which we shall call lemmas, propositions or corollaries. Traditionally, a lemma is a result of no intrinsic interest proved as a step towards the proof of a theorem; a proposition is a result of less independent importance than a theorem; and a corollary is an easy consequence of a theorem. The distinctions are of no formal significance, however, and we make use of them only as a way of providing signposts to the reader as to the relative importance of the results stated.


pdf

cannot see any pdfs




at the beginning of our exposition there must be mathematical words or symbols which we do not define in terms of others but merely take as given: they are called primitives. And proof must start somewhere, just as definition must. If we are to avoid an infinite regress, there must be some propositions that are not proved but can be used in the proofs of the theorems. Such propositions are called axioms





At the core of attitudes to the axiomatic method that may be called realist is the view that ‘undefined’ does not entail ‘meaningless’ and so it may be possible to provide a meaning for the primitive terms of our theory in advance of laying down the axioms: perhaps they are previously understood terms of ordinary language; or, if not, we may be able to establish the intended meanings by means of what Frege calls elucidations — informal explanations which suffice to indicate the intended meanings of terms. But elucidation, Frege says, is inessential. It merely serves the purpose of mutual understanding among investigators, as well as of the communication of science to others. We may relegate it to a propaedeutic. It has no place in the system of a science; no conclusions are based on it. Someone who pursued research only by himself would not need it





The axiomatic method provides an easy target for polemical attack by empiricists such as Lakatos (1976). It is nevertheless true that pure mathematicians at any rate regard its use as routine. How then should we account for it? Responses to this question fall into two camps which mathematicians have for some time been wont to call realist and formalist. This was not an altogether happy choice of terminology since philosophers had already used both words for more specific positions in the philosophy of mathematics, but I shall follow the mathematicians’ usage here. At the core of attitudes to the axiomatic method that may be called realist is the view that ‘undefined’ does not entail ‘meaningless’ and so it may be possible to provide a meaning for the primitive terms of our theory in advance of laying down the axioms: perhaps they are previously understood terms of ordinary language; or, if not, we may be able to establish the intended meanings by means of what Frege calls elucidations — informal explanations which suffice to indicate the intended meanings of terms. But elucidation, Frege says, is inessential. It merely serves the purpose of mutual understanding among investigators, as well as of the communication of science to others. We may relegate it to a propaedeutic. It has no place in the system of a science; no conclusions are based on it. Someone who pursued research only by himself would not need it. (1906, p. 302) If the primitive terms of our theory are words, such as ‘point’ or ‘line’, which can be given meanings in this manner, then by asserting the axioms of the theory we commit ourselves to their truth. Realism is thus committed to the notion that the words mathematicians use already have a meaning independent of the system of axioms in which the words occur. It is for this reason that such views are described as realist.
If the axioms make existential claims (which typically they do), then by taking them to be true we commit ourselves to the existence of the requisite objects. Nevertheless, realism remains a broad church, since it says nothing yet about the nature of the objects thus appealed to





Two sorts of realist can be distinguished: a platonist takes the objects to exist independently of us and of our activities, and hence (since they are certainly not physical) to be in some sense abstract; a constructivist, on the other hand, takes the objects to exist only if they can be constructed, and hence to be in some sense mental





During the 19th century, however, there emerged another cluster of ways of regarding axioms, which we shall refer to as formalist. What they had in common was a rejection of the idea just mentioned that the axioms can be regarded simply as true statements about a subject matter external to them. One part of the motivation for the emergence of formalism lay in the different axiom systems for geometry — Euclidean, hyperbolic, projective, spherical — which mathematicians began to study. The words ‘point’ and ‘line’ occur in all, but the claims made using these words conflict. So they cannot all be true, at any rate not unconditionally. One view, then, is that axioms should be thought of as assumptions which we suppose in order to demonstrate the properties of those structures that exemplify them. The expositor of an axiomatic theory is thus as much concerned with truth on this view as on the realist one, but the truths asserted are conditional: if any structure satisfies the axioms, then it satisfies the theorem. This view has gone under various names in the literature — implicationism, deductivism, if-thenism, eliminative structuralism. Here we shall call it implicationism





by conditionalizing all our theorems we omit to mention the existence of the structure in question, and therefore have work to do if we are to explain the applicability of the theory: the domain of any interpretation in which the axioms of arithmetic are true is infinite, and yet we confidently apply arithmetical theorems within the finite domain of our immediate experience without troubling to embed it in such an infinite domain as implicationism would require us to do. Implicationism seems capable, therefore, of being at best only part of the explanation of these classical cases





One of the evident attractions of the implicationist view of set theory is that it obviates the tedious requirement imposed on the realist to justify the axioms as true and replaces it with at most the (presumably weaker) requirement to persuade the reader to be interested in their logical consequences





One way of thinking of a structure is as a certain sort of set. So when we discuss the properties of structures satisfying the axioms of set theory, we seem already to be presupposing the notion of set. This is a version of an objection that is sometimes called Poincaré’s petitio because Poincaré (1906) advanced it against an attempt that had been made to use mathematical induction in the course of a justification of the axioms of arithmetic. In its crudest form this objection is easily evaded if we are sufficiently clear about what we are doing. There is no direct circularity if we presuppose sets in our study of sets (or induction in our study of induction) since the first occurrence of the word is in the metalanguage, the second in the object language. Nevertheless, even if this is all that needs to be said to answer Poincaré’s objection in the general case, matters are not so straightforward in the case of a theory that claims to be foundational. If we embed mathematics in set theory and treat set theory implicationally, then mathematics — all mathematics — asserts only conditional truths about structures of a certain sort. But our metalinguistic study of set-theoretic structures is plainly recognizable as a species of mathematics. So we have no reason not to suppose that here too the correct interpretation of our results is only conditional. At no point, then, will mathematics assert anything unconditionally, and any application of any part whatever of mathematics that depends on the unconditional existence of mathematical objects will be vitiated.





Thoroughgoing implicationism — the view that mathematics has no subject matter whatever and consists solely of the logical derivation of consequences from axioms — is thus a very harsh discipline: many mathematicians profess to believe it, but few hold unswervingly to what it entails. The implicationist is never entitled, for instance, to assert unconditionally that no proof of a certain proposition exists, since that is a generalization about proofs and must therefore be interpreted as a conditional depending on the axioms of proof theory. And conversely, the claim that a proposition is provable is to be interpreted only as saying that according to proof theory it is: a further inference is required if we are to deduce from this that there is indeed a proof. One response to this difficulty with taking an implicationist view of set theory is to observe that it arises only on the premise that set theory is intended as a foundation for mathematics. Deny the premise and the objection evaporates. Recently some mathematicians have been tempted by the idea that other theories — topos theory or category theory, for example — might be better suited to play this foundational role





some mathematicians (e.g. Mayberry 1994) have tried simply to deny that mathematics has a foundation. But plainly more needs to be said if this is to be anything more substantial than an indefinite refusal to address the question. Another response to these difficulties, more popular among mathematicians than among philosophers, has been to espouse a stricter formalism, a version, that is to say, of the view that the primitive terms of an axiomatic theory refer to nothing outside of the theory itself. The crudest version of this doctrine, pure formalism, asserts that mathematics is no more than a game played with symbols. Frege’s demolition of this view (1893–1903, II, §§86–137) is treated by most philosophers as definitive. Indeed it has become popular to doubt whether any of the mathematicians Frege quotes actually held a view so stupid. However, there are undoubtedly some mathematicians who claim, when pressed, to believe it, and many others whose stated views entail it





difficulties in relation to the axioms of whatever foundational theory they favour instead (cf. Shapiro 1991). Perhaps it is for this reason that some mathematicians (e.g. Mayberry 1994) have tried simply to deny that mathematics has a foundation. But plainly more needs to be said if this is to be anything more substantial than an indefinite refusal to address the question. Another response to these difficulties, more popular among mathematicians than among philosophers, has been to espouse a stricter formalism, a version, that is to say, of the view that the primitive terms of an axiomatic theory refer to nothing outside of the theory itself. The crudest version of this doctrine, pure formalism, asserts that mathematics is no more than a game played with symbols. Frege’s demolition of this view (1893–1903, II, §§86–137) is treated by most philosophers as definitive. Indeed it has become popular to doubt whether any of the mathematicians Frege quotes actually held a view so stupid. However, there are undoubtedly some mathematicians who claim, when pressed, to believe it, and many others whose stated views entail it. Less extreme is postulationism — which I have elsewhere (Potter 2000) called axiomatic formalism. This does not regard the sentences of an axiomatic theory as meaningless positions in a game but treats the primitive terms as deriving their meaning from the role they play in the axioms, which may now be thought of as an implicit definition of them, to be contrasted with the explicit definitions of the non-primitive terms. ‘The objects of the theory are defined ipso facto by the system of axioms, which in some way generate the material to which the true propositions will be applicable.’ (Cartan 1943, p. 9) This view is plainly not as daft as pure formalism, but if we are to espouse it, we presumably need some criterion to determine whether a system of axioms does confer meaning on its constituent terms.
Those who advance this view agree that no such meaning can be conferred by an inconsistent system, and many from Hilbert on have thought that bare consistency is sufficient to confer meaning, but few have provided any argument for this, and without such an argument the position remains suspect.





the great advantage of postulationism over implicationism is that if we are indeed entitled to postulate objects with the requisite properties, anything we deduce concerning these objects will be true unconditionally





#measure-theory #stochastics
One way to unify the discrete and continuous cases is to use the P and E operators (and related operators like var, cov, and cor) exclusively rather than writing sums and integrals.
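For a finite sample space the two operators can be written down directly, and the caller indeed never sees a sum or an integral. A minimal sketch (the fair-die example is mine, not the text's):

```python
from fractions import Fraction

# A finite probability space: sample space omega plus a probability mass
# function. P acts on events (subsets of omega); E acts on random
# variables (functions omega -> R). Only P and E appear at the call site.
omega = [1, 2, 3, 4, 5, 6]                 # a fair die (invented example)
pmf = {w: Fraction(1, 6) for w in omega}

def P(event):
    """Probability measure: an event A is mapped to P(A)."""
    return sum(pmf[w] for w in event)

def E(X):
    """Expectation operator: a random variable X is mapped to E(X)."""
    return sum(X(w) * pmf[w] for w in omega)

def var(X):
    """A related operator, written purely in terms of E as the text suggests."""
    mu = E(X)
    return E(lambda w: (X(w) - mu) ** 2)

even = {2, 4, 6}
identity = lambda w: w
```

Here `P(even)` evaluates to 1/2 and `E(identity)` to 7/2; replacing `pmf` and the sums with a density and integrals would change the implementations of `P` and `E` but not the code that uses them, which is the unification the text describes.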





#measure-theory #stochastics
Note that P is a different function for each different probability distri- bution





#measure-theory #stochastics
An event is a subset of the sample space, but is \( \mathcal{A} \) all of the subsets of the sample space or just some of them? It turns out that, for very abstruse technical reasons, the answer is the latter.





#measure-theory #stochastics
P gives probabilities of events, so it is a function \( A \mapsto P(A) \) that maps events to real numbers.





#measure-theory #stochastics
E gives expectations of random variables, so it is a function \( X \mapsto E(X) \) that maps random variables to real numbers.





Flashcard 1740067966220

[unknown IMAGE 1740066655500]
Tags
#has-images #measure-theory #stochastics
Question
measure theoretic notations for the relation between Probability measure P and Expectation operator E are [...]
Answer
The integral signs here do not mean integration in the sense of calculus (so-called Riemann integration).
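Since the image itself is not shown here, the standard measure-theoretic identities relating the two operators, for a probability space \((\Omega, \mathcal{A}, P)\), are (my reconstruction, not necessarily the exact formulas in the card):

```latex
P(A) = E(\mathbf{1}_A) = \int_A \mathrm{d}P,
\qquad
E(X) = \int_\Omega X \,\mathrm{d}P = \int_\Omega X(\omega)\, P(\mathrm{d}\omega),
```

where \(\mathbf{1}_A\) is the indicator of the event \(A\). These are Lebesgue integrals with respect to the probability measure \(P\), which is why the card notes they are not Riemann integrals.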









Flashcard 1740992810252

Question
at the beginning of our exposition there must be mathematical words or symbols which we do not define in terms of others but merely take as given: they are called [...]. And proof must start somewhere, just as definition must. If we are to avoid an infinite regress, there must be some propositions that are not proved but can be used in the proofs of the theorems. Such propositions are called axioms
Answer
primitives



Parent (intermediate) annotation

Open it
at the beginning of our exposition there must be mathematical words or symbols which we do not define in terms of others but merely take as given: they are called primitives. And proof must start somewhere, just as definition must. If we are to avoid an infinite regress, there must be some propositions that are not proved but can be used in the proofs of the

Original toplevel document (pdf)








Flashcard 1741072239884

Question
At the core of attitudes to the axiomatic method that may be called realist is the view that [...] and so it may be possible to provide a meaning for the primitive terms of our theory in advance of laying down the axioms: perhaps they are previously understood terms of ordinary language; or, if not, we may be able to establish the intended meanings by means of what Frege calls elucidations — informal explanations which suffice to indicate the intended meanings of terms. But elucidation, Frege says, is inessential. It merely serves the purpose of mutual understanding among investigators, as well as of the communication of science to others. We may relegate it to a propaedeutic. It has no place in the system of a science; no conclusions are based on it. Someone who pursued research only by himself would not need it
Answer
‘undefined’ does not entail ‘meaningless’



Parent (intermediate) annotation

Open it
At the core of attitudes to the axiomatic method that may be called realist is the view that ‘undefined’ does not entail ‘meaningless’ and so it may be possible to provide a meaning for the primitive terms of our theory in advance of laying down the axioms: perhaps they are previously understood terms of ordinary langu

Original toplevel document (pdf)








A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and the users





We can also view a computer system as consisting of hardware, software, and data. The operating system provides the means for proper use of these resources in the operation of the computer system. An operating system is similar to a government. Like a government, it performs no useful function by itself. It simply provides an environment within which other programs can do useful work.





The user’s view of the computer varies according to the interface being used. Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse, and system unit. Such a system is designed for one user





to monopolize its resources. The goal is to maximize the work (or play) that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance and none paid to resource utilization—how various hardware and software resources are shared. Performance is, of course, important to the user; but such systems are optimized for the single-user experience rather than the requirements of multiple users.

In other cases, a user sits at a terminal connected to a mainframe or a minicomputer. Other users are accessing the same computer through other terminals. These users share resources and may exchange information. The operating system in such cases is designed to maximize resource utilization—to assure that all available CPU time, memory, and I/O are used efficiently and that no individual user takes more than her fair share.

In still other cases, users sit at workstations connected to networks of other workstations and servers. These users have dedicated resources at their disposal, but they also share resources such as networking and servers, including file, compute, and print servers. Therefore, their operating system is designed to compromise between individual usability and resource utilization.

Recently, many varieties of mobile computers, such as smartphones and tablets, have come into fashion. Most mobile computers are standalone units for individual users. Quite often, they are connected to networks through cellular or other wireless technologies. Increasingly, these mobile devices are replacing desktop and laptop computers for people who are primarily interested in using computers for e-mail and web browsing. The user interface for mobile computers generally features a touch screen, where the user interacts with the system by pressing and swiping fingers across the screen rather than using a physical keyboard and mouse.

Some computers have little or no user view. For example, embedded computers in home devices and automobiles may have numeric keypads and may turn indicator lights on or off to show status, but they and their operating systems are designed primarily to run without user intervention.





From the computer’s point of view, the operating system is the program most intimately involved with the hardware. In this context, we can view an operating system as a resource allocator. A computer system has many resources that may be required to solve a problem: CPU time, memory space, file-storage space, I/O devices, and so on. The operating system acts as the manager of these resources. Facing numerous and possibly conflicting requests for resources, the operating system must decide how to allocate them to specific programs and users so that it can operate the computer system efficiently and fairly. As we have seen, resource allocation is especially important where many users access the same mainframe or minicomputer. A slightly different view of an operating system emphasizes the need to control the various I/O devices and user programs. An operating system is a control program. A control program manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.





#measure-theory
Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X (see Definition below). It must further be countably additive:
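A toy illustration of additivity using the counting measure (the sets are chosen arbitrarily); finite additivity is shown, and the countable case extends the same idea:

```python
# Counting measure on subsets of a set X: mu(A) = number of elements of A.
def mu(A):
    return len(A)

A = {1, 2}
B = {3}
C = {4, 5, 6}
# The three sets are pairwise disjoint...
assert A.isdisjoint(B) and A.isdisjoint(C) and B.isdisjoint(C)

# ...so the measure of the union equals the sum of the measures.
lhs = mu(A | B | C)
rhs = mu(A) + mu(B) + mu(C)
```

The counting measure is also the "trivial example" mentioned in the source excerpt below: it is one of the few measures definable on every subset of X without restricting to a σ-algebra.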



Parent (intermediate) annotation

Open it
d volume of Euclidean geometry to suitable subsets of the \(n\)-dimensional Euclidean space \(\mathbb{R}^n\). For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word – specifically, 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X (see Definition below). It must further be countably additive: the measure of a 'large' subset that can be decomposed into a finite (or countably infinite) number of 'smaller' disjoint subsets, is the sum of the measures of the "smaller"

Original toplevel document

Measure (mathematics) - Wikipedia
Informally, a measure has the property of being monotone in the sense that if A is a subset of B, the measure of A is less than or equal to the measure of B. Furthermore, the measure of the empty set is required to be 0. In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. A particularly important example is the Lebesgue measure on a Euclidean space, which assigns the conventional length, area, and volume of Euclidean geometry to suitable subsets of the \(n\)-dimensional Euclidean space \(\mathbb{R}^n\). For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word – specifically, 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X (see Definition below). It must further be countably additive: the measure of a 'large' subset that can be decomposed into a finite (or countably infinite) number of 'smaller' disjoint subsets, is the sum of the measures of the "smaller" subsets. In general, if one wants to associate a consistent size to each subset of a given set while satisfying the other axioms of a measure, one only finds trivial examples like the counting measure. This problem was resolved by defining measure only on a sub-collection of all subsets; the so-called measurable subsets, which are required to form a σ-algebra. This means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are necessarily complicated in the sense of being badly mixed up with their complement. [1] Indeed, their existence is a non-trivial consequence of the axiom of choice.
Measure theory was developed in successive stages during the late 19th and early 20th centuries by Émile Borel, Henri Lebesgue, Johann Radon, and Maurice Fréchet, among others. The ma
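The countable additivity requirement quoted above can be written out explicitly; this is a standard rendering of the condition, not a formula taken from the article:

```latex
% Countable additivity: for pairwise disjoint measurable sets A_1, A_2, ...
\mu\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) \;=\; \sum_{i=1}^{\infty} \mu(A_i),
\qquad A_i \cap A_j = \emptyset \ \text{for } i \neq j.
```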




In general, we have no completely adequate definition of an operating system. Operating systems exist because they offer a reasonable way to solve the problem of creating a usable computing system. The fundamental goal of computer systems is to execute user programs and to make solving user problems easier. Computer hardware is constructed toward this goal. Since bare hardware alone is not particularly easy to use, application programs are developed. These programs require certain common operations, such as those controlling the I/O devices. The common functions of controlling and allocating resources are then brought together into one piece of software: the operating system.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs




Flashcard 1741091114252

Tags
#measure-theory
Question
Technically, a measure is a function that assigns [...] to (certain) subsets of a set X
Answer
a non-negative real number or +∞


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X (see Definition below). It must further be countably additive:

Original toplevel document

Measure (mathematics) - Wikipedia







Flashcard 1741092687116

Tags
#measure-theory
Question
Technically, a measure is a function that assigns a non-negative real number or +∞ to [...] (see Definition below).
Answer
(certain) subsets of a set X


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X (see Definition below). It must further be countably additive:

Original toplevel document

Measure (mathematics) - Wikipedia







Flashcard 1741094259980

Tags
#measure-theory
Question
As its one singularly important property, a measure must be [...]


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X (see Definition below). It must further be countably additive:

Original toplevel document

Measure (mathematics) - Wikipedia







#probability-measure
The difference between a probability measure and the more general notion of measure (which includes concepts like area or volume) is that a probability measure must assign value 1 to the entire probability space.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

Probability measure - Wikipedia
In mathematics, a probability measure is a real-valued function defined on a set of events in a probability space that satisfies measure properties such as countable additivity. [3] The difference between a probability measure and the more general notion of measure (which includes concepts like area or volume) is that a probability measure must assign value 1 to the entire probability space. Intuitively, the additivity property says that the probability assigned to the union of two disjoint events by the measure should be the sum of the probabilities of the events, e.g. t
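A minimal sketch of the distinction: a discrete measure on a finite sample space is a probability measure exactly when the whole space gets mass 1. The dictionary representation and the helper name `measure_of` are illustrative, not from the article.

```python
# Sketch: a discrete (finite) measure given by point masses.
# A probability measure must assign 1 to the entire sample space.

def measure_of(point_mass, event):
    """Measure of an event = sum of the masses of its points."""
    return sum(point_mass[w] for w in event)

omega = {"H", "T"}
P = {"H": 0.5, "T": 0.5}      # a probability measure: P(omega) == 1
mu = {"H": 2.0, "T": 3.0}     # a perfectly good measure, but not a probability

print(measure_of(P, omega))   # 1.0 -> probability measure
print(measure_of(mu, omega))  # 5.0 -> just a (positive) measure

# Additivity on disjoint events: P({H} ∪ {T}) = P({H}) + P({T})
assert measure_of(P, {"H", "T"}) == measure_of(P, {"H"}) + measure_of(P, {"T"})
```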




Flashcard 1741100027148

Tags
#probability-measure
Question
Compared to the more general notion of measure, a probability measure must assign value 1 to [...].
Answer
the entire probability space


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
The difference between a probability measure and the more general notion of measure (which includes concepts like area or volume) is that a probability measure must assign value 1 to the entire probability space.

Original toplevel document

Probability measure - Wikipedia







In addition, we have no universally accepted definition of what is part of the operating system. A simple viewpoint is that it includes everything a vendor ships when you order “the operating system.” The features included, however, vary greatly across systems. Some systems take up less than a megabyte of space and lack even a full-screen editor, whereas others require gigabytes of space and are based entirely on graphical windowing systems. A more common definition, and the one that we usually follow, is that the operating system is the one program running at all times on the computer—usually called the kernel. (Along with the kernel, there are two other types of programs: system programs, which are associated with the operating system but are not necessarily part of the kernel, and application programs, which include all programs not associated with the operation of the system.)

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs




#expectation-operator

The expected value does not exist for random variables having some distributions with large "tails", such as the Cauchy distribution.[3] For random variables such as these, the long tails of the distribution prevent the sum/integral from converging.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

Expected value - Wikipedia
on subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure. [1] [2] The expected value does not exist for random variables having some distributions with large "tails", such as the Cauchy distribution. [3] For random variables such as these, the long-tails of the distribution prevent the sum/integral from converging. The expected value is a key aspect of how one characterizes a probability distribution; it is one type of location parameter. By contrast, the variance is a measure of dispersion of t
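The non-existence of the Cauchy mean shows up empirically: running sample means never settle down, no matter how many draws are taken. This sketch draws standard Cauchy variates via the inverse-CDF identity X = tan(π(U − 1/2)); the function names are illustrative.

```python
import math
import random

def cauchy_sample(n, rng):
    """Standard Cauchy via inverse CDF: X = tan(pi * (U - 1/2))."""
    return [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]

def running_means(xs):
    """Prefix averages x_1, (x_1+x_2)/2, (x_1+...+x_n)/n."""
    total, out = 0.0, []
    for i, x in enumerate(xs, start=1):
        total += x
        out.append(total / i)
    return out

rng = random.Random(0)
means = running_means(cauchy_sample(100_000, rng))
# Unlike a distribution with a finite mean, these running means keep
# jumping around: the heavy tails prevent the average from converging.
print(means[999], means[9_999], means[99_999])
```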




#measure-theory #stochastics
Let Ω be an arbitrary set. A sigma-algebra for Ω is a family \(\mathcal{A}\) of subsets of Ω that
  1. contains Ω and
  2. is closed under complements and countable unions and intersections.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs
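For a finite Ω the sigma-algebra conditions (contains Ω, closed under complements and unions/intersections) can be checked by brute force, since countable closure reduces to pairwise closure. The helper names `is_sigma_algebra` and `powerset` below are illustrative, not from the text.

```python
from itertools import chain, combinations

def powerset(omega):
    """All subsets of a finite set, as frozensets."""
    s = list(omega)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def is_sigma_algebra(omega, family):
    """Brute-force check on a finite omega: contains omega and is closed
    under complements and pairwise unions/intersections (enough here)."""
    fam = {frozenset(a) for a in family}
    om = frozenset(omega)
    if om not in fam:
        return False
    for a in fam:
        if om - a not in fam:                      # complement
            return False
        for b in fam:
            if a | b not in fam or a & b not in fam:
                return False
    return True

omega = {1, 2, 3}
print(is_sigma_algebra(omega, [set(), omega]))       # smallest: True
print(is_sigma_algebra(omega, powerset(omega)))      # largest (power set): True
print(is_sigma_algebra(omega, [set(), {1}, omega]))  # missing {2,3}: False
```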




#measure-theory #stochastics
The smallest sigma-algebra is {∅, Ω}. It must contain Ω by definition, and it must contain ∅ because ∅ = Ω^c, the complement of Ω. Unions and intersections of Ω and ∅ give us the same sets back, no new sets.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs




#measure-theory #stochastics
The largest sigma-algebra is the set of all subsets of Ω, called the power set of Ω.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs




#measure-theory #stochastics
A set Ω equipped with a sigma-algebra \(\mathcal{A}\) is called a measurable space and usually denoted as a pair \( ( \Omega, \mathcal{A} ) \). In this context, the elements of \(\mathcal{A}\) are called measurable sets.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs




#measure-theory #stochastics
A positive measure on a measurable space \( (\Omega, \mathcal{A} ) \) is a countably additive function µ : \(\mathcal{A}\) → R ∪ {∞} that satisfies µ(A) ≥ 0 for all A ∈ \(\mathcal{A}\).

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs




#measure-theory #stochastics
A signed measure on a measurable space \( (\Omega, \mathcal{A} ) \) is a function µ : \(\mathcal{A}\) → R that is countably additive

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs




#measure-theory #stochastics
Counting measure is a positive measure that counts the number of points in a set

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs
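The counting measure is easy to realize concretely for finite sets: the measure of a set is just its number of elements. A minimal illustrative sketch:

```python
def counting_measure(A):
    """Counting measure: the number of points in a (finite) set."""
    return len(A)

A, B = {1, 2, 3}, {10, 20}           # disjoint sets
assert counting_measure(A) >= 0       # positivity
# Additivity on disjoint sets: mu(A ∪ B) = mu(A) + mu(B)
assert counting_measure(A | B) == counting_measure(A) + counting_measure(B)
print(counting_measure(A | B))        # 5
```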




#measure-theory #stochastics
Lebesgue measure on R corresponds to the dx of ordinary calculus: \( \mu(A) = \int_A dx \) whenever A is a set over which the Riemann integral is defined.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs
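For a set A over which the Riemann integral is defined, \( \mu(A) = \int_A dx \) can be approximated numerically by integrating the indicator function of A. The grid resolution `n` and the function names are arbitrary illustrative choices.

```python
def lebesgue_measure_approx(indicator, lo, hi, n=100_000):
    """Approximate mu(A) = integral of 1_A(x) dx by a midpoint Riemann sum
    over [lo, hi], assuming A is contained in that window."""
    dx = (hi - lo) / n
    return sum(indicator(lo + (i + 0.5) * dx) for i in range(n)) * dx

def in_unit_interval(x):
    """Indicator function of A = [0, 1]."""
    return 1 if 0.0 <= x <= 1.0 else 0

# The Lebesgue measure of [0, 1] is its length, 1.
print(lebesgue_measure_approx(in_unit_interval, -2.0, 2.0))  # ~= 1.0
```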




#measure-theory #stochastics
If P and Q are probability measures, then P − Q is a signed measure, so we need signed measures to compare probability measures. If µ and ν are signed measures and a and b are real numbers, then aµ + bν is a signed measure, so the family of all signed measures on a measurable space is a vector space. The latter explains why signed measures are of interest in real analysis; the former explains why they are of interest in probability theory.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs
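The remark that P − Q is a signed measure can be made concrete for discrete measures on a finite space; the dict representation and the helper name `sub` are illustrative.

```python
def sub(P, Q):
    """Pointwise difference of two discrete measures -> a signed measure."""
    keys = set(P) | set(Q)
    return {w: P.get(w, 0.0) - Q.get(w, 0.0) for w in sorted(keys)}

P = {"a": 0.5, "b": 0.5}
Q = {"a": 0.25, "b": 0.75}
nu = sub(P, Q)
print(nu)  # a: 0.25, b: -0.25 -- takes both signs, so not a positive measure
# Because P and Q are probability measures (total mass 1 each),
# the signed measure P - Q has total mass 0:
assert abs(sum(nu.values())) < 1e-12
```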




#measure-theory #stochastics
A measurable space (Ω, A) equipped with a measure µ, either a positive measure or a signed measure, is called a measure space and usually denoted as a triple (Ω, A, µ).

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs




Flashcard 1741123357964

Tags
#expectation-operator
Question

The expected value does not exist for random variables having some distributions with [...], such as the Cauchy distribution.[3]

Answer
large "tails"


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
The expected value does not exist for random variables having some distributions with large "tails", such as the Cauchy distribution. [3] For random variables such as these, the long-tails of the distribution prevent the sum/integral from converging.

Original toplevel document

Expected value - Wikipedia







Flashcard 1741124930828

Tags
#measure-theory #stochastics
Question
One way to [...] is to use the P and E operators (and related operators like var, cov, and cor) exclusively rather than writing sums and integrals.
Answer
unify the discrete and continuous cases


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
One way to unify the discrete and continuous cases is to use the P and E operators (and related operators like var, cov, and cor) exclusively rather than writing sums and integrals.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741126503692

Tags
#measure-theory #stochastics
Question
One way to unify the discrete and continuous cases is to use [...] (and related operators like var, cov, and cor) exclusively rather than writing sums and integrals.
Answer
the P and E operators


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
One way to unify the discrete and continuous cases is to use the P and E operators (and related operators like var, cov, and cor) exclusively rather than writing sums and integrals.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741128076556

Tags
#measure-theory #stochastics
Question
Note that P is [...] for each different probability distribution
Answer
a different measure

A measure is a function that maps a set to a non-negative number


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Note that P is a different function for each different probability distribution

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741132270860

Tags
#measure-theory #stochastics
Question
P gives probabilities of events, so it is a function \( A \mapsto P(A) \) that [...].
Answer
maps events to real numbers


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
P gives probabilities of events, so it is a function A↦P(A) that maps events to real numbers.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741134630156

Tags
#measure-theory #stochastics
Question
E gives expectations of random variables, so it is a function \( X \mapsto E(X) \) that maps [...] to [...]
Answer
random variables to real numbers.

Since the E operator operates on random variables, it's a function of functions; that is, it's a function that takes functions as input.


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
E gives expectations of random variables, so it is a function X↦E(X) that maps random variables to real numbers.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741136989452

Tags
#measure-theory #stochastics
Question
A [...] for set Ω is a family \(\mathcal{A}\) of subsets of Ω that contains Ω and is closed under complements and countable unions and intersections.
Answer
sigma-algebra


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Let Ω be an arbitrary set. A sigma-algebra for Ω is a family \(\mathcal{A}\) of subsets of Ω that contains Ω and is closed under complements and countable unions and intersections.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741139348748

Tags
#measure-theory #stochastics
Question
A sigma-algebra for Ω must contain [...]
Answer


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Let Ω be an arbitrary set. A sigma-algebra for Ω is a family \(\mathcal{A}\) of subsets of Ω that contains Ω and is closed under complements and countable unions and intersections.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741140921612

Tags
#measure-theory #stochastics
Question
A sigma-algebra for Ω is closed under [...].
Answer
complements and countable unions and intersections

Is this property stopping the Banach-Tarski paradox?


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Let Ω be an arbitrary set. A sigma-algebra for Ω is a family of subsets of Ω that contains Ω and is closed under complements and countable unions and intersections.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741142494476

Tags
#measure-theory #stochastics
Question
The smallest sigma-algebra is [...].
Answer
{∅, Ω}


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
The smallest sigma-algebra is {∅, Ω}. It must contain Ω by definition, and it must contain ∅ because ∅ = Ω^c, the complement of Ω. Unions and intersections of Ω and ∅ give us the same sets back, no new sets.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741144067340

Tags
#measure-theory #stochastics
Question
The largest sigma-algebra is [...], called the power set of Ω.
Answer
the set of all subsets of Ω


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
The largest sigma-algebra is the set of all subsets of Ω, called the power set of Ω.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741145640204

Tags
#measure-theory #stochastics
Question
The largest sigma-algebra is the set of all subsets of Ω, called [...].
Answer
the power set of Ω


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
The largest sigma-algebra is the set of all subsets of Ω, called the power set of Ω.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741147999500

Tags
#measure-theory #stochastics
Question
[...] is called a measurable space
Answer
A set Ω equipped with a sigma-algebra


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
A set Ω equipped with a sigma-algebra \(\mathcal{A}\) is called a measurable space and usually denoted as a pair \( (\Omega, \mathcal{A}) \). In this context, the elements of \(\mathcal{A}\) are called measurable sets.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741150358796

Tags
#measure-theory #stochastics
Question
A set Ω equipped with a sigma-algebra \(\mathcal{A}\) is called a [...]
Answer
measurable space


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
A set Ω equipped with a sigma-algebra \(\mathcal{A}\) is called a measurable space and usually denoted as a pair \( (\Omega, \mathcal{A}) \). In this context, the elements of \(\mathcal{A}\) are called measurable sets.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741151931660

Tags
#measure-theory #stochastics
Question
the elements of a sigma-algebra \(\mathcal{A}\) are called [...].
Answer
measurable sets


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
A set Ω equipped with a sigma-algebra \(\mathcal{A}\) is called a measurable space and usually denoted as a pair \( (\Omega, \mathcal{A}) \). In this context, the elements of \(\mathcal{A}\) are called measurable sets.

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741155077388

Tags
#measure-theory #stochastics
Question
A [...] on a measurable space \( (\Omega, \mathcal{A} ) \) is a function µ : A → R ∪ {∞} that satisfies µ(A) ≥ 0, A ∈ \(\mathcal{A}\).
Answer
positive measure


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
A positive measure on a measurable space (Ω,) is a function µ : A → R ∪ {∞} that satisfies µ(A) ≥ 0, A ∈ .

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741157436684

Tags
#measure-theory #stochastics
Question
A [...] on a measurable space \( (\Omega, \mathcal{A} ) \) is a function µ : A → R that is countably additive
Answer
signed measure


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
A signed measure on a measurable space (Ω,) is a function µ : A → R that is countably additive

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741159009548

Tags
#measure-theory #stochastics
Question
A signed measure on a measurable space \( (\Omega, \mathcal{A} ) \) is a function [...] that is countably additive
Answer
µ : A → R


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
A signed measure on a measurable space is a function µ : \(\mathcal{A}\) → R that is countably additive

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741160582412

Tags
#measure-theory #stochastics
Question
[...] is a positive measure counts the number of points in a set
Answer
Counting measure


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Counting measure is a positive measure that counts the number of points in a set

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741162155276

Tags
#measure-theory #stochastics
Question
Counting measure is a positive measure counts [...]
Answer
the number of points in a set


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Counting measure is a positive measure that counts the number of points in a set

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741164514572

Tags
#measure-theory #stochastics
Question
Lebesgue measure for a set A on R corresponds to the [...] of ordinary calculus
Answer
dx


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it

Original toplevel document (pdf)

cannot see any pdfs







Flashcard 1741166873868

Tags
#measure-theory #stochastics
Question
[...] corresponds to the dx of ordinary calculus
Answer
Lebesgue measure on R


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Lebesgue measure on R corresponds to the dx of ordinary calculus: μ(A) = ∫_A dx whenever A is a set over which the Riemann integral is defined.

Original toplevel document (pdf)

cannot see any pdfs
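A rough numerical sketch of μ(A) = ∫_A dx (not from the source; the helper `lebesgue_measure` and the midpoint Riemann sum are for illustration only):

```python
def lebesgue_measure(indicator, lo, hi, n=100_000):
    """Approximate the Lebesgue measure of {x : indicator(x)} inside [lo, hi]
    by a midpoint Riemann sum of the integral of dx over the set."""
    dx = (hi - lo) / n
    return sum(indicator(lo + (i + 0.5) * dx) for i in range(n)) * dx

# The interval [2, 5] has Lebesgue measure 3, matching the Riemann integral:
mu = lebesgue_measure(lambda x: 2 <= x <= 5, 0, 10)
assert abs(mu - 3.0) < 1e-3
```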







Flashcard 1741168446732

Tags
#measure-theory #stochastics
Question
A measurable space (Ω, A) equipped with a measure µ, either a positive measure or a signed measure, is called a [...] and usually denoted as a triple (Ω, A, µ).
Answer
measure space


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
A measurable space (Ω, A) equipped with a measure µ, either a positive measure or a signed measure, is called a measure space and usually denoted as a triple (Ω, A, µ).

Original toplevel document (pdf)

cannot see any pdfs







#measure-theory #stochastics
If P and Q are probability measures, then P − Q is a signed measure, so we need signed measures to compare probability measures.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on


Parent (intermediate) annotation

Open it
If P and Q are probability measures, then P − Q is a signed measure, so we need signed measures to compare probability measures. If µ and ν are signed measures and a and b are real numbers, then aµ+bν is a signed measure, so the family of all signed measures on a measurable space is a vector space. The latter exp

Original toplevel document (pdf)

cannot see any pdfs
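The vector-space claim (aµ + bν is again a signed measure) can be illustrated with discrete point-mass measures; a sketch with an invented `combine` helper:

```python
def combine(a, mu, b, nu):
    """Signed measure a*mu + b*nu for discrete (point-mass) measures
    given as {point: mass} dictionaries."""
    points = set(mu) | set(nu)
    return {x: a * mu.get(x, 0.0) + b * nu.get(x, 0.0) for x in points}

# P and Q are probability measures; P - Q is a signed measure:
P = {"heads": 0.5, "tails": 0.5}
Q = {"heads": 0.7, "tails": 0.3}
diff = combine(1.0, P, -1.0, Q)
assert abs(diff["heads"] + 0.2) < 1e-12   # takes a negative value, so not a positive measure
assert abs(sum(diff.values())) < 1e-12    # total signed mass is 0
```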




Flashcard 1741171592460

Tags
#measure-theory #stochastics
Question
If P and Q are probability measures, then P − Q is a signed measure, so we need signed measures to [...].
Answer
compare probability measures


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
If P and Q are probability measures, then P − Q is a signed measure, so we need signed measures to compare probability measures.

Original toplevel document (pdf)

cannot see any pdfs







For a computer to start running—for instance, when it is powered up or rebooted—it needs to have an initial program to run. This initial program, or bootstrap program, tends to be simple. Typically, it is stored within the computer hardware in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM), known by the general term firmware. It initializes all aspects of the system, from CPU registers to device controllers to memory contents. The bootstrap program must know how to load the operating system and how to start executing that system.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs




Once the kernel is loaded and executing, it can start providing services to the system and its users. Some services are provided outside of the kernel, by system programs that are loaded into memory at boot time to become system processes, or system daemons, that run the entire time the kernel is running. On UNIX, the first system process is “init,” and it starts many other daemons. Once this phase is complete, the system is fully booted, and the system waits for some event to occur. The occurrence of an event is usually signaled by an interrupt from either the hardware or the software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus. Software may trigger an interrupt by executing a special operation called a system call (also called a monitor call). When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a fixed location. The fixed location usually contains the starting address where the service routine for the interrupt is located. The interrupt service routine executes; on completion, the CPU resumes the interrupted computation.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs




Interrupts are an important part of a computer architecture. Each computer design has its own interrupt mechanism, but several functions are common. The interrupt must transfer control to the appropriate interrupt service routine. The straightforward method for handling this transfer would be to invoke a generic routine to examine the interrupt information. The routine, in turn, would call the interrupt-specific handler. However, interrupts must be handled quickly. Since only a predefined number of interrupts is possible, a table of pointers to interrupt routines can be used instead to provide the necessary speed. The interrupt routine is called indirectly through the table, with no intermediate routine needed. Generally, the table of pointers is stored in low memory (the first hundred or so locations). These locations hold the addresses of the interrupt service routines for the various devices

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs
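The table-of-pointers dispatch described above can be mimicked in Python (interrupt numbers and handler names are invented for illustration):

```python
# Interrupt vector: a table indexed by interrupt number, each entry
# pointing directly at the service routine -- no intermediate routine.
def timer_isr():
    return "timer serviced"

def keyboard_isr():
    return "keyboard serviced"

IVT = [timer_isr, keyboard_isr]  # index = interrupt number

def raise_interrupt(n):
    # Indirect call through the table, as the text describes.
    return IVT[n]()

assert raise_interrupt(1) == "keyboard serviced"
```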




The interrupt architecture must also save the address of the interrupted instruction. Many old designs simply stored the interrupt address in a fixed location or in a location indexed by the device number. More recent architectures store the return address on the system stack. If the interrupt routine needs to modify the processor state—for instance, by modifying register values—it must explicitly save the current state and then restore that state before returning. After the interrupt is serviced, the saved return address is loaded into the program counter, and the interrupted computation resumes as though the interrupt had not occurred.

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

pdf

cannot see any pdfs
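A toy sketch of the save/modify/restore discipline from the paragraph above, with a made-up register file (names and values are invented):

```python
registers = {"pc": 100, "r0": 7}

def service_interrupt(handler):
    saved = dict(registers)        # explicitly save the current state
    handler(registers)             # the handler may modify register values
    registers.clear()
    registers.update(saved)        # restore the state before "returning"

service_interrupt(lambda regs: regs.update(r0=0))
assert registers["r0"] == 7        # interrupted computation resumes unchanged
```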




#has-images
26 basics of marriage law you should know, whether or not you are married

Author: Cai Sibin (蔡思斌)

How to protect your own interests in marriage and family is attracting more and more attention, because even the best marriage can run aground at any moment, and once it does, feelings, property, children and many other issues are all drawn in. Knowing some basics of marriage law is therefore quite necessary.

1. Most people treat the wedding ceremony as the start of their married life, not realizing that holding a ceremony without obtaining a marriage certificate does not amount to marriage in the legal sense; marriage requires registration.

Taking the implementation of the Marriage Registration Administration Regulations on 1 February 1994 as the cut-off, cohabitation after that date no longer gives rise to a "de facto marriage."

2. Ownership of a house bought during cohabitation follows the registration. If it is registered in both names, each party does not necessarily get half; it depends on each party's share of the purchase money. If the couple later marries, the split is basically fifty-fifty, and the original contributions are no longer considered.

3. Paying a betrothal gift (bride price) is a traditional custom, but if the marriage is never registered its return can be demanded. Even after registration, return may still be demanded at divorce under certain conditions, for example if the couple never lived together after registering, or if the giver's family was poor and bankrupted itself to pay the gift.

4. The old rule that premarital personal property converts into joint marital property after a certain period has long been invalid. Even after fifty years of marriage, the other side's premarital property is still theirs. Unless the parties specifically agree otherwise, premarital personal property remains personal property and does not convert automatically. Of course, if the assets have been commingled beyond distinction, that is another matter.

5. Property acquired during the marriage belongs to both spouses jointly (unless otherwise agreed in writing) and is generally divided equally at divorce, though the court sometimes awards the wronged party a larger share. "Larger" is not by much, so do not expect too much; an extra ten percent is already a great deal. Consider that a spouse's long-term affair and cohabitation with another person only earns you about twenty percent more; how much extra can an ordinary fault bring?

6. Buying a house is a major family expenditure, but because of differences in purchase time, source of funds and so on, disputes are likely at divorce, so it is advisable to add your name to the property certificate to minimize the risk.

7. Joint marital property must not be concealed. If one party discovers after the divorce that the other hid joint property, they may still sue in the people's court for a division. Where real property is to be divided, the claim is not subject to the limitation period.

8. Many divorcing parties, especially women, are at a disadvantage mainly because they do not know the family's income and expenses; they know there are houses, cars and savings but not the details. Knowing and keeping track of the family's finances is essential, if only as a precaution, so that the divorce judgment does not prove unenforceable or the assets get maliciously transferred. So pay a little attention day to day: it is not asking too much to know your spouse's usual bank accounts, securities firms and account numbers, company names, and safe-deposit box locations and numbers.

9. Many people mistakenly believe that after two or more years of separation a divorce takes effect automatically. Under Chinese marriage law, divorce happens only by agreement or through litigation; there is no such thing as automatic divorce. If you want out and cannot reach an agreement, sue promptly; after two years of separation many local courts still will not grant a divorce on the first suit, so successive divorce suits are the more reliable route.

10. As a rule, money one spouse borrows from a third party is repaid out of joint marital property, and divorce does not free you from it. But if the lender knew the spouses had signed an agreement keeping their property separate, the borrowing spouse alone repays. That tests your wits; in practice it is very hard to establish, since no creditor would be so foolish.

11. If your spouse is having an affair, preserve the evidence promptly: text messages, audio, video, correspondence, diaries, letters of repentance, written guarantees, call logs, and QQ or WeChat chat records can all be used as evidence.

Also, seize the golden 72 hours. When an affair first comes to light, the guilty party, out of remorse, tends to make relatively large concessions on property, custody and the divorce itself; if you have decided to divorce, move fast. Do not be too greedy, though: provoke their resentment, and once the guilt passes and they have consulted a lawyer, you will not get much.

12. If a spouse gives joint property to a mistress, the gift is void as contrary to social ethics and the property can be recovered. Sometimes husband and wife even cooperate to claw back what was given to the mistress. Professor Lang's (郎教授) case is a classic example; being a mistress is not easy either.

13. A "fidelity agreement" should spell out liability for damages; a blanket "leave with nothing" clause may be held invalid. Do not try to get too much or everything; a somewhat generous but not excessive fidelity agreement is more likely to win the judge's support.

14. Domestic violence is not just beating, tying up and other physical acts; psychological abuse also falls within its scope. If you suffer domestic violence, preserve the evidence promptly: police dispatch records, statements from both parties and injury assessments all carry great evidentiary weight.

A man hitting a woman is the worst of habits. In my experience handling cases, where there is a first time there will certainly be a second, and an nth. If you really encounter domestic violence, you can choose divorce after the very first time; do not count on the other side reforming.

Think about it: once someone has tasted the thrill of letting fists do the talking, why would they go back to talking things over with you? Disagree, and they hit: quick and simple. Once that line is crossed, there is nothing left to say.

15. If a spouse has a gambling habit, take care to preserve evidence, such as police dispatch records, neighborhood committee mediation records and recorded admissions, so you do not end up inexplicably saddled with gambling debts.

16. So-called "youth compensation" has no basis in law, but where a divorce is caused by one spouse's bigamy, cohabitation with another person, domestic violence, or abuse of family members, the innocent party is entitled to claim damages. Of course, if the money has already been paid, that is another matter.

17. Citizens have the right to reproduce, but a husband cannot force his wife to bear, or not bear, a child, not even through the courts. If the wife insists on not having children, the husband can realize his reproductive right by suing for divorce and marrying someone else.

18. For a child under two, custody at divorce generally goes to the mother, while for a child over ten the child's own preference is taken into account. The child's age and sex, each parent's overall circumstances, and how much the grandparents help with childcare are all factors in deciding custody.

Note, however, that with the General Provisions of the Civil Law now in force, the age threshold for limited civil capacity has been lowered to eight, and later judicial interpretations of the marriage law may be amended accordingly.

19. Child support is generally set at 20 to 30 percent of monthly income. That is only a theoretical figure; someone earning 100,000 a month will not necessarily pay 20,000, since local economic conditions and living standards are also considered.

20. An IOU between spouses is in fact valid and is treated as a special property agreement. Of course, there must be evidence the loan was actually advanced; if it exists only on paper, it has no effect.

21. When two people marry, they should still leave each other some space. Do not go through your spouse's phone, computer or wallet; that is the most basic mutual respect, and how much better to give each other trust and a sense of security.

22. If one party does not respect the spouse's parents, will not even greet them, and wants to cut off contact, it is better to divorce such a person early. Disrespecting your parents and elders means you count for nothing in their heart, and the marriage will collapse sooner or later.

23. Dating is fine, and an ordinary, unspectacular courtship is even better. If someone exhausts every trick to please you while you are dating, it means that after the marriage, once they stop liking you, they will have just as many ways to torment you, which can be quite frightening.

24. By the same token, if someone resorts to self-harm to prove their feelings during courtship, stay as far away from them as you can. Someone who can bring themselves to hurt their own body will have even less trouble hurting others.

25. A child born in wedlock may take either the father's or the mother's surname: whoever moves first wins. Changing the surname later requires both parties' agreement; otherwise it cannot be changed, even if the divorce agreement contains a clause permitting the change, since the police station likewise requires both parties to appear and consent. So if your divorce agreement has a surname-change clause, it is best to carry it out at the same time; otherwise, if the other party later reneges, there is nothing you can do.

26. In divorce proceedings, the court will divide a debt between the spouses only if both acknowledge it or both signed for it when it was incurred. Otherwise, where one party borrowed alone and the other denies it is a joint debt, the court generally declines to deal with the division and leaves it for the creditor to sue separately.

Source: Zhihu

statusnot read reprioritisations
last reprioritisation on reading queue position [%]
started reading on finished reading on

这 26 æ¡å©šå§»æ³•çš„å°å¸¸è¯†ï¼Œä½ ç»“æ²¡ç»“å©šéƒ½åº”è¯¥çŸ¥é“ - 博海拾贝 - 萝卜网