on 19-May-2018 (Sat)

Flashcard 1729607109900

Tags
#multivariate-normal-distribution
Question

In the bivariate case, the conditional mean of X1 given X2 is [...]

$$\mu_1 + \frac{\sigma_1}{\sigma_2} \rho (x_2 - \mu_2)$$

where $\rho$ is the correlation coefficient between X1 and X2.
Apparently both the correlation and variance should play a part!
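A quick Monte Carlo sketch of why both the correlation and the variances appear: sample a bivariate normal, condition on a thin slice of X2, and compare the empirical conditional mean (and variance) of X1 against the formulas. All parameter values below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = 1.0, -2.0
s1, s2, rho = 2.0, 0.5, 0.8
cov = [[s1**2, rho * s1 * s2],
       [rho * s1 * s2, s2**2]]
x1, x2 = rng.multivariate_normal([mu1, mu2], cov, size=1_000_000).T

# Condition on a thin slice around x2 = -1.5 (one sigma above mu2).
x2_star = -1.5
mask = np.abs(x2 - x2_star) < 0.02

theo_mean = mu1 + (s1 / s2) * rho * (x2_star - mu2)   # = 2.6
theo_var = (1 - rho**2) * s1**2                        # = 1.44
print(x1[mask].mean(), theo_mean)   # empirical mean ≈ 2.6
print(x1[mask].var(), theo_var)     # empirical variance ≈ 1.44
```

The slope σ1/σ2·ρ is exactly the regression coefficient of X1 on X2, which is why both the correlation and the ratio of standard deviations enter the conditional mean.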

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
In the bivariate case where x is partitioned into X1 and X2, the conditional distribution of X1 given X2 is $\mathcal{N}\!\left(\mu_1 + \frac{\sigma_1}{\sigma_2}\rho\,(x_2 - \mu_2),\;(1-\rho^2)\sigma_1^2\right)$, where $\rho$ is the correlation coefficient between X1 and X2.

Original toplevel document

Multivariate normal distribution - Wikipedia
$\mathbf{y}_1 = \mathbf{x}_1 - \boldsymbol{\Sigma}_{12}\boldsymbol{\Sigma}_{22}^{-1}\mathbf{x}_2$ are independent. The matrix $\Sigma_{12}\Sigma_{22}^{-1}$ is known as the matrix of regression coefficients. Bivariate case: In the bivariate case where x is partitioned into X1 and X2, the conditional distribution of X1 given X2 is [14]

$$X_1 \mid X_2 = x_2 \ \sim\ \mathcal{N}\left(\mu_1 + \frac{\sigma_1}{\sigma_2}\rho\,(x_2 - \mu_2),\;(1 - \rho^2)\sigma_1^2\right).$$

where $\rho$ is the correlation coefficient between X1 and X2.

Flashcard 1804562730252

Tags
#vim
Question
All find commands (character search) can be followed by [...] to go to the next searched item
; (semicolon)

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
All search commands can be followed by ; (semicolon) to go to the next searched item

Original toplevel document

A Great Vim Cheat Sheet
f [char] - Move to the next char on the current line after the cursor. F [char] - Move to the next char on the current line before the cursor. t [char] - Move to before the next char on the current line after the cursor. T [char] - Move to before the next char on the current line before the cursor. All these commands can be followed by ; (semicolon) to go to the next searched item, and , (comma) to go to the previous searched item.

Flashcard 1804591303948

Tags
#vim
Question
[... command line ...] - Split windows horizontally
:sp

or ctrl+ws

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
ctrl+ws - Split windows horizontally

Original toplevel document

A Great Vim Cheat Sheet
:e filename - Edit a file. :tabe - Make a new tab. gt - Go to the next tab. gT - Go to the previous tab. Advanced: :vsp - vertically split windows. ctrl+ws - Split windows horizontally. ctrl+wv - Split windows vertically. ctrl+ww - switch between windows. ctrl+wq - Quit a window.

Flashcard 2976375508236

Tags
#multivariate-normal-distribution
Question

In the bivariate case, the conditional variance of X1 given X2 is [...]

$$(1 - \rho^2 ) \sigma_1^2$$

where $\rho$ is the correlation coefficient between X1 and X2.
Apparently both the correlation and variance should play a part!

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
In the bivariate case where x is partitioned into X1 and X2, the conditional distribution of X1 given X2 is $\mathcal{N}\!\left(\mu_1 + \frac{\sigma_1}{\sigma_2}\rho\,(x_2 - \mu_2),\;(1-\rho^2)\sigma_1^2\right)$, where $\rho$ is the correlation coefficient between X1 and X2.

Original toplevel document

Multivariate normal distribution - Wikipedia

Flashcard 2976384945420

Tags
#calculus-of-variations
Question
Functionals are often expressed as [...] involving functions and their derivatives.

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
Functionals are often expressed as definite integrals involving functions and their derivatives.

Original toplevel document

Calculus of variations - Wikipedia
of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals, which are mappings from a set of functions to the real numbers. [Note 1] <span>Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations. A simple example of such a problem is to find the curve o
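A standard instance of such a functional, from the same article's shortest-curve example, shows the typical shape — a definite integral whose integrand depends on the function and its derivative:

```latex
% Arc length of a curve y(x) between x_1 and x_2,
% a functional J: y \mapsto \mathbb{R}
$$A[y] = \int_{x_1}^{x_2} \sqrt{1 + [y'(x)]^2}\,dx$$
```

Here the "function of a function" character is explicit: feeding in a whole curve $y(x)$ returns a single real number, its length.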

Annotation 2976386518284

 #calculus-of-variations Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation.[Note 6] Consider the functional $${\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.}$$ where x1, x2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, y ′ . If the functional J[y ] attains a local minimum at f , and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x1 and x2 , then for any number ε close to 0, $$J[f]\leq J[f+\varepsilon \eta ]\,.$$ The term εη is called the variation of the function f and is denoted by δf . [1] Substituting f + εη for y in the functional J[ y ] , the result is a function of ε , $$\Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.$$ Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus,[Note 7] $$\Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.$$ Taking the total derivative of L[x, y, y ′] , where y = f + ε η and y ′ = f ′ + ε η′ are functions of ε but x is not, $${\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}$$ and since dy /dε = η and dy ′/dε = η' , $${\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '$$ . 
Therefore,

$$\begin{aligned}\int_{x_1}^{x_2}\left.\frac{dL}{d\varepsilon}\right|_{\varepsilon=0}dx&=\int_{x_1}^{x_2}\left(\frac{\partial L}{\partial f}\eta+\frac{\partial L}{\partial f'}\eta'\right)\,dx\\&=\int_{x_1}^{x_2}\left(\frac{\partial L}{\partial f}\eta-\eta\frac{d}{dx}\frac{\partial L}{\partial f'}\right)\,dx+\left.\frac{\partial L}{\partial f'}\eta\right|_{x_1}^{x_2}\end{aligned}$$

where L[x, y, y′] → L[x, f, f′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x1...
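The Euler–Lagrange machinery above can be exercised symbolically. A minimal SymPy sketch for the arc-length Lagrangian $L = \sqrt{1 + y'^2}$ (the shortest-curve example the article goes on to discuss): derive the Euler–Lagrange equation and check that straight lines $y = ax + b$ satisfy it identically.

```python
# Sketch: Euler–Lagrange equation for L = sqrt(1 + y'^2) via SymPy,
# then verify straight lines are extremals (y'' terms all vanish).
from sympy import symbols, Function, sqrt, Derivative, simplify
from sympy.calculus.euler import euler_equations

x, a, b = symbols('x a b')
y = Function('y')

L = sqrt(1 + Derivative(y(x), x)**2)
eq = euler_equations(L, y(x), x)[0]   # dL/dy - d/dx dL/dy' = 0

# Substitute a straight line; the left-hand side should vanish.
residual = simplify(eq.lhs.subs(y(x), a*x + b).doit())
print(residual)  # 0
```

Every term of the resulting equation carries a factor of $y''$, so linear functions annihilate it, matching the claim that the extremal curve between two points is a straight line.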

Calculus of variations - Wikipedia
...more difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation. Main article: Euler–Lagrange equation.

Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional

$$J[y]=\int_{x_1}^{x_2}L(x,y(x),y'(x))\,dx\,.$$

where x1, x2 are constants, y(x) is twice continuously differentiable, y′(x) = dy/dx, and L(x, y(x), y′(x)) is twice continuously differentiable with respect to its arguments x, y, y′. If the functional J[y] attains a local minimum at f, and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x1 and x2, then for any number ε close to 0,

$$J[f]\leq J[f+\varepsilon\eta]\,.$$

The term εη is called the variation of the function f and is denoted by δf. [1] Substituting f + εη for y in the functional J[y], the result is a function of ε,

$$\Phi(\varepsilon)=J[f+\varepsilon\eta]\,.$$

Since the functional J[y] has a minimum for y = f, the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7]

$$\Phi'(0)\equiv\left.\frac{d\Phi}{d\varepsilon}\right|_{\varepsilon=0}=\int_{x_1}^{x_2}\left.\frac{dL}{d\varepsilon}\right|_{\varepsilon=0}dx=0\,.$$

Taking the total derivative of L[x, y, y′], where y = f + εη and y′ = f′ + εη′ are functions of ε but x is not,

$$\frac{dL}{d\varepsilon}=\frac{\partial L}{\partial y}\frac{dy}{d\varepsilon}+\frac{\partial L}{\partial y'}\frac{dy'}{d\varepsilon}$$

and since dy/dε = η and dy′/dε = η′,

$$\frac{dL}{d\varepsilon}=\frac{\partial L}{\partial y}\eta+\frac{\partial L}{\partial y'}\eta'\,.$$

Therefore,

$$\begin{aligned}\int_{x_1}^{x_2}\left.\frac{dL}{d\varepsilon}\right|_{\varepsilon=0}dx&=\int_{x_1}^{x_2}\left(\frac{\partial L}{\partial f}\eta+\frac{\partial L}{\partial f'}\eta'\right)dx\\&=\int_{x_1}^{x_2}\left(\frac{\partial L}{\partial f}\eta-\eta\frac{d}{dx}\frac{\partial L}{\partial f'}\right)dx+\left.\frac{\partial L}{\partial f'}\eta\right|_{x_1}^{x_2}\end{aligned}$$

where L[x, y, y′] → L[x, f, f′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x1 and x2 by definition. Also, as previously mentioned the left side of the equation is zero so that

$$\int_{x_1}^{x_2}\eta\left(\frac{\partial L}{\partial f}-\frac{d}{dx}\frac{\partial L}{\partial f'}\right)\,dx=0\,.$$

According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e.

$$\frac{\partial L}{\partial f}-\frac{d}{dx}\frac{\partial L}{\partial f'}=0$$

which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x). In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x). The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum of J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example: In order to illustrate this process, consider the problem of finding the extremal function y = f(x), which is the shortest curve that connects two points (x1, y1...

Flashcard 2976388091148

Tags
#calculus-of-variations
Question
An example of a necessary condition that is used for finding weak extrema is the [...]

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
...tinuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation.

Original toplevel document

Calculus of variations - Wikipedia
...or a function space of continuous functions, extrema of corresponding functionals are called weak extrema or strong extrema, depending on whether the first derivatives of the continuous functions are respectively all continuous or not. [11] Both strong and weak extrema of functionals are for a space of continuous functions but weak extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5]

Flashcard 2976390450444

Tags
#calculus-of-variations
Question
Weak extrema have the additional requirement of [...]
continuous first derivatives of the functions

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
Both strong and weak extrema of functionals are for a space of continuous functions but weak extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. [12] An example of a necessary condi

Original toplevel document

Calculus of variations - Wikipedia

Flashcard 2976392809740

Tags
#calculus-of-variations
Question

Very often the functional takes the form [...]

$${\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.}$$

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
...shes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional

$$J[y]=\int_{x_1}^{x_2}L(x,y(x),y'(x))\,dx\,.$$

where x1, x2 are constants, y(x) is twice continuously differentiable, y′(x) = dy/dx, and L(x, y(x), y′(x)) is twice continuously differentiable with respect to its arguments x...

Original toplevel document

Calculus of variations - Wikipedia

Annotation 2976395169036

 #calculus-of-variations the arbitrary function η(x) that has at least one derivative and vanishes at the endpoints x1 and x2

Parent (intermediate) annotation

Open it
...x2 are constants, y(x) is twice continuously differentiable, y′(x) = dy/dx, and L(x, y(x), y′(x)) is twice continuously differentiable with respect to its arguments x, y, y′. If the functional J[y] attains a local minimum at f, and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x1 and x2, then for any number ε close to 0, $$J[f]\leq J[f+\varepsilon\eta]\,.$$ The term εη is called the variation of the function f and is denoted by δf. [1] Substituting f + εη for...

Original toplevel document

Calculus of variations - Wikipedia

Flashcard 2976396741900

Tags
#calculus-of-variations
Question

A key step in finding the minimum of the functional J[y ] is to treat it as [...]

a function of ε
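A numerical sketch of this step: for a concrete functional and perturbation (my illustrative choices, not from the card) $J[y]=\int_0^1 y'(x)^2\,dx$ with endpoints $y(0)=0,\,y(1)=1$, minimizer $f(x)=x$, and $\eta(x)=\sin(\pi x)$ vanishing at both endpoints, the map $\varepsilon \mapsto J[f+\varepsilon\eta]$ is an ordinary function $\Phi(\varepsilon)$ with its minimum at $\varepsilon = 0$.

```python
# Sketch: treat J[f + eps*eta] as a plain function Phi(eps) and
# locate its minimum numerically; analytically Phi(eps) = 1 + eps^2*pi^2/2.
import numpy as np

def J(y, x):
    """Numerical J[y] = integral of y'(x)^2 dx via the trapezoid rule."""
    dy = np.gradient(y, x)
    g = dy**2
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))

def Phi(eps, n=10_001):
    x = np.linspace(0.0, 1.0, n)
    f = x                               # candidate minimizer: y(0)=0, y(1)=1
    eta = np.sin(np.pi * x)             # vanishes at both endpoints
    return J(f + eps * eta, x)

eps = np.linspace(-1.0, 1.0, 201)
vals = [Phi(e) for e in eps]
best = eps[int(np.argmin(vals))]
print(best)  # 0.0
```

Reducing the infinite-dimensional minimization to the one-dimensional condition $\Phi'(0)=0$ is precisely what makes the derivation of the Euler–Lagrange equation tractable.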

status measured difficulty not learned 37% [default] 0

Open it

Parent (intermediate) annotation

Open it
$$\int_{x_1}^{x_2}\eta\left(\frac{\partial L}{\partial f}-\frac{d}{dx}\frac{\partial L}{\partial f'}\right)\,dx=0\,.$$ According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. $$\frac{\partial L}{\partial f}-\frac{d}{dx}\frac{\partial L}{\partial f'}=0$$ which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x). In general this gives a second-order ordinary differential equation which ca...

Original toplevel document

Calculus of variations - Wikipedia

Flashcard 2976400674060

Tags
#calculus-of-variations
Question
the Euler–Lagrange equation takes the form [...]
$${\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0$$

A differential equation (an ODE in the one-variable case)

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
$${\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0$$ which is called the Euler–Lagrange equation.

Original toplevel document

Calculus of variations - Wikipedia
ore difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation Main article: Euler–Lagrange equation <span>Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional

$$J[y]=\int_{x_1}^{x_2} L(x,y(x),y'(x))\,dx\,.$$

where x1, x2 are constants, y(x) is twice continuously differentiable, y′(x) = dy/dx, and L(x, y(x), y′(x)) is twice continuously differentiable with respect to its arguments x, y, y′. If the functional J[y] attains a local minimum at f, and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x1 and x2, then for any number ε close to 0,

$$J[f]\leq J[f+\varepsilon\eta]\,.$$

The term εη is called the variation of the function f and is denoted by δf. [1] Substituting f + εη for y in the functional J[y], the result is a function of ε,

$$\Phi(\varepsilon)=J[f+\varepsilon\eta]\,.$$

Since the functional J[y] has a minimum for y = f, the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7]

$$\Phi'(0)\equiv\left.\frac{d\Phi}{d\varepsilon}\right|_{\varepsilon=0}=\int_{x_1}^{x_2}\left.\frac{dL}{d\varepsilon}\right|_{\varepsilon=0}dx=0\,.$$

Taking the total derivative of L[x, y, y′], where y = f + εη and y′ = f′ + εη′ are functions of ε but x is not,

$$\frac{dL}{d\varepsilon}=\frac{\partial L}{\partial y}\frac{dy}{d\varepsilon}+\frac{\partial L}{\partial y'}\frac{dy'}{d\varepsilon}$$

and since dy/dε = η and dy′/dε = η′,

$$\frac{dL}{d\varepsilon}=\frac{\partial L}{\partial y}\eta+\frac{\partial L}{\partial y'}\eta'\,.$$

Therefore,

$$\begin{aligned}\int_{x_1}^{x_2}\left.\frac{dL}{d\varepsilon}\right|_{\varepsilon=0}dx&=\int_{x_1}^{x_2}\left(\frac{\partial L}{\partial f}\eta+\frac{\partial L}{\partial f'}\eta'\right)dx\\&=\int_{x_1}^{x_2}\left(\frac{\partial L}{\partial f}\eta-\eta\frac{d}{dx}\frac{\partial L}{\partial f'}\right)dx+\left.\frac{\partial L}{\partial f'}\eta\right|_{x_1}^{x_2}\end{aligned}$$

where L[x, y, y′] → L[x, f, f′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x1 and x2 by definition. Also, as previously mentioned the left side of the equation is zero so that

$$\int_{x_1}^{x_2}\eta\left(\frac{\partial L}{\partial f}-\frac{d}{dx}\frac{\partial L}{\partial f'}\right)dx=0\,.$$

According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e.

$$\frac{\partial L}{\partial f}-\frac{d}{dx}\frac{\partial L}{\partial f'}=0$$

which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x). In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x). The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example In order to illustrate this process, consider the problem of finding the extremal function y = f(x), which is the shortest curve that connects two points (x1, y1
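The excerpt breaks off just as the shortest-curve example begins. As a brief sketch of how that standard example plays out: for the arc-length functional, L does not depend on y, so the Euler–Lagrange equation collapses to a conservation statement.

```latex
% Shortest curve between two points: L = sqrt(1 + y'^2) has no
% explicit y-dependence, so the Euler-Lagrange equation reduces to
% d/dx (dL/dy') = 0, i.e. dL/dy' is constant along the extremal.
\[
  J[y] = \int_{x_1}^{x_2} \sqrt{1 + y'^2}\, dx ,
  \qquad
  \frac{\partial L}{\partial y} = 0
  \;\Rightarrow\;
  \frac{d}{dx}\frac{\partial L}{\partial y'} = 0
  \;\Rightarrow\;
  \frac{y'}{\sqrt{1 + y'^2}} = c .
\]
% Solving for y' gives a constant slope, so the extremal f(x) is
% the straight line through (x_1, y_1) and (x_2, y_2).
```

Solving the last relation for y′ gives a constant slope, so the extremal f(x) is the straight line through the two endpoints, as expected.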

Flashcard 2976403819788

Tags
#calculus-of-variations
Question
the arbitrary function η(x) that has at least one derivative and [...]
vanishes at the endpoints x1 and x2

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
the arbitrary function η(x) that has at least one derivative and vanishes at the endpoints x1 and x2

Original toplevel document

Calculus of variations - Wikipedia
ore difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation Main article: Euler–Lagrange equation <span>Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional

$$J[y]=\int_{x_1}^{x_2} L(x,y(x),y'(x))\,dx\,.$$

where x1, x2 are constants, y(x) is twice continuously differentiable, y′(x) = dy/dx, and L(x, y(x), y′(x)) is twice continuously differentiable with respect to its arguments x, y, y′. If the functional J[y] attains a local minimum at f, and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x1 and x2, then for any number ε close to 0,

$$J[f]\leq J[f+\varepsilon\eta]\,.$$

The term εη is called the variation of the function f and is denoted by δf. [1] Substituting f + εη for y in the functional J[y], the result is a function of ε,

$$\Phi(\varepsilon)=J[f+\varepsilon\eta]\,.$$

Since the functional J[y] has a minimum for y = f, the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7]

$$\Phi'(0)\equiv\left.\frac{d\Phi}{d\varepsilon}\right|_{\varepsilon=0}=\int_{x_1}^{x_2}\left.\frac{dL}{d\varepsilon}\right|_{\varepsilon=0}dx=0\,.$$

Taking the total derivative of L[x, y, y′], where y = f + εη and y′ = f′ + εη′ are functions of ε but x is not,

$$\frac{dL}{d\varepsilon}=\frac{\partial L}{\partial y}\frac{dy}{d\varepsilon}+\frac{\partial L}{\partial y'}\frac{dy'}{d\varepsilon}$$

and since dy/dε = η and dy′/dε = η′,

$$\frac{dL}{d\varepsilon}=\frac{\partial L}{\partial y}\eta+\frac{\partial L}{\partial y'}\eta'\,.$$

Therefore,

$$\begin{aligned}\int_{x_1}^{x_2}\left.\frac{dL}{d\varepsilon}\right|_{\varepsilon=0}dx&=\int_{x_1}^{x_2}\left(\frac{\partial L}{\partial f}\eta+\frac{\partial L}{\partial f'}\eta'\right)dx\\&=\int_{x_1}^{x_2}\left(\frac{\partial L}{\partial f}\eta-\eta\frac{d}{dx}\frac{\partial L}{\partial f'}\right)dx+\left.\frac{\partial L}{\partial f'}\eta\right|_{x_1}^{x_2}\end{aligned}$$

where L[x, y, y′] → L[x, f, f′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x1 and x2 by definition. Also, as previously mentioned the left side of the equation is zero so that

$$\int_{x_1}^{x_2}\eta\left(\frac{\partial L}{\partial f}-\frac{d}{dx}\frac{\partial L}{\partial f'}\right)dx=0\,.$$

According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e.

$$\frac{\partial L}{\partial f}-\frac{d}{dx}\frac{\partial L}{\partial f'}=0$$

which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x). In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x). The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example In order to illustrate this process, consider the problem of finding the extremal function y = f(x), which is the shortest curve that connects two points (x1, y1

Annotation 2976409062668

 #item-response-theory Item response theory (IRT) (also known as latent trait theory, strong true score theory, or modern mental test theory) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables

Item response theory - Wikipedia
In psychometrics, item response theory (IRT) (also known as latent trait theory, strong true score theory, or modern mental test theory) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables. It is a theory of testing based on the relationship between individuals' performances on a test item and the test takers' levels of performance on an overall measure of the ability tha
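The annotation describes IRT only in general terms. As an illustrative sketch (not part of the excerpt), the widely used two-parameter logistic (2PL) model relates a test taker's latent ability θ to the probability of a correct response on an item; the function name and the parameter values below are hypothetical.

```python
import math

def item_response_2pl(theta, a, b):
    """Probability of a correct response under the two-parameter
    logistic (2PL) IRT model: theta is the latent ability,
    a is the item's discrimination, b is its difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals the item's difficulty, the probability is 0.5.
p_at_b = item_response_2pl(theta=1.0, a=1.5, b=1.0)

# Ability well above difficulty pushes the probability toward 1.
p_high = item_response_2pl(theta=3.0, a=1.5, b=1.0)
```

Plotting this function over θ gives the item characteristic curve: b shifts the curve left or right, while a controls how steeply it rises around θ = b.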

Flashcard 2976410635532

Tags
#item-response-theory
Question
[...] is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables
item response theory

status measured difficulty not learned 37% [default] 0

Parent (intermediate) annotation

Open it
Item response theory (IRT) (also known as latent trait theory, strong true score theory, or modern mental test theory) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and simil

Original toplevel document

Item response theory - Wikipedia
In psychometrics, item response theory (IRT) (also known as latent trait theory, strong true score theory, or modern mental test theory) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables. It is a theory of testing based on the relationship between individuals' performances on a test item and the test takers' levels of performance on an overall measure of the ability tha

Annotation 2976429247756

Notice on reposting. Source: Zhihu https://www.zhihu.com/question/53369195 Author: 奶包的大叔

In China, housing prices obey an iron law: the more they are regulated, the more they surge. Barely into May 2018, localities across the country had already issued more than 115 control policies. In April alone there were as many as 33 real-estate control measures nationwide, issued by 25 cities and government departments; Hainan, Beijing, Hangzhou and other cities each released multiple new property policies. Why such panicked, intensive regulation? Because of a brutal reality: we are fighting the laws of economics in a way no other country in the world has ever attempted. The consequences are obvious: if the attempt fails, a real-estate collapse will directly bring down China's bubble economy; if it succeeds, every economics textbook on earth will have to be rewritten, Das Kapital included.

An unavoidable question: how did we reach this precarious position from which there is no extricating ourselves? Because of a law of economic development: all economic problems ultimately reduce to debt problems. At the macro level, debt shows up in the three major sectors of the national economy: government, corporations, and households. Government and corporate debt and leverage ratios have stayed dangerously high for years, teetering on the edge of collapse. It was precisely to lower the debt and leverage of those two sectors that the last wave of leveraging up, and the current urgent wave of deleveraging, came about. Whose leverage goes up? Households'. Whose leverage comes down? The government's and the state-owned enterprises'. This is a debt transfer of enormous scale, carried out in a way with pronounced Chinese characteristics (no other country could replicate it): pump up real estate, with the government raising land prices and SOEs controlling the market ---> banks release targeted liquidity, households borrow, household debt and leverage rise ---> push up housing prices, monetize shantytown redevelopment, deploy a combination of policies ---> a flood of capital pours into real estate, after which a five-year resale ban is announced, freezing liquidity ---> the government and SOEs complete their deleveraging task, the market's tradable float disappears, and hundreds of trillions of yuan are locked in ---> the problem of excess money issuance is deferred a little longer.

Why mention excess money issuance? It raises the classic dilemma of exchange rate versus housing prices: defend the currency, or defend home prices? Given the sheer size of China's economy, losing either would be catastrophic, so the classic either/or becomes the characteristically Chinese problem of "both... and...". Clearly printing money cannot solve it; Venezuela is a living example. Foreign exchange reserves are the foundation of the exchange rate and the anchor of the renminbi, so the only (possibly) effective tool is capital controls. Yet in the 2016–2017 frenzy of renminbi depreciation and capital flight, China's reserves fell from $4 trillion down through the $3 trillion mark; only forceful capital controls and a sequence of maneuvers worthy of a Hollywood blockbuster turned depreciation expectations around. This time, though, capital controls were not the miracle cure they once were: although the renminbi climbed from 6.9 to 6.25 against the dollar, reserves recovered by less than $200 billion, and that with an extra $298.4 billion of new external debt plus roughly $300 billion of trade surplus. Of the current $3.12 trillion in reserves, external debt already accounts for about $1.8 trillion; in other words, the reserves actually available are only about $1 trillion. Now, with the dollar in a new appreciation cycle, the renminbi has fallen seven sessions in a row, and the newly published April figures show reserves down $18 billion from the month before. If, during this round of dollar strength, the renminbi cannot hold above 6.6, depreciation expectations will return, and by then we will have little ammunition left.

On one hand, Argentina's tragedy has already sounded the alarm for us; on the other, preventing a housing crash means the enormous property market must be sustained with funds on the order of a hundred trillion yuan. And at exactly this moment the trade war broke out. Of the three horses pulling the economy (exports, investment, domestic demand), only domestic demand is left, which is why the news shouts about upgrading domestic demand every day. The irony is that real estate, the biggest item of domestic demand, has crowded out almost all other consumption, so boosting domestic demand has become virtually a synonym for boosting real estate. Hence the many absurd scenes: at the Boao Forum the central bank had just said it would enter a tightening cycle, only to announce a reserve-requirement cut ten days later; even the central leadership has repeatedly stressed that "housing is for living in, not for speculation", while Fan Gang proposed the "six wallets". Behind such contradictory policies lies not only the brutality of the trade war but a larger risk: the punishment meted out by economic laws never vanishes at anyone's will.

(Encouraged by everyone's upvotes and comments, I have decided to keep writing updates.)

Stocks, bonds, foreign exchange, housing: these four markets are the lifeblood of a nation's finances. Start with housing (the property market). In a healthy economy, housing should not appear in the financial markets at all; but once property acquired financial attributes, it began to exert an incalculable force on the entire financial system. On this planet called Earth, only three countries have tried out this world-wrecking force: the three largest economies, the United States, China, and Japan. (Many small countries have tried it too, but their economies are too small for the damage to register worldwide.) America's subprime crisis and Japan's lost two decades finally taught humanity just how destructive this Thanos-grade power can be; now everyone is worried about what aftereffects China's current super-bubble will leave on the world economy. As for the stock market, the less said the better; it is a tale of tears, and the theory that "4,000 points is only the starting point" reads almost like a memorandum for leveraged destocking. Only the currency and bond markets are the true giants.

Take the currency market first. Because of its openness (the rate of one currency against another is stated explicitly, as an exchange relationship both sides agree to), forex trading volume exceeds that of all other financial trading combined: global securities markets trade roughly $300 billion a day, while daily forex turnover approaches $6 trillion. A country's or economy's exchange rate therefore often decides its survival. It is worth noting that behind this round of sharp renminbi swings, China is not the only one quietly maneuvering and guarding against capital flight; the United States has (all along) been running a tighter and deadlier strategic deployment. After the first round of the trade war succeeded with the sanctions on ZTE, the US held the initiative completely, yet the dollar index fell instead of rising, forcing the renminbi into passive appreciation; then, a few days before the US trade delegation came to Beijing to negotiate, the dollar index abruptly turned and began to appreciate. These seemingly anomalous maneuvers belong to a strategic layout that began back in 2016–2017: the deep integration of exchange-rate policy with the trade war, fought in several dimensions at once and coordinated with the Fed's balance-sheet reduction and rate hikes, where every bland-looking step hides a killing stroke.

Judging from the state of the trade war so far, the outlook is far from optimistic. When Trump fired the first shot, the whole country boiled over, and even Ministry of Commerce officials and economists kept stressing soybeans and cars; but once the US issued its sanctions order against ZTE, the officials and economists fell almost silent, leaving only the nationalist warriors still seething. Only then did people gradually grasp that a president who seemed to do nothing but stir up hatred and tweet all day actually had a shrewder, longer-range strategy. The strategic manipulation of the dollar index during the trade war, forcing the renminbi first up and then down, and especially the dollar's sudden rise now, carries enormous destructive power, yet it has been drowned out and ignored amid the nationwide clamor; few people truly recognize the larger crisis looming behind the renminbi's exchange rate. One telling fact: after the US delegation's first round of talks in China, we had no way of learning what was actually negotiated; only when Britain's Financial Times reported the terms offered by the American side did we learn that the talks had in fact collapsed, and only afterwards did the media report that the American asking price was too high. Yet the results of that first round are already visible: the imported-car market is opening, the insurance market is opening, and even the long-forbidden Qualcomm–Datang joint venture was approved at lightning speed. The mouth shouts "we will never bow", but the body is very honest.

Since some readers asked in the comments about Japan's debt, let me pull that part out separately. First, Japan's debt problem. Most of Japan's debt is domestic, held by Japanese citizens; under a policy environment of high welfare and ultra-low inflation, Japan's savings rate is even higher than China's, so the probability that Japanese citizens, the main holders of JGBs, would dump their bonds in a run on their own government is extremely low. At the same time, Japan holds the world's largest stock of overseas assets, and its high-tech industries and powerful industrial base provide solid backing for its debt. As for US Treasuries, nothing more need be said: the world's largest issuance, yet essentially never a debt crisis. In fact, Manhattan has its famous National Debt Clock, updating America's total public debt in real time and showing the share borne by each American family. It is hard to imagine such a clock appearing in China, because the invisible crisis it would create might outweigh the debt crisis itself.

It should be said that China's local government debt crisis is the direct fuse of this round of deleveraging. As early as 2014, local government debt had broken through 24 trillion yuan, a scale already exceeding Germany's GDP. Today, counting local government bonds and chengtou (local financing vehicle) bonds, China's bond market totals 76.01 trillion yuan, of which local debt accounts for 22.22 trillion. The graver problem is local governments' hidden debt: in the past three years especially, they have built up substantial hidden liabilities through PPP projects, government purchases of services, and government investment funds, in volumes exceeding, even several times over, the explicit debt. The ceaseless, rapid swelling of local debt makes everyone shudder. In October 2016 the State Council issued the Contingency Plan for Local Government Debt Risk Emergencies, stating: "Local governments bear repayment responsibility for the debts they have incurred; the central government applies a no-bailout principle." This was the first central no-backstop policy since the founding of the People's Republic; stripped of the central government's credit guarantee, local governments' reliance on the land-sale economy has fallen into an even more vicious cycle. So in recent years land-sale revenues have climbed steadily in every province and city, yet even booming land sales look pale beside the black hole of local government debt. In fact, looking at 2017 fiscal revenues, of the country's 36 provinces and municipalities (excluding Hong Kong, Macao and Taiwan), a mere 6 ran fiscal surpluses, totaling about 3 trillion yuan, while the other 31 ran combined deficits of more than 5 trillion; ...