Edited, memorised or added to reading queue

on 19-May-2018 (Sat)

Do you want BuboFlash to help you learning these things? Click here to log in or create user.

Flashcard 1729607109900

Tags
#multivariate-normal-distribution
Question

In the bivariate case, the conditional mean of X1 given X2 is [...]

Answer
\( \mu_1 + \frac{\sigma_1}{\sigma_2} \rho (x_2 - \mu_2) \)

where is the correlation coefficient between X1 and X2.
Apparently both the correlation and variance should play a part!

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
In the bivariate case where x is partitioned into X 1 and X 2 , the conditional distribution of X 1 given X 2 is where is the correlation coefficient between X 1 and X 2 .

Original toplevel document

Multivariate normal distribution - Wikipedia
mathbf {y} _{1}=\mathbf {x} _{1}-{\boldsymbol {\Sigma }}_{12}{\boldsymbol {\Sigma }}_{22}^{-1}\mathbf {x} _{2}} are independent. The matrix Σ 12 Σ 22 −1 is known as the matrix of regression coefficients. Bivariate case[edit source] <span>In the bivariate case where x is partitioned into X 1 and X 2 , the conditional distribution of X 1 given X 2 is [14] X 1 ∣ X 2 = x 2 ∼ N ( μ 1 + σ 1 σ 2 ρ ( x 2 − μ 2 ) , ( 1 − ρ 2 ) σ 1 2 ) . {\displaystyle X_{1}\mid X_{2}=x_{2}\ \sim \ {\mathcal {N}}\left(\mu _{1}+{\frac {\sigma _{1}}{\sigma _{2}}}\rho (x_{2}-\mu _{2}),\,(1-\rho ^{2})\sigma _{1}^{2}\right).} where ρ {\displaystyle \rho } is the correlation coefficient between X 1 and X 2 . Bivariate conditional expectation[edit source] In the general case[edit source] (







Flashcard 1804562730252

Tags
#vim
Question
All find commands (character search) can be followed by [...] to go to the next searched item
Answer
; (semicolon)

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
All search commands can be followed by ; (semicolon) to go to the next searched item

Original toplevel document

A Great Vim Cheat Sheet
er the cursor F [char] - Move to the next char on the current line before the cursor t [char] - Move to before the next char on the current line after the cursor T [char] - Move to before the next char on the current line before the cursor <span>All these commands can be followed by ; (semicolon) to go to the next searched item, and , (comma) to go the the previous searched item ##Insert/Appending/Editing Text Results in insert mode i - start insert mode at cursor I - insert at the beginning of the line a - append after the cursor A -







Flashcard 1804591303948

Tags
#vim
Question
[... command line ...] - Split windows horizontally
Answer
:sp

or ctrl+ws

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
ctrl+ws - Split windows horizontally

Original toplevel document

A Great Vim Cheat Sheet
ace all old with new throughout file with confirmations ##Working with multiple files :e filename - Edit a file :tabe - Make a new tab gt - Go to the next tab gT - Go to the previous tab Advanced :vsp - vertically split windows <span>ctrl+ws - Split windows horizontally ctrl+wv - Split windows vertically ctrl+ww - switch between windows ctrl+wq - Quit a window ##Marks Marks allow you to jump to designated points in your code. m{a-z} - Set







Flashcard 2976375508236

Tags
#multivariate-normal-distribution
Question

In the bivariate case, the conditional variance of X1 given X2 is [...]

Answer
\( (1 - \rho^2 ) \sigma_1^2 \)

where is the correlation coefficient between X1 and X2.
Apparently both the correlation and variance should play a part!

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
In the bivariate case where x is partitioned into X 1 and X 2 , the conditional distribution of X 1 given X 2 is where is the correlation coefficient between X 1 and X 2 .

Original toplevel document

Multivariate normal distribution - Wikipedia
mathbf {y} _{1}=\mathbf {x} _{1}-{\boldsymbol {\Sigma }}_{12}{\boldsymbol {\Sigma }}_{22}^{-1}\mathbf {x} _{2}} are independent. The matrix Σ 12 Σ 22 −1 is known as the matrix of regression coefficients. Bivariate case[edit source] <span>In the bivariate case where x is partitioned into X 1 and X 2 , the conditional distribution of X 1 given X 2 is [14] X 1 ∣ X 2 = x 2 ∼ N ( μ 1 + σ 1 σ 2 ρ ( x 2 − μ 2 ) , ( 1 − ρ 2 ) σ 1 2 ) . {\displaystyle X_{1}\mid X_{2}=x_{2}\ \sim \ {\mathcal {N}}\left(\mu _{1}+{\frac {\sigma _{1}}{\sigma _{2}}}\rho (x_{2}-\mu _{2}),\,(1-\rho ^{2})\sigma _{1}^{2}\right).} where ρ {\displaystyle \rho } is the correlation coefficient between X 1 and X 2 . Bivariate conditional expectation[edit source] In the general case[edit source] (







Flashcard 2976384945420

Tags
#calculus-of-variations
Question
Functionals are often expressed as [...] involving functions and their derivatives.

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Functionals are often expressed as definite integrals involving functions and their derivatives.

Original toplevel document

Calculus of variations - Wikipedia
of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals, which are mappings from a set of functions to the real numbers. [Note 1] <span>Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations. A simple example of such a problem is to find the curve o







#calculus-of-variations

Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation.[Note 6]

Consider the functional

\({\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.}\)

where

x1, x2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, y ′ .

If the functional J[y ] attains a local minimum at f , and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x1 and x2 , then for any number ε close to 0,

\(J[f]\leq J[f+\varepsilon \eta ]\,.\)

The term εη is called the variation of the function f and is denoted by δf . [1]

Substituting f + εη for y in the functional J[ y ] , the result is a function of ε ,

\(\Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.\)

Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus,[Note 7]

\(\Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.\)

Taking the total derivative of L[x, y, y ′] , where y = f + ε η and y ′ = f ′ + ε η′ are functions of ε but x is not,

\({\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}\)

and since dy / = η and dy ′/ = η' ,

\({\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '\) .

Therefore,

\({\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}\\\end{aligned}}\)

where L[x, y, y ′] → L[x, f, f ′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x1

...
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

Calculus of variations - Wikipedia
ore difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation[edit] Main article: Euler–Lagrange equation <span>Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional J [ y ] = ∫ x 1 x 2 L ( x , y ( x ) , y ′ ( x ) ) d x . {\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.} where x 1 , x 2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, y ′. If the functional J[y ] attains a local minimum at f , and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x 1 and x 2 , then for any number ε close to 0, J [ f ] ≤ J [ f + ε η ] . {\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.} The term εη is called the variation of the function f and is denoted by δf . [1] Substituting f + εη for y in the functional J[ y ] , the result is a function of ε, Φ ( ε ) = J [ f + ε η ] . {\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.} Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7] Φ ′ ( 0 ) ≡ d Φ d ε | ε = 0 = ∫ x 1 x 2 d L d ε | ε = 0 d x = 0 . {\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.} Taking the total derivative of L[x, y, y ′] , where y = f + ε η and y ′ = f ′ + ε η′ are functions of ε but x is not, d L d ε = ∂ L ∂ y d y d ε + ∂ L ∂ y ′ d y ′ d ε {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}} and since dy /dε = η and dy ′/dε = η' , d L d ε = ∂ L ∂ y η + ∂ L ∂ y ′ η ′ {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '} . Therefore, ∫ x 1 x 2 d L d ε | ε = 0 d x = ∫ x 1 x 2 ( ∂ L ∂ f η + ∂ L ∂ f ′ η ′ ) d x = ∫ x 1 x 2 ( ∂ L ∂ f η − η d d x ∂ L ∂ f ′ ) d x + ∂ L ∂ f ′ η | x 1 x 2 {\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}\\\end{aligned}}} where L[x, y, y ′] → L[x, f, f ′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x 1 and x 2 by definition. Also, as previously mentioned the left side of the equation is zero so that ∫ x 1 x 2 η ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) d x = 0 . {\displaystyle \int _{x_{1}}^{x_{2}}\eta \left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.} According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x) . In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x) . The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example[edit] In order to illustrate this process, consider the problem of finding the extremal function y = f (x) , which is the shortest curve that connects two points (x 1 , y 1




Flashcard 2976388091148

Tags
#calculus-of-variations
Question
An example of a necessary condition that is used for finding weak extrema is the [...]

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
tinuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the <span>Euler–Lagrange equation <span><body><html>

Original toplevel document

Calculus of variations - Wikipedia
or a function space of continuous functions, extrema of corresponding functionals are called weak extrema or strong extrema, depending on whether the first derivatives of the continuous functions are respectively all continuous or not. [11] <span>Both strong and weak extrema of functionals are for a space of continuous functions but weak extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation[edit source] Main article: Euler–Lagrange equation Finding the extrema of functionals is similar to finding the maxima and minima of funct







Flashcard 2976390450444

Tags
#calculus-of-variations
Question
weak extrema additionally requires [...].
Answer
continuous first derivatives of the functions

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
Both strong and weak extrema of functionals are for a space of continuous functions but weak extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. [12] An example of a necessary condi

Original toplevel document

Calculus of variations - Wikipedia
or a function space of continuous functions, extrema of corresponding functionals are called weak extrema or strong extrema, depending on whether the first derivatives of the continuous functions are respectively all continuous or not. [11] <span>Both strong and weak extrema of functionals are for a space of continuous functions but weak extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation[edit source] Main article: Euler–Lagrange equation Finding the extrema of functionals is similar to finding the maxima and minima of funct







Flashcard 2976392809740

Tags
#calculus-of-variations
Question

Very often the functional takes the form [...]

Answer
\({\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.}\)

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
shes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional <span>\({\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.}\) where x 1 , x 2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x

Original toplevel document

Calculus of variations - Wikipedia
ore difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation[edit] Main article: Euler–Lagrange equation <span>Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional J [ y ] = ∫ x 1 x 2 L ( x , y ( x ) , y ′ ( x ) ) d x . {\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.} where x 1 , x 2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, y ′. If the functional J[y ] attains a local minimum at f , and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x 1 and x 2 , then for any number ε close to 0, J [ f ] ≤ J [ f + ε η ] . {\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.} The term εη is called the variation of the function f and is denoted by δf . [1] Substituting f + εη for y in the functional J[ y ] , the result is a function of ε, Φ ( ε ) = J [ f + ε η ] . {\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.} Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7] Φ ′ ( 0 ) ≡ d Φ d ε | ε = 0 = ∫ x 1 x 2 d L d ε | ε = 0 d x = 0 . {\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.} Taking the total derivative of L[x, y, y ′] , where y = f + ε η and y ′ = f ′ + ε η′ are functions of ε but x is not, d L d ε = ∂ L ∂ y d y d ε + ∂ L ∂ y ′ d y ′ d ε {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}} and since dy /dε = η and dy ′/dε = η' , d L d ε = ∂ L ∂ y η + ∂ L ∂ y ′ η ′ {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '} . Therefore, ∫ x 1 x 2 d L d ε | ε = 0 d x = ∫ x 1 x 2 ( ∂ L ∂ f η + ∂ L ∂ f ′ η ′ ) d x = ∫ x 1 x 2 ( ∂ L ∂ f η − η d d x ∂ L ∂ f ′ ) d x + ∂ L ∂ f ′ η | x 1 x 2 {\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}\\\end{aligned}}} where L[x, y, y ′] → L[x, f, f ′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x 1 and x 2 by definition. Also, as previously mentioned the left side of the equation is zero so that ∫ x 1 x 2 η ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) d x = 0 . {\displaystyle \int _{x_{1}}^{x_{2}}\eta \left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.} According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x) . In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x) . The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example[edit] In order to illustrate this process, consider the problem of finding the extremal function y = f (x) , which is the shortest curve that connects two points (x 1 , y 1







#calculus-of-variations
the arbitrary function η(x) that has at least one derivative and vanishes at the endpoints x1 and x2
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
x 2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, y ′ . If the functional J[y ] attains a local minimum at f , and <span>η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x 1 and x 2 , then for any number ε close to 0, \(J[f]\leq J[f+\varepsilon \eta ]\,.\) The term εη is called the variation of the function f and is denoted by δf . [1] Substituting f + εη for

Original toplevel document

Calculus of variations - Wikipedia
ore difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation[edit] Main article: Euler–Lagrange equation <span>Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional J [ y ] = ∫ x 1 x 2 L ( x , y ( x ) , y ′ ( x ) ) d x . {\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.} where x 1 , x 2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, y ′. If the functional J[y ] attains a local minimum at f , and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x 1 and x 2 , then for any number ε close to 0, J [ f ] ≤ J [ f + ε η ] . {\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.} The term εη is called the variation of the function f and is denoted by δf . [1] Substituting f + εη for y in the functional J[ y ] , the result is a function of ε, Φ ( ε ) = J [ f + ε η ] . {\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.} Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7] Φ ′ ( 0 ) ≡ d Φ d ε | ε = 0 = ∫ x 1 x 2 d L d ε | ε = 0 d x = 0 . {\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.} Taking the total derivative of L[x, y, y ′] , where y = f + ε η and y ′ = f ′ + ε η′ are functions of ε but x is not, d L d ε = ∂ L ∂ y d y d ε + ∂ L ∂ y ′ d y ′ d ε {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}} and since dy /dε = η and dy ′/dε = η' , d L d ε = ∂ L ∂ y η + ∂ L ∂ y ′ η ′ {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '} . Therefore, ∫ x 1 x 2 d L d ε | ε = 0 d x = ∫ x 1 x 2 ( ∂ L ∂ f η + ∂ L ∂ f ′ η ′ ) d x = ∫ x 1 x 2 ( ∂ L ∂ f η − η d d x ∂ L ∂ f ′ ) d x + ∂ L ∂ f ′ η | x 1 x 2 {\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}\\\end{aligned}}} where L[x, y, y ′] → L[x, f, f ′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x 1 and x 2 by definition. Also, as previously mentioned the left side of the equation is zero so that ∫ x 1 x 2 η ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) d x = 0 . {\displaystyle \int _{x_{1}}^{x_{2}}\eta \left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.} According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x) . In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x) . The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example[edit] In order to illustrate this process, consider the problem of finding the extremal function y = f (x) , which is the shortest curve that connects two points (x 1 , y 1




Flashcard 2976396741900

Tags
#calculus-of-variations
Question

A key step in finding the minimum of the functional J[y ] is to treat it as [...]

Answer
a function of ε

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
the endpoints x 1 and x 2 , then for any number ε close to 0, \(J[f]\leq J[f+\varepsilon \eta ]\,.\) The term εη is called the variation of the function f and is denoted by δf . [1] Substituting f + εη for y in the functional J[ y ] , <span>the result is a function of ε , \(\Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.\) Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7] \(\Phi '(0)\equi

Original toplevel document

Calculus of variations - Wikipedia
ore difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation[edit] Main article: Euler–Lagrange equation <span>Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional J [ y ] = ∫ x 1 x 2 L ( x , y ( x ) , y ′ ( x ) ) d x . {\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.} where x 1 , x 2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, y ′. If the functional J[y ] attains a local minimum at f , and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x 1 and x 2 , then for any number ε close to 0, J [ f ] ≤ J [ f + ε η ] . {\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.} The term εη is called the variation of the function f and is denoted by δf . [1] Substituting f + εη for y in the functional J[ y ] , the result is a function of ε, Φ ( ε ) = J [ f + ε η ] . {\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.} Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7] Φ ′ ( 0 ) ≡ d Φ d ε | ε = 0 = ∫ x 1 x 2 d L d ε | ε = 0 d x = 0 . {\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.} Taking the total derivative of L[x, y, y ′] , where y = f + ε η and y ′ = f ′ + ε η′ are functions of ε but x is not, d L d ε = ∂ L ∂ y d y d ε + ∂ L ∂ y ′ d y ′ d ε {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}} and since dy /dε = η and dy ′/dε = η' , d L d ε = ∂ L ∂ y η + ∂ L ∂ y ′ η ′ {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '} . Therefore, ∫ x 1 x 2 d L d ε | ε = 0 d x = ∫ x 1 x 2 ( ∂ L ∂ f η + ∂ L ∂ f ′ η ′ ) d x = ∫ x 1 x 2 ( ∂ L ∂ f η − η d d x ∂ L ∂ f ′ ) d x + ∂ L ∂ f ′ η | x 1 x 2 {\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}\\\end{aligned}}} where L[x, y, y ′] → L[x, f, f ′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x 1 and x 2 by definition. Also, as previously mentioned the left side of the equation is zero so that ∫ x 1 x 2 η ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) d x = 0 . {\displaystyle \int _{x_{1}}^{x_{2}}\eta \left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.} According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x) . In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x) . The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example[edit] In order to illustrate this process, consider the problem of finding the extremal function y = f (x) , which is the shortest curve that connects two points (x 1 , y 1







#calculus-of-variations
\({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0\)

which is called the Euler–Lagrange equation.

statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on


Parent (intermediate) annotation

Open it
_{1}}^{x_{2}}\eta \left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.\) According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. <span>\({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0\) which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x) . In general this gives a second-order ordinary differential equation which ca

Original toplevel document

Calculus of variations - Wikipedia
ore difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation[edit] Main article: Euler–Lagrange equation <span>Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional J [ y ] = ∫ x 1 x 2 L ( x , y ( x ) , y ′ ( x ) ) d x . {\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.} where x 1 , x 2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, y ′. If the functional J[y ] attains a local minimum at f , and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x 1 and x 2 , then for any number ε close to 0, J [ f ] ≤ J [ f + ε η ] . {\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.} The term εη is called the variation of the function f and is denoted by δf . [1] Substituting f + εη for y in the functional J[ y ] , the result is a function of ε, Φ ( ε ) = J [ f + ε η ] . {\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.} Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7] Φ ′ ( 0 ) ≡ d Φ d ε | ε = 0 = ∫ x 1 x 2 d L d ε | ε = 0 d x = 0 . {\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.} Taking the total derivative of L[x, y, y ′] , where y = f + ε η and y ′ = f ′ + ε η′ are functions of ε but x is not, d L d ε = ∂ L ∂ y d y d ε + ∂ L ∂ y ′ d y ′ d ε {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}} and since dy /dε = η and dy ′/dε = η' , d L d ε = ∂ L ∂ y η + ∂ L ∂ y ′ η ′ {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '} . Therefore, ∫ x 1 x 2 d L d ε | ε = 0 d x = ∫ x 1 x 2 ( ∂ L ∂ f η + ∂ L ∂ f ′ η ′ ) d x = ∫ x 1 x 2 ( ∂ L ∂ f η − η d d x ∂ L ∂ f ′ ) d x + ∂ L ∂ f ′ η | x 1 x 2 {\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}\\\end{aligned}}} where L[x, y, y ′] → L[x, f, f ′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x 1 and x 2 by definition. Also, as previously mentioned the left side of the equation is zero so that ∫ x 1 x 2 η ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) d x = 0 . {\displaystyle \int _{x_{1}}^{x_{2}}\eta \left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.} According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x) . In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x) . The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example[edit] In order to illustrate this process, consider the problem of finding the extremal function y = f (x) , which is the shortest curve that connects two points (x 1 , y 1




Flashcard 2976400674060

Tags
#calculus-of-variations
Question
the Euler–Lagrange equation takes the form [...]
Answer
\({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0\)


A differential equation (ODE in one variable case)


statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
\({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0\) which is called the Euler–Lagrange equation.

Original toplevel document

Calculus of variations - Wikipedia
ore difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation[edit] Main article: Euler–Lagrange equation <span>Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional J [ y ] = ∫ x 1 x 2 L ( x , y ( x ) , y ′ ( x ) ) d x . {\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.} where x 1 , x 2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, y ′. If the functional J[y ] attains a local minimum at f , and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x 1 and x 2 , then for any number ε close to 0, J [ f ] ≤ J [ f + ε η ] . {\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.} The term εη is called the variation of the function f and is denoted by δf . [1] Substituting f + εη for y in the functional J[ y ] , the result is a function of ε, Φ ( ε ) = J [ f + ε η ] . {\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.} Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7] Φ ′ ( 0 ) ≡ d Φ d ε | ε = 0 = ∫ x 1 x 2 d L d ε | ε = 0 d x = 0 . {\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.} Taking the total derivative of L[x, y, y ′] , where y = f + ε η and y ′ = f ′ + ε η′ are functions of ε but x is not, d L d ε = ∂ L ∂ y d y d ε + ∂ L ∂ y ′ d y ′ d ε {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}} and since dy /dε = η and dy ′/dε = η' , d L d ε = ∂ L ∂ y η + ∂ L ∂ y ′ η ′ {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '} . Therefore, ∫ x 1 x 2 d L d ε | ε = 0 d x = ∫ x 1 x 2 ( ∂ L ∂ f η + ∂ L ∂ f ′ η ′ ) d x = ∫ x 1 x 2 ( ∂ L ∂ f η − η d d x ∂ L ∂ f ′ ) d x + ∂ L ∂ f ′ η | x 1 x 2 {\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}\\\end{aligned}}} where L[x, y, y ′] → L[x, f, f ′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x 1 and x 2 by definition. Also, as previously mentioned the left side of the equation is zero so that ∫ x 1 x 2 η ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) d x = 0 . {\displaystyle \int _{x_{1}}^{x_{2}}\eta \left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.} According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x) . In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x) . The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example[edit] In order to illustrate this process, consider the problem of finding the extremal function y = f (x) , which is the shortest curve that connects two points (x 1 , y 1







Flashcard 2976403819788

Tags
#calculus-of-variations
Question
the arbitrary function η(x) that has at least one derivative and [...]
Answer
vanishes at the endpoints x1 and x2

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
the arbitrary function η(x) that has at least one derivative and vanishes at the endpoints x 1 and x 2

Original toplevel document

Calculus of variations - Wikipedia
ore difficult than finding weak extrema. [12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation. [13] [Note 5] Euler–Lagrange equation[edit] Main article: Euler–Lagrange equation <span>Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions where the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation. [Note 6] Consider the functional J [ y ] = ∫ x 1 x 2 L ( x , y ( x ) , y ′ ( x ) ) d x . {\displaystyle J[y]=\int _{x_{1}}^{x_{2}}L(x,y(x),y'(x))\,dx\,.} where x 1 , x 2 are constants, y (x) is twice continuously differentiable, y ′(x) = dy / dx , L(x, y (x), y ′(x)) is twice continuously differentiable with respect to its arguments x, y, y ′. If the functional J[y ] attains a local minimum at f , and η(x) is an arbitrary function that has at least one derivative and vanishes at the endpoints x 1 and x 2 , then for any number ε close to 0, J [ f ] ≤ J [ f + ε η ] . {\displaystyle J[f]\leq J[f+\varepsilon \eta ]\,.} The term εη is called the variation of the function f and is denoted by δf . [1] Substituting f + εη for y in the functional J[ y ] , the result is a function of ε, Φ ( ε ) = J [ f + ε η ] . {\displaystyle \Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.} Since the functional J[ y ] has a minimum for y = f , the function Φ(ε) has a minimum at ε = 0 and thus, [Note 7] Φ ′ ( 0 ) ≡ d Φ d ε | ε = 0 = ∫ x 1 x 2 d L d ε | ε = 0 d x = 0 . {\displaystyle \Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.} Taking the total derivative of L[x, y, y ′] , where y = f + ε η and y ′ = f ′ + ε η′ are functions of ε but x is not, d L d ε = ∂ L ∂ y d y d ε + ∂ L ∂ y ′ d y ′ d ε {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}} and since dy /dε = η and dy ′/dε = η' , d L d ε = ∂ L ∂ y η + ∂ L ∂ y ′ η ′ {\displaystyle {\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '} . Therefore, ∫ x 1 x 2 d L d ε | ε = 0 d x = ∫ x 1 x 2 ( ∂ L ∂ f η + ∂ L ∂ f ′ η ′ ) d x = ∫ x 1 x 2 ( ∂ L ∂ f η − η d d x ∂ L ∂ f ′ ) d x + ∂ L ∂ f ′ η | x 1 x 2 {\displaystyle {\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}\\\end{aligned}}} where L[x, y, y ′] → L[x, f, f ′] when ε = 0 and we have used integration by parts. The last term vanishes because η = 0 at x 1 and x 2 by definition. Also, as previously mentioned the left side of the equation is zero so that ∫ x 1 x 2 η ( ∂ L ∂ f − d d x ∂ L ∂ f ′ ) d x = 0 . {\displaystyle \int _{x_{1}}^{x_{2}}\eta \left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.} According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. ∂ L ∂ f − d d x ∂ L ∂ f ′ = 0 {\displaystyle {\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0} which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of J[f] and is denoted δJ/δf(x) . In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function f(x) . The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum J[f]. A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example[edit] In order to illustrate this process, consider the problem of finding the extremal function y = f (x) , which is the shortest curve that connects two points (x 1 , y 1







#item-response-theory
tem response theory (IRT) (also known as latent trait theory, strong true score theory, or modern mental test theory) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

Item response theory - Wikipedia
>Item response theory - Wikipedia Item response theory From Wikipedia, the free encyclopedia Jump to: navigation, search In psychometrics, item response theory (IRT) (also known as latent trait theory, strong true score theory, or modern mental test theory) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables. It is a theory of testing based on the relationship between individuals' performances on a test item and the test takers' levels of performance on an overall measure of the ability tha




Flashcard 2976410635532

Tags
#item-response-theory
Question
[...] is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables
Answer
item response theory

statusnot learnedmeasured difficulty37% [default]last interval [days]               
repetition number in this series0memorised on               scheduled repetition               
scheduled repetition interval               last repetition or drill

Parent (intermediate) annotation

Open it
tem response theory (IRT) (also known as latent trait theory, strong true score theory, or modern mental test theory) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and simil

Original toplevel document

Item response theory - Wikipedia
>Item response theory - Wikipedia Item response theory From Wikipedia, the free encyclopedia Jump to: navigation, search In psychometrics, item response theory (IRT) (also known as latent trait theory, strong true score theory, or modern mental test theory) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables. It is a theory of testing based on the relationship between individuals' performances on a test item and the test takers' levels of performance on an overall measure of the ability tha







声明,转载,出处:
来源:知乎 https://www.zhihu.com/question/53369195
作者:奶包的大叔




在中国,房价问题有一个铁律:越调控越暴涨。
2018刚刚进入5月,全国各地发布的调控政策数量就已经超过了115个。仅4月份一个月,全国各种房地产调控政策合计就多达33次,25个城市与部门发布调控政策,其中海南、北京、杭州等城市发布了多次房地产相关新政策。
为何要如此恐慌式的密集调控呢?因为一个残酷的现实:我们正在用全世界绝无仅有的方式来对抗经济规律。
这样对抗的后果是显而易见的:一旦失败,房地产的崩溃将直接导致中国泡沫经济的崩溃;一旦成功,全人类的经济学教材都将重新改写,包括《资本论》。
一个无法回避的问题是:我们是如何走到这种命悬一线、无法自拔的境地的呢?
因为人类经济发展的定律:所有的经济问题都可以归结为债务问题。从宏观来看,债务问题体现在国民经济的三大部门:政府、企业、居民。其中,政府和企业部门的债务和杠杆率多年来一直都居高不下,处于高危崩溃的边缘。正是为了降低这两个部门的债务和杠杆率,才出现了上一波的加杠杆和现在这一波去杠杆的迫切需要。加谁的杠杆?加居民的杠杆。去谁的杠杆?去政府和国企的杠杆。这是一个规模宏大的债务转移过程,实现方式具有极强的中国特色(其他国家无法复制):
拉高房地产,政府涨地价,国企控盘--->银行定向放水,居民贷款,居民债务和杠杆率增加--->抬高房价,棚改货币化,实施政策组合拳--->天量资vsk入房地产,套入后宣布5年限售令,冻结流动性--->政府和国企完成去杠杆任务,楼市流通盘消失,百万亿资金被锁定--->货币超发问题被进一步拖缓。
为什么会提到货币超发问题呢?这又涉及到了汇率和房价的经典问题:保汇率?还是保房价?由于中国庞大的经济体量,几乎决定了失去汇率和房价中的任何一个都会导致灾难性的后果,因此,保汇率还是保房价这个经典问题就变成了既要...又要...的特色问题。

很明显,通过印钞票是无法解决这个问题的,委内瑞拉就是活生生的例子。外汇储备是汇率的基础,也是人民币的货币之锚。唯一(可能)有效的方法,只有外汇管制。然而,在2016~2017的人民币贬值、资金外逃狂潮中,中国的外汇储备从4万亿跌破了3万亿关口;随后在强力的外汇管制和一波媲美于好莱坞大片式的操作下,才扭转了人民币贬值的预期。但这一次的外汇管制,却没有像过去那样成为起死回生的灵丹妙药:虽然人民币对美元的汇率从6.9回升到了6.25,但中国的外汇储备却只回升了不到2000亿美元,而且还增加了2984亿美元的外债,再加上约3000亿美金的贸易顺差。而目前3.12万亿美元的外汇储备中,外债水平已经达到了约1.8万亿,也就是说,实际上能够用的外汇储备也就一万亿。现在,随着美元开启了新一轮的升值周期,人民币汇率也开始了七连跌,刚刚公布的4月份外汇储备比上个月减少了180亿美元。在这一波美元的升值过程中,如果人民币汇率不能维持在6.6以上,那么汇率贬值的预期将再度回来,而这个时候,我们已经没有多少弹药可以用了。

一方面,阿根廷的悲剧已经为我们敲响了警钟,而在另一方面,为了防止房价崩盘,体量庞大的房地产市场却需要百万亿级的资金来维持。然而偏偏在这时,贸易战爆发了。

于是在外贸、投资、内需这三驾马车中,只剩下内需这一条路了,这也就是为什么新闻上天天高喊内需升级的原因。然而颇具讽刺意味的是,房地产作为最大的内需,却挤占了几乎全部其他的内需消费空间,这时提升内需几乎成为了提升房地产的代名词。因此,才会出现许多怪诞的情景:博鳌论坛上央行刚刚表示了要进入紧缩周期,但却在10天之后宣布了降准;连kbc6n高层都一再强调了房住不炒,樊纲却提出了六个钱包。

如此矛盾重重的政策背后,不仅仅反映的是贸易战的残酷,还暴露出了更大的风险:经济规律的惩罚,从来不会因个人意志而消失。

(受到大家点赞和评论的鼓舞,所以决定继续写些更新的内容。)

股、债、汇、房,这四个交易市场是国家金融的命脉。

先说说这其中的房(地产市场)。对于一个健康的经济体来说,房产本来是不应该出现在其金融交易市场中的,然而,在房产被赋予了金融属性之后,便开始对整个金融系统产生了不可估量的巨大能量。而在这个叫做地球的星球上,只有三个国家尝试过这种毁天灭地的巨大能量:世界经济排名前三的经济体——美国、中国、日本。(事实上,许多小国也尝试过,但是由于经济体量太小,在世界范围内造成的影响有限。)

美国的次贷危机和日本消失的二十年,让地球人终于深刻的体会到了这种堪比灭霸的能量所具有的毁灭能力。而现在,所有人都在担心中国现在的这个超级泡沫,究竟会给世界经济带来什么样的后遗症。

股市就不说了,说多了都是泪,4000点起点论几乎就是加杠杆去库存的备忘录。

只有汇市和债市,才是真正体量极为庞大的巨无霸。

先说汇市,由于外汇市场的开放性(一种货币相对于另一种货币的汇率通常以显式的方式来表示:双方都同意的交换关系),因此外汇市场的交易量超过了其它所有金融交易的总和。比如,每天全球有价证券市场的交易量大约为3000亿美元,而每天外汇交易量将近6万亿美元。 因此一个国家或经济体的货币汇率往往决定了它的生死存亡。

值得指出的是,在这一波人民币汇率大幅波动的背后,不止是只有中国在悄悄的操作和严防外汇外逃,美国也(一直)在进行更严密更致命的战略部署:当贸易战第一回合制裁中兴成功之后,美国完全掌握了主动权,但是美元指数却不升反降,逼迫人民币被动升值。而在美国贸易代表团来北京谈判的前几天,美元指数却突然转向开始升值。这种看似反常的异常操作手法,其中的战略布局其实从2016~2017就早已开始了,汇率和贸易战的深度结合、立体作战,配合美联储的缩表、加息,每一步看似简单平淡的操作,背后其实都暗藏了杀机。

而从贸易战现有的战况来看,形势非常不容乐观。当特朗普打响贸易战第一枪时,举国沸腾,就连商务部的官员和经济学家都在强调大豆、汽车,然而,当美国发出针对中兴的制裁令之后,官员和经济学家几乎都闭嘴了,只剩下民族斗士们继续在沸腾。这时,人们才渐渐领悟到看似整天拉仇恨、天天发推特的总统,其实有着更加精明和长远的战略布局。而通过贸易战中美元指数的战略操控,逼迫人民币汇率先升后降,尤其是现在美元指数的突然提升,这其中所蕴藏的巨大杀伤力,却在举国沸腾的争辩声中被淹没和无视了,很少有人真正认识到人民币汇率后面即将到来的更大危机。

一个耐人寻味的事实是,当美国代表团来中国进行第一回合谈判之后,我们却无法得知谈判的真正内容,只有在英国《金融时报》报道了美方提供的内容之后,我们才知道真正的结果其实是谈崩了;随后,媒体才发出了美方开价太高的报道。而第一回合的谈判结果却在显现:开放进口汽车市场、开放保险市场,连一直被禁止的高通-大唐合资案,也被光速通过了。嘴上喊着绝不低头,身体却真的很诚实。

看到评论里有童鞋问到了日本债务的问题,所以还是把关于这部分的内容单独出来。

首先来说下日本的债务问题。日本的债务大部分为国内债务,持有人是日本国民,在高福利和超低通胀率的政策环境下,日本人的储蓄率比中国人还要高,因此作为日本国债的主要持有人,日本国民抛售国债挤兑日本政府的可能性极低。同时,日本拥有全世界最大的海外资产,以及日本的高科技产业和强大的工业实力,都为其债务提供了坚实的保障。

至于美国的国债,就更不用说了,全球第一的国债发行量,却几乎永远不存在债务危机的问题。事实上,在纽约曼哈顿有一个著名的债务钟,实时更新美国的公共债务总额,并显示出每个美国家庭所要负担的数额。

然而,很难想象这样的债务钟会出现在中国,因为它所造成的隐形危机可能会超过债务危机本身。应该说,中国的地方政府性债务(简称地方债)危机,是这一波去杠杆的直接导火索。早在2014年,中国的地方债就已突破24万亿,其规模已经超过了德国GDP。而现在,包括地方政府债和城投债在内,中国债市总量已达到了76.01万亿元,其中地方债务的规模已达22.22万亿。

更为严重的问题是地方政府的隐性债务。尤其是最近三年,地方政府借由PPP、政府购买服务、政府投资基金等方式形成了不少隐性债务,其体量已经超过甚至数倍于显性债务。不断快速膨胀的地方债让所有人都不寒而栗,2016年10月国务院发布《地方政府性债务风险应急处置预案》称:“地方政府对其举债的债务负有偿还责任,kbc6n实行不救助原则。”

这也是建国以来的首次kbc6n不拖底政策,没有了kbc6n的信用担保,地方政府对卖地经济的依赖进入了更严重的恶性循环。于是近年来各个省市的卖地收入不断攀升,而火爆的卖地收入与黑洞般的地方政府债务相比,却依然显得苍白无力。事实上,从2017kbc6n财政收入的角度来看,在全国36个省市(港澳台除外)中,仅仅只有6个省市是财政盈余,盈余总额约为3万亿,而其他31个省市的财政亏损总额则突破了5万亿;这其...
statusnot read reprioritisations
last reprioritisation on suggested re-reading day
started reading on finished reading on

ת£º2018Öйú·¿¼Û±©ÕÇÓëÖÐÃÀóÒ×Õ½Éî²ã¹Øϵ - ¼¼ÐgӑՓ…^ | ²ÝÁñÉç…^ - t66y.com
男人的天堂,女神的主场 点进去才明白人生有多爽!在这里,有着道不尽的爱恨情仇,在这里,有着理不清的风花雪月,深夜买醉 孤独好累,激情澎湃不再是年轻人的专利,聚焦双眼,燃烧灵魂,一个让你欲罢不能的神奇直播www.tvyc.info BBIN仅此一家【凯旋门娱乐城】12年老品牌 凯旋门娱乐城 全年无休、资金安全、六大真人视讯齐聚各国美女荷官发牌、百家乐、龙虎斗、牛牛、捕鱼大师、拥有12大电子平台、上千款老虎机游戏、1元即可博得千万奖池彩金、欢迎您的加入!www.457.com <span>声明,转载,出处:来源:知乎 https://www.zhihu.com/question/53369195作者:奶包的大叔在中国,房价问题有一个铁律:越调控越暴涨。2018刚刚进入5月,全国各地发布的调控政策数量就已经超过了115个。仅4月份一个月,全国各种房地产调控政策合计就多达33次,25个城市与部门发布调控政策,其中海南、北京、杭州等城市发布了多次房地产相关新政策。为何要如此恐慌式的密集调控呢?因为一个残酷的现实:我们正在用全世界绝无仅有的方式来对抗经济规律。这样对抗的后果是显而易见的:一旦失败,房地产的崩溃将直接导致中国泡沫经济的崩溃;一旦成功,全人类的经济学教材都将重新改写,包括《资本论》。一个无法回避的问题是:我们是如何走到这种命悬一线、无法自拔的境地的呢?因为人类经济发展的定律:所有的经济问题都可以归结为债务问题。从宏观来看,债务问题体现在国民经济的三大部门:政府、企业、居民。其中,政府和企业部门的债务和杠杆率多年来一直都居高不下,处于高危崩溃的边缘。正是为了降低这两个部门的债务和杠杆率,才出现了上一波的加杠杆和现在这一波去杠杆的迫切需要。加谁的杠杆?加居民的杠杆。去谁的杠杆?去政府和国企的杠杆。这是一个规模宏大的债务转移过程,实现方式具有极强的中国特色(其他国家无法复制):拉高房地产,政府涨地价,国企控盘--->银行定向放水,居民贷款,居民债务和杠杆率增加--->抬高房价,棚改货币化,实施政策组合拳--->天量资vsk入房地产,套入后宣布5年限售令,冻结流动性--->政府和国企完成去杠杆任务,楼市流通盘消失,百万亿资金被锁定--->货币超发问题被进一步拖缓。为什么会提到货币超发问题呢?这又涉及到了汇率和房价的经典问题:保汇率?还是保房价?由于中国庞大的经济体量,几乎决定了失去汇率和房价中的任何一个都会导致灾难性的后果,因此,保汇率还是保房价这个经典问题就变成了既要...又要...的特色问题。很明显,通过印钞票是无法解决这个问题的,委内瑞拉就是活生生的例子。外汇储备是汇率的基础,也是人民币的货币之锚。唯一(可能)有效的方法,只有外汇管制。然而,在2016~2017的人民币贬值、资金外逃狂潮中,中国的外汇储备从4万亿跌破了3万亿关口;随后在强力的外汇管制和一波媲美于好莱坞大片式的操作下,才扭转了人民币贬值的预期。但这一次的外汇管制,却没有像过去那样成为起死回生的灵丹妙药:虽然人民币对美元的汇率从6.9回升到了6.25,但中国的外汇储备却只回升了不到2000亿美元,而且还增加了2984亿美元的外债,再加上约3000亿美金的贸易顺差。而目前3.12万亿美元的外汇储备中,外债水平已经达到了约1.8万亿,也就是说,实际上能够用的外汇储备也就一万亿。现在,随着美元开启了新一轮的升值周期,人民币汇率也开始了七连跌,刚刚公布的4月份外汇储备比上个月减少了180亿美元。在这一波美元的升值过程中,如果人民币汇率不能维持在6.6以上,那么汇率贬值的预期将再度回来,而这个时候,我们已经没有多少弹药可以用了。一方面,阿根廷的悲剧已经为我们敲响了警钟,而在另一方面,为了防止房价崩盘,体量庞大的房地产市场却需要百万亿级的资金来维持。然而偏偏在这时,贸易战爆发了。于是在外贸、投资、内需这三驾马车中,只剩下内需这一条路了,这也就是为什么新闻上天天高喊内需升级的原因。然而颇具讽刺意味的是,房地产作为最大的内需,却挤占了几乎全部其他的内需消费空间,这时提升内需几乎成为了提升房地产的代名词。因此,才会出现许多怪诞的情景:博鳌论坛上央行刚刚表示了要进入紧缩周期,但却在10天之后宣布了降准;连kbc6n高层都一再强调了房住不炒,樊纲却提出了六个钱包。如此矛盾重重的政策背后,不仅仅反映的是贸易战的残酷,还暴露出了更大的风险:经济规律的惩罚,从来不会因个人意志而消失。(受到大家点赞和评论的鼓舞,所以决定继续写些更新的内容。)股、债、汇、房,这四个交易市场是国家金融的命脉。先说说这其中的房(地产市场)。对于一个健康的经济体来说,房产本来是不应该出现在其金融交易市场中的,然而,在房产被赋予了金融属性之后,便开始对整个金融系统产生了不可估量的巨大能量。而在这个叫做地球的星球上,只有三个国家尝试过这种毁天灭地的巨大能量:世界经济排名前三的经济体——美国、中国、日本。(事实上,许多小国也尝试过,但是由于经济体量太小,在世界范围内造成的影响有限。)美国的次贷危机和日本消失的二十年,让地球人终于深刻的体会到了这种堪比灭霸的能量所具有的毁灭能力。而现在,所有人都在担心中国现在的这个超级泡沫,究竟会给世界经济带来什么样的后遗症。股市就不说了,说多了都是泪,4000点起点论几乎就是加杠杆去库存的备忘录。只有汇市和债市,才是真正体量极为庞大的巨无霸。先说汇市,由于外汇市场的开放性(一种货币相对于另一种货币的汇率通常以显式的方式来表示:双方都同意的交换关系),因此外汇市场的交易量超过了其它所有金融交易的总和。比如,每天全球有价证券市场的交易量大约为3000亿美元,而每天外汇交易量将近6万亿美元。 因此一个国家或经济体的货币汇率往往决定了它的生死存亡。值得指出的是,在这一波人民币汇率大幅波动的背后,不止是只有中国在悄悄的操作和严防外汇外逃,美国也(一直)在进行更严密更致命的战略部署:当贸易战第一回合制裁中兴成功之后,美国完全掌握了主动权,但是美元指数却不升反降,逼迫人民币被动升值。而在美国贸易代表团来北京谈判的前几天,美元指数却突然转向开始升值。这种看似反常的异常操作手法,其中的战略布局其实从2016~2017就早已开始了,汇率和贸易战的深度结合、立体作战,配合美联储的缩表、加息,每一步看似简单平淡的操作,背后其实都暗藏了杀机。而从贸易战现有的战况来看,形势非常不容乐观。当特朗普打响贸易战第一枪时,举国沸腾,就连商务部的官员和经济学家都在强调大豆、汽车,然而,当美国发出针对中兴的制裁令之后,官员和经济学家几乎都闭嘴了,只剩下民族斗士们继续在沸腾。这时,人们才渐渐领悟到看似整天拉仇恨、天天发推特的总统,其实有着更加精明和长远的战略布局。而通过贸易战中美元指数的战略操控,逼迫人民币汇率先升后降,尤其是现在美元指数的突然提升,这其中所蕴藏的巨大杀伤力,却在举国沸腾的争辩声中被淹没和无视了,很少有人真正认识到人民币汇率后面即将到来的更大危机。一个耐人寻味的事实是,当美国代表团来中国进行第一回合谈判之后,我们却无法得知谈判的真正内容,只有在英国《金融时报》报道了美方提供的内容之后,我们才知道真正的结果其实是谈崩了;随后,媒体才发出了美方开价太高的报道。而第一回合的谈判结果却在显现:开放进口汽车市场、开放保险市场,连一直被禁止的高通-大唐合资案,也被光速通过了。嘴上喊着绝不低头,身体却真的很诚实。看到评论里有童鞋问到了日本债务的问题,所以还是把关于这部分的内容单独出来。首先来说下日本的债务问题。日本的债务大部分为国内债务,持有人是日本国民,在高福利和超低通胀率的政策环境下,日本人的储蓄率比中国人还要高,因此作为日本国债的主要持有人,日本国民抛售国债挤兑日本政府的可能性极低。同时,日本拥有全世界最大的海外资产,以及日本的高科技产业和强大的工业实力,都为其债务提供了坚实的保障。至于美国的国债,就更不用说了,全球第一的国债发行量,却几乎永远不存在债务危机的问题。事实上,在纽约曼哈顿有一个著名的债务钟,实时更新美国的公共债务总额,并显示出每个美国家庭所要负担的数额。然而,很难想象这样的债务钟会出现在中国,因为它所造成的隐形危机可能会超过债务危机本身。应该说,中国的地方政府性债务(简称地方债)危机,是这一波去杠杆的直接导火索。早在2014年,中国的地方债就已突破24万亿,其规模已经超过了德国GDP。而现在,包括地方政府债和城投债在内,中国债市总量已达到了76.01万亿元,其中地方债务的规模已达22.22万亿。更为严重的问题是地方政府的隐性债务。尤其是最近三年,地方政府借由PPP、政府购买服务、政府投资基金等方式形成了不少隐性债务,其体量已经超过甚至数倍于显性债务。不断快速膨胀的地方债让所有人都不寒而栗,2016年10月国务院发布《地方政府性债务风险应急处置预案》称:“地方政府对其举债的债务负有偿还责任,kbc6n实行不救助原则。”这也是建国以来的首次kbc6n不拖底政策,没有了kbc6n的信用担保,地方政府对卖地经济的依赖进入了更严重的恶性循环。于是近年来各个省市的卖地收入不断攀升,而火爆的卖地收入与黑洞般的地方政府债务相比,却依然显得苍白无力。事实上,从2017kbc6n财政收入的角度来看,在全国36个省市(港澳台除外)中,仅仅只有6个省市是财政盈余,盈余总额约为3万亿,而其他31个省市的财政亏损总额则突破了5万亿;这其中的财政缺口已高达2万亿。在长期连续疯长之后,目前的地方债规模已经达到了非常危险的22万亿之巨。2018年3月财政部在《关于做好2018年地方政府债务管理工作的通知》中设置了地方债总额天花板,2018年限额约为21万亿。4月,以新疆为代表的多个省市的PPP项目被叫停。第一枪,似乎已经打响了。还有一类负债率也颇具深意。在所有一级行业中,负债率第一的是银行,第二的是非银金融,那么负债率第三的行业呢?是房地产。截止2018年4月,房地产公司的整体负债率已达79%,创下近十三年来新高;其中近20家公司资产负债率超过85%,而万科84%的负债率也创下该公司的历史新高。其实,面对目前已经涨得连亲妈都不认识了的高房价,房地产商也很焦灼,尤其是在恒大、万达的海外投资事件之后,在杀猴王骇猴式的强监管之下,资产转移的路径被严重限制。而这些高负债率的人民币资产,面临着随时被清盘的风险。2018的房地产市场,可以说是乱象环生,很多奇观甚至可以让《资本论》中有关生产关系、生产资源配置、市场价格等内容统统失效,如果作者能穿越时空到现在,恐怕很多内容都要重写。作者在撰写资本论的过程中一定研究过荷兰郁金香的炒作事件,然而,400多年前任何一个炒成疯魔的郁金香球茎(当时的一颗名贵郁金香球茎可以换取荷兰任何地区的一栋房产,豪华装修+永久产权),恐怕都不足以让作者理解在今天的北京或上海,人们眼中的欲望和焦虑。你真的觉得可以理解这种程度的生产资源错配?2018年,限购、排队、摇号、上调贷款利率、抢人大战等等一连串看似混乱的景象,其实都是在传递着一个清晰的信号:市场禁入。这个信号来自于最高层,充满了呵护和善意:现在的房地产市场,只许进,不许出。为什么?因为目前的风险经过连续不断的积累,已经变得极不稳定和高度危险。在此,先说一个可以拍成开心麻花式喜剧片的真实“中国地产难民事件”:2018年4月22日21点,对于海南乃至整个中国二十年的楼市来说,都是一个历史性时刻:海南出台了针对全省的限购条例,号称可以秒杀之前所有调控:非本省户籍五年社保、首付七成、限售五年。瞬间,7000亿炒房资金被套。同时,海南全省的房产经纪人纷纷准备迁徙去北海和云南,史称中国地产行业难民事件。这个明明要做国际自由贸易港的“中国的佛罗里达”,却不能自由买卖房子;自由?贸易?不存在的。那确定还能叫国际自由贸易港么?用海南省长的话说就是壮士断腕,用股市专家的话说就是打消流通盘。“姐姐,今夜我不关心人类,我只想你”。海子是谁?几十年来,最让我们关心的,只有房价。而在家庭资产中房产占比已经高达70%以上的今天(一线城市占比已达85%),几乎谁都无法接受房价下跌,负资产是会吃人的,它吃过日本人、吃过香港人,但还没吃过中国大陆人。谁都希望能享受一把“睡后经济”,躺着睡着了就能让房价上涨把钱给赚了。然而,真的是这样么?到底有多少人能赚到这个钱呢?十六年来,北上广深的房价涨了20倍,但全国的GDP上涨了只有不到7倍。目前中国的楼市总市值已经超过了430万亿,是GDP的5倍(超过全球平均值的一倍),是广义货币供应量M2的2.5倍,是储备货币的15倍,是货币发行的45倍。毫不夸张的说,卖掉北上广深四大城市,差不多能买下美国。如此高的总市值,一旦有炒房人和早期的投资客套现离场,那么整个房地产立马就会崩溃:430万亿总市值,哪怕只有5%的抛售离场,那就是21.5万亿的资金流出,几乎相当于我们所有的外汇储备。这就是新闻中密集出现的字眼,严防金融系统性风险出现。如何严防呢?直接冻结流动性,只许进,不许出。在股市中,没有卖出去的价格永远都不叫真正的盈利,而房产的套现比股票更难。你确定你能够成为那5%套现离场的吗?看到评论区里有童鞋提到了房产税和如何保值投资的问题,可以说是很有代表性的问题。先说下房产税,这个一致被评为中国房价的终极大杀器,却一直处于纸上谈兵的状态。为什么呢?因为它真的是个大杀器——在杀掉高房价之前,可能连一些人都被杀了。一个可以参考的例子是韩国。从卢武铉2005 年制定了《综合不动产税法》开始,韩国政府用了5年时间、先后将三位前总统送进了监狱。尽管如此,此举依然引来了富人的愤怒,并最终导致卢武铉的*河蟹*在首都圈被弃。随后,经过李明博*河蟹*路线,直至2017 年的文在寅(曾在卢武铉手下担任青瓦台秘书室长),韩国的执政路线才重回*河蟹*。而文在寅的“8·2 房地产新政”,直接规定对第二套房的交易征税至 50%,而第三套房的交易可征至 60%。如此强悍的房产调控政策,也让韩剧中看似天价的首尔地区房价还不到北京的三分之一。同样是在东亚,同样是房产调控,但房价下降的却是别人家的房价。那么,如此血腥的房产税或调控政策就真的无法在中国实现么?非也。中国确实出台过一剑封喉的房产税,但那是在66年前——1948~1949年新中国成立的前夜,当时14%的高额房产税都无法压制房价的暴涨。于是在1952年,中国国土建设部直接出台了征收房产交易利得税200%的核武器,瞬间打爆了爆炒房价的投机商,成功平抑了房价。所以,历史教会了我们,戒急用忍,其实都是有原因的。对于大家比较关注的资产保值,这就属于仁者见仁的问题,因为在目前的大环境下,就连智者都不一定能见智。但有一个问题,是所有人都知道答案的:430万亿市值的宇宙最高房价,能毁掉的是什么。工行董事长在最近演讲时表示,2010~2017年的居民储蓄存款增长与可支配收入之比从25.4%下降至12.7%,降幅达到了一倍;而与此同时,居民家庭债务占GDP的比重已升至49%,几乎占了GDP的一半。这意味着什么呢?意味着空心化。这种空心化不仅仅体现在实体经济的倒闭潮、外资企业的撤离外逃上,而且还实实在在的发生在每个中国人的心里:这样的生活到底为何?佛性生活?低生育?就连一线城市中许多人也只能住着一千万的房子过着穷日子,号称中产杀手的学区房能够轻松掏空一个中产家庭的钱包。严重的焦虑感让许多奋斗青年都戒掉了梦想。那么,国贸双井和徐家汇这些地段的房价是如何从一百多万涨到几千万的呢?很多童鞋都曾认为房地产业属于实业和基建投资,然而,当房产被赋予了金融属性之后,一切都变了。这时,房价就成了一个不折不扣的金融游戏。5月刚刚出台的史上最严资管新政,其实可以很好的演示房地产的原罪。对于目前高达100万亿的资管通道,其主要流向只有三个:股市、债市、房地产。这时,股市第一个站出来说这个锅坚决不背,为什么?看看现在还在痛苦磨底中的A股就知道了,毫无任何量能可言,所以很明显,没有太多新增资金流入股市。债市更惨,被严打的几乎都要产生危机了,大幅流入资金根本不可能。这时,房地产低头不语,过了许久才用闪着泪光的决定说到:怪我咯?简单来说,房价上涨的金融游戏是这样:居民通过存款、购买理财将钱交给银行和机构--->银行和机构将钱贷给房产商--->房产商拿到资金后高价买地--->房产商将产品高价卖给居民--->更多的资金通过资管通道流入--->房产商拿到更多的贷款--->政府以更高的地价拍卖给房产商--->房产商以更高的房价卖给居民在这个正向循环的金融链条中,可以很清楚的看出羊毛出在谁身上。那么,这种拿你手里的钱变成混凝土砖块再翻倍高价卖给你的游戏应该叫什么呢?房产货币化?庞氏骗局?抢钱?不,这种政治不正确的说法我们不能用。但国进民退的进程却是实实在在的发生着,而且日益恶化。不要小看5月资管新规,它很可能会进一步加快房地产市场的集中化,提升龙头房地产企业的估值。然而,这种操作手法是否存在风险呢?有可能。如果在资管新规的贯彻执行过程中,不小心发生了连锁反应、刺破了债务泡沫,那么整个货币体系将面临大幅收缩。在这种极端情况下,所有的资产短期都会下跌,包括房地产。也就是说,一旦资管新规玩砸了,那么在关键时刻出来托底的大招就是债务货币化(中国版QE)了;而一旦选择这个货币化的大招,那么到时候的人民币也就不是现在的人民币了。那么,在这一波最新的操作中会存在什么风险呢?通过目前实体经济的一些真实情况,我们可以看出一些端倪。仅在4月份,全国出现违约的债券就达到了16只,涉及金额高达130亿元。比起往年,这个速度已经密集到令人害怕。与此同时,近期A股也被搅得满目疮痍,出现了大量上市公司因各种不同名目的逾期,涉及到股东股份被冻结或正在被司法拍卖。尤其是盾安集团所爆出的450亿元债务危机——因为一只融资规模只有12亿元的债券发行失败,成为压垮骆驼的最后一根稻草,导致这家中国500强企业资金链断裂。而这只是冰山一角。为什么短短几个月内,就出现了如此密集的违约潮呢?因为2018年迎来了公司债的偿还高峰期,2018年的偿债规模是2016年以前的4.3倍。悄无声息中,2018年的企业债务似乎正在排队炸雷。一场违约潮正大面积袭来,一场关乎千万人命运的大震荡开始了。不论是轻资产行业、重资产行业、文化类企业,还是制造业企业,通通都受不了这轮资本大退潮。唯一还在坚挺的,似乎只有房价了。那么,按照这个剧本和节奏,令所有人都担惊受怕的房地产可能还没来得及崩盘,实体经济就已经率先崩盘了。哥,我先崩了,你随意。如果要用一个词来形容2018年的房地产,那就是“用时间换空间”。直白一点说,就是尽量拖。这也是所有出台政策的最终指向。那么,我们是否有足够的时间来化解这个体量庞大的超级泡沫呢?是否会有外力的因素突然刺破这个泡沫呢?远在大洋另一边的特朗普,笑而不语。作为全世界唯一享受“睡后收入”的领导人,不走寻常路的特朗普可以说是颠覆了所有人的三观,在中国外交的国际政治人物模型里,根本没有这一款型号能够与之对应。特朗普甚至改变了美国驴象双方的传统:历史上向来是民主dang的总统负责搞经济赚钱,共和dang的总统则经常搞事情到处打仗花钱,然后没钱了之后继续让民主dang上台赚钱。5月10日,美国国会预算办公室表示,美国4月创下史上最大月度预算盈余2140亿美元,创1968年有纪录以来新高。同时,失业率也自2000年以来首次下降至4%以下。因此,特朗普的民调支持率连连上升也就并不意外了。由此引申出的一个问题是:自从特朗普宣布和中国的贸易战那一刻开始,那些被我们当做筹码的大豆、汽车、票仓之类的问题,根本就不是特朗普考虑的核心问题,也就是说,特朗普所操作的,是一盘更大更深远的棋局。而我们对这位整天发推特、到处拉仇恨的总统,又了解多少呢?颠覆传统的特朗普,可以说是超出了所有人的经验。伟大领袖曾告诫我们:经验主义害死人啊。在这次贸易战中,为什么中国以往买买买的经验模式失效了呢?因为美国发出了一个非常严肃信号:reciprocal。中国的翻译是“互惠”,美国的本意是“对等”。为了不让中国会错意,特朗普还特意说出了mirror(镜像)这个词,可谓用心良苦。在第一回合的谈判中,美国直接提出了开放金融服务业市场准入这样触及底线的要求。为什么呢?因为他知道你不会答应。为什么一水鹰派的美国代表团敢于提出这种要求呢?因为他知道你的软肋(远远)不止中兴一个。事实上,美国提出的要求是在关税、市场准入、知识产权等问题上的全面对等。其中,一个有趣的要求是,美国还提出了关于互联网全面开放的要求,这对于刚刚经历过审核风波的本篇答案来说,真的是极具喜剧效果。现实的来说,如果没有美国高科技和知识产权的交流与帮助,中国的产业升级和2025规划是很难实现的。那么,中国的筹码是什么呢?抛开无足轻重的不说(中国出口的绝大部分产品都有替代品,而且损失并非不可接受),我们最重要的核心筹码其实是我们自己:中国将是未来全球最大的消费升级市场。这个市场容量是美国无法割舍的。然而,最令人痛心的是,这个消费升级的市场却面临着一个刻不容缓的难题:房地产。一边是内需消费极度萎缩,急需化解房地产的超级泡沫;一边是用时间换空间,尽量拖延房地产泡沫的破灭。对这个核心问题的时机选择和把握,特朗普可谓是稳准狠;估计连美国人自己都没想到,很多中国的热血青年竟然将房价下跌的愿望寄托在一位美国总统身上。就在中国代表团准备启程去美国进行第二回合谈判之前,周五晚上的证监会却发出了暂停IPO的特殊信号,它所蕴含的深意不仅仅是对于A股连跌之后的安慰,也是对于下周第二回合谈判的一个对冲和预防。预防什么呢?预防可能(再次)谈崩了。在进一步讨论之前,我们可以看看人们对于目前高房价和地产泡沫的心态是什么。对于大部分人来说,这种心态是很复杂的,但是从整体上看,最占据上风的心态是侥幸心理。正是这种侥幸心理,让许多人都认为:房地产泡沫都被叫喊着崩溃十几次了,不还是依然在延续么?这种心理和吃了第十个包之后觉得前面九个包都是多余的心理一样,忽视了累计效应的存在。都已经涨了二十年了,也不差再涨这几年了对吧?事实上,在所有金融交易市场中,对于显而易见的风险却视而不见,往往才是崩溃前的最后一个阶段。另外一种心态是麻木。尤其集中在不断被割韭菜的年轻人身上,而他们本应该是中国2025规划中的主力军、本应该是我们在贸易谈判中最重要的底气。然而,现实是什么呢?给大家讲个笑话:教室里突然飞进来一只蜜蜂,于是老师说到,同学们考验大家的时候来了,看看你们是不是祖国的花朵。然而话音未落,蜜蜂就飞出了窗外,老师大惑不解,这时同学们齐声说到:老师,我们是祖国的韭菜。房地产的超级泡沫已经造成了严重的资源错配,掏空了产业升级和高科技发展的根基。如果连00后都变成了韭菜,那么谁还能有资格谈论什么未来呢?你的所有人生规划,都要等到买房之后,所以其实你的人生毫无规划。刚刚发布的的金融数据显示,4月居民存款大降1.32万亿元,为历史单月最大降幅。同时,市场信贷需求依然旺盛,其中新增贷款1.18万亿元、社融增长1.56万亿元,信贷增加远超居民存款增加。这还是发生在央行降准100个基点之后的数据,可见降准没有改变银行资金短缺现象,说明降准释放的资金并没有进入到银行存款。那么,钱都去哪了呢?还是房地产。这就是我们在即将进行第二回合谈判时所面临的现实:一方面,大量资金在房地产(投机)领域空转,导致天量资金沉积于钢筋水泥之中,作为资金池的房地产已经演变成了资金黑洞,央行释放再多的资金也没有用,形成了货币钝化效应。另一方面,中兴已经停摆,国内产品已全部下架,日本市场的订货无法交付。这么明显的事实连我们自己都看的出来,难道特朗普看不出来?在中国,悲剧有两种,一种是没钱,一种是没房。如果两种都没有呢?是不是社会就会抛弃你了呢?别搞笑了,社会连你是谁都不知道。这不是玩笑段子、这就是赤果果的现实。有人说在一线城市没有什么事是一套房搞不定的,如果有,那就两套。还有人说世上99%的事情都可以用钱解决,那么剩下的1%呢?需要更多的钱。作为货币超发资金池的房地产,就是这个1%。过去十年,美国的货币发行量增加了2倍,而中国的货币发行量增加了20倍。如果没有房地产这个资金池,我们的通货膨胀会不堪设想。马云说,8年之后房子如葱。这个预言全国人民都不敢相信,What?8年之后买根葱会和买套房一样贵?如果泡沫破灭,天量超发货币将导致严重的通货膨胀,马云的预言就会变成另一个版本的残酷现实。从静态经济理论的观点来看,产业升级是化解这场危机的必经之路,这一过程虽然痛苦但却无法避免,这也是kbc6n提出房住不炒的最终目的,可以说,这一愿景是理智和美好的。只有壮士断腕般的舍弃房地产经济,才有可能重生。就像上帝给你关上一道门的时候,一定会为你打开一扇窗。然而,为什么我们现在还没有越过这扇窗呢?因为不断超发的天量货币已经让中国经济成为了一个体量庞大的虚胖儿,卡住了。美国发动贸易战的攻击目标,正是这个胖子。如果说第一回合中美国使用的常规武器是中兴事件的话,那么在第二回合中,美国已经开始动用战略武器了。这个武器是什么呢?是石油。5月8日,特朗普在白宫宣布美国将退出伊朗核协议,并对伊朗实施严厉制裁。聪明的你一定看出了这其中的奥秘。是的,伊朗正是中兴事件的发源地,而这一次,特朗普更是直接提出了威胁:任何与伊朗有商业往来的银行和企业未能在规定期限内(3~6个月)解除与伊朗的业务关联,都将遭到美国制裁。很快,法国石油巨头道达尔就提出了或将撤离伊朗南帕尔斯天然气田项目,而这一总价约48亿欧元的项目是欧洲财团对伊朗最大规模的投资。这也说明了欧洲既无决心、也没有气力去阻止美国制裁伊朗。那么,欧洲人退出之后,谁将接手这个项目呢?中石油。中国是伊朗原油最大的买家,伊朗是中国第六大石油供应国,其石油出口的三分之一卖给了中国。和中石油相比,中兴对伊朗出口的那点产品只能说是九牛一毛。一名伊朗石油部门高级政府官员称,一旦接管南帕尔斯气田的运营,中石油将通过旗下的昆仑银行来进行融资和结算。这对于中美贸易战来说意味着什么呢?意味着直接对抗。最开心的是谁呢?是俄罗斯。最近美国的火力太猛,俄罗斯已经有点吃不消了,因此迫切需要将中国捆绑进来。俄罗斯国际事务委员会主席甚至表示,中国将继续购买伊朗石油,而美国不会将中国公司列入对伊制裁名单。然而,依然处于停摆中的中兴提醒着我们,这样的制裁并不是闹着玩的。事实上,在上一轮对伊制裁中,中石油就曾在油田开发的设备采购和运输上出现困难,在油田发电机、压缩机等关键精密仪器、以及物资的生产技术都被欧美所控制,这次一旦美国实施进一步的制裁,后果严重性将远超中兴事件。同时,针对欧美市场出口的一大批中国设备制造商也将受到牵连。这次的贸易战会将我们引向何处现在还无法定论,但是有一点是肯定的:它会对我们现有的经济结构产生深远的影响,正如当初中国加入WTO之后带来的巨大变化一样。新闻里经常说改革开放带来了我们今天的经济成就,改革不敢说(怕被审核),但这个开放,主要指的就是加入WTO,它让中国从一个GDP低下的纯人口大国进入到了世界最先进的经济组织俱乐部,并充分享受到了经济全球化的发展快车道。这才是我们从曾经的无足轻重变成今天的世界第二大经济体的根本原因,道理其实很简单:如果不和世界上最有钱的国家做生意,怎么会变得有钱?就连曾经被苏联忽悠到天上去的越南,也早就想明白了这个问题,改革开放的力度比我们还要迅猛(尤其是镇纸制度改革)。事实上,从中国撤资搬迁的许多外企都去了越南,甚至现在国际上已经发出了越南经济崛起的声音。那么,为什么这次的贸易战不能通过WTO来解决呢?有人说是因为美国不尊重中国,也有人说因为中国有许多加入世贸时的承诺都没有兑现。似乎都说的有道理,那么,为什么美国要采用这种方式呢?因为美国不认可中国是市场经济,欧盟也同样不认可中国是市场经济。这就是为什么特朗普说的我们没有和中国打贸易战,因为这场战争早就被之前的一帮蠢货给输掉了。也就是说,特朗普认为美国一直在中美贸易中吃亏。新闻中都在强调这次贸易战的核心是美国遏制中国2025,其实,特朗普想要的比这要多得多,而且决心异常坚决。具有讽刺意味的是,这次贸易战竟然还以一种特别的方式达到了中国去年一直高喊的口号:去产能。只不过,我们采用的方式是涨价去产能,而这次贸易战却是真正市场化的去产能。贸易战、加息、缩表、美元升值,这些无法避免的外部因素都在剧烈扰动着房地产这个大胖子。5月11日晚,央行发布《2018年第一季度中国货币政策执行报告》,其中,央行首次官方提出“宏观杠杆率趋稳”的判断。这意味着什么呢?意味着风向在变。从“去杠杆”到“稳杠杆”,一字之差,天壤之别。如果说去杠杆代表了货币政策紧缩,那么稳杠杆则意味着要小心翼翼的放水了,与之相比,之前的降准只能算是洒洒水而已。然而,房地产这个蓄水池早已蓄满了水,再放水就有溃堤的风险。那么,哪些地方还能蓄点水呢?似乎只有内需了,这意味着消费该涨价了,也就是新闻中说的扩大内需。这就基本上属于通胀的节奏了。与此叠加的,是输入性通胀,来自哪里呢?正是来自于上面所提到的特朗普石油战略:通过制裁伊朗,推高石油价格。大家不要认为油价升高只会影响开车加油的费用,石油是工业生产和食品加工等行业的重要原料,由此它会带动许许多多相关商品的涨价,这种涨价最终会反映在商品的零售价格和生活必需品上,消费者会因此多付出很多成本,这就是输入性通货膨胀。2007年,原油价格最高涨到了每桶147美元,结果就是造成了国内严重的通货膨胀,物价大幅上涨,严重影响了居民生活水平和国内经济的发展;为了对冲这一波通货膨胀,当时央行动用了6次加息和连续上调存款准备金的严厉方式。然而,也正是央行这一波操作对房地产市场也产生了立竿见影的效果:当时的房地产行业大面积停摆,房价直线向下,像上海当时的房价连续十几个月下跌,最高跌幅高达35%,新闻上基本上都用崩盘来形容当时的市场。通过这个依然历历在目的历史回顾,我们可以看出两点:一是降房价并非传说中的绝不可能,而是要看是否真心愿意去降。二是原油价格上涨的输入性通胀具有极强的杀伤力,如果这次特朗普的伊朗战略使油价上涨到每桶80美元以上,那么到时候就不止是房价下跌的问题了,童鞋们可能要先准备点别的了。 TOP Posted:2018-05-18 12:10 | 回樓主 陈鱼塘 級別: 俠客 ( 9 ) 發帖: 981 威望: 100 點 金錢: 990 USD 貢獻: 0 點 註冊: 2017-09-10 資料 短信 引用 推薦 編輯 2018世界杯官方合作伙伴_专业赌球!