covariance (Wikipedia English edition)

In probability theory and statistics, covariance is a measure of how much two random variables change together. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the smaller values, i.e., the variables tend to show similar behavior, the covariance is positive.〔http://mathworld.wolfram.com/Covariance.html〕 In the opposite case, when the greater values of one variable mainly correspond to the smaller values of the other, i.e., the variables tend to show opposite behavior, the covariance is negative. The sign of the covariance therefore shows the tendency of the linear relationship between the variables. The magnitude of the covariance is not easy to interpret on its own. The normalized version of the covariance, the correlation coefficient, however, indicates the strength of the linear relationship by its magnitude.
A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen as a property of the joint probability distribution, and (2) the sample covariance, which serves as an estimated value of the parameter.
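The distinction can be illustrated numerically. In the sketch below (assuming NumPy; the bivariate normal distribution and its parameters are invented for the example), the population covariance is fixed by construction, and the sample covariance computed from finite draws estimates it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population parameter: the joint (bivariate normal) distribution is
# constructed so that the true covariance of X and Y is 0.8.
true_cov = 0.8
cov_matrix = [[1.0, true_cov], [true_cov, 1.0]]
sample = rng.multivariate_normal([0.0, 0.0], cov_matrix, size=100_000)
x, y = sample[:, 0], sample[:, 1]

# Sample covariance: an estimate of the population parameter, computed
# from the data; np.cov divides by n - 1 (the unbiased convention).
sample_cov = np.cov(x, y)[0, 1]
```

For a large sample the estimate lands close to the population value; shrinking the sample size increases its variability around that value.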
== Definition ==
The covariance between two jointly distributed real-valued random variables ''X'' and ''Y'' with finite second moments is defined as〔Oxford Dictionary of Statistics, Oxford University Press, 2002, p. 104.〕
:
\operatorname{Cov}(X,Y) = \operatorname{E}\left[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\right],

where \operatorname{E}[X] is the expected value of ''X'', also known as the mean of ''X''. By using the linearity property of expectations, this can be simplified to
:
\begin{align}
\operatorname{Cov}(X,Y)
&= \operatorname{E}\left[\left(X - \operatorname{E}\left[X\right]\right)\left(Y - \operatorname{E}\left[Y\right]\right)\right] \\
&= \operatorname{E}\left[X Y - X \operatorname{E}\left[Y\right] - \operatorname{E}\left[X\right] Y + \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right]\right] \\
&= \operatorname{E}\left[X Y\right] - \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right] - \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right] + \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right] \\
&= \operatorname{E}\left[X Y\right] - \operatorname{E}\left[X\right] \operatorname{E}\left[Y\right].
\end{align}

However, when \operatorname{E}[XY] \approx \operatorname{E}[X]\operatorname{E}[Y], this last equation is prone to catastrophic cancellation when computed with floating-point arithmetic, and thus should be avoided in computer programs when the data have not been centered beforehand.〔Donald E. Knuth (1998). ''The Art of Computer Programming'', volume 2: ''Seminumerical Algorithms'', 3rd edn., p. 232. Boston: Addison-Wesley.〕 Numerically stable algorithms should be preferred in this case.
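The cancellation problem is easy to reproduce. The sketch below (assuming NumPy; the offset of 10^9 is an arbitrary choice made to force the effect) contrasts the naive one-pass formula with a two-pass computation that centers the data first:

```python
import numpy as np

rng = np.random.default_rng(1)

# Independent data shifted by a large common offset, so that
# E[XY] and E[X]E[Y] are huge and nearly equal.
offset = 1e9
x = offset + rng.standard_normal(10_000)
y = offset + rng.standard_normal(10_000)

def naive_cov(x, y):
    # One-pass formula E[XY] - E[X]E[Y]: subtracts two nearly equal
    # large numbers and suffers catastrophic cancellation.
    return np.mean(x * y) - np.mean(x) * np.mean(y)

def two_pass_cov(x, y):
    # Center the data first, then average the products: numerically stable.
    return np.mean((x - np.mean(x)) * (y - np.mean(y)))
```

With this offset the true covariance is near zero: the two-pass result stays small, while the naive difference is formed at the floating-point spacing of numbers near 10^18, so essentially no significant digits of the true value survive.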
For random vectors \mathbf{X} \in \mathbb{R}^m and \mathbf{Y} \in \mathbb{R}^n, the ''m''×''n'' cross-covariance matrix (also known as dispersion matrix or variance–covariance matrix,〔W. J. Krzanowski, ''Principles of Multivariate Analysis'', Chap. 7.1, Oxford University Press, New York, 1988〕 or simply called covariance matrix) is equal to
:
\begin{align}
\operatorname{Cov}(\mathbf{X},\mathbf{Y})
& = \operatorname{E}\left[\left(\mathbf{X} - \operatorname{E}[\mathbf{X}]\right)\left(\mathbf{Y} - \operatorname{E}[\mathbf{Y}]\right)^\mathrm{T}\right]\\
& = \operatorname{E}\left[\mathbf{X} \mathbf{Y}^\mathrm{T}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^\mathrm{T},
\end{align}

where \mathbf{m}^\mathrm{T} is the transpose of the vector (or matrix) \mathbf{m}.
The (''i'',''j'')-th element of this matrix is equal to the covariance Cov(''X''<sub>''i''</sub>, ''Y''<sub>''j''</sub>) between the ''i''-th scalar component of \mathbf{X} and the ''j''-th scalar component of \mathbf{Y}. In particular, Cov(\mathbf{Y}, \mathbf{X}) is the transpose of Cov(\mathbf{X}, \mathbf{Y}).
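As a sketch of the matrix case (assuming NumPy; the dimensions and the linear relationship between the two vectors are invented for illustration), the cross-covariance matrix can be estimated from sample rows, and the transpose identity Cov(Y, X) = Cov(X, Y)^T checked directly:

```python
import numpy as np

rng = np.random.default_rng(2)

# 50,000 observations of a 3-vector X and a correlated 2-vector Y.
n_obs = 50_000
X = rng.standard_normal((n_obs, 3))
Y = X[:, :2] + 0.5 * rng.standard_normal((n_obs, 2))

def cross_cov(A, B):
    # Cov(A, B) = E[(A - E[A])(B - E[B])^T], estimated from sample rows;
    # dividing by n - 1 gives the unbiased estimate.
    Ac = A - A.mean(axis=0)
    Bc = B - B.mean(axis=0)
    return Ac.T @ Bc / (len(A) - 1)

C_xy = cross_cov(X, Y)  # m x n matrix, here 3 x 2
C_yx = cross_cov(Y, X)  # n x m matrix, here 2 x 3
```

The (''i'',''j'') entry of `C_xy` is the estimated covariance of component ''i'' of X with component ''j'' of Y, so swapping the arguments transposes the matrix.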
For a vector \mathbf{X} = \begin{bmatrix}X_1 & X_2 & \dots & X_m\end{bmatrix}^\mathrm{T} of ''m'' jointly distributed random variables with finite second moments, its covariance matrix is defined as
: \Sigma(\mathbf{X}) = \operatorname{Cov}(\mathbf{X},\mathbf{X}).
Random variables whose covariance is zero are called uncorrelated. Similarly, random vectors whose covariance matrix is zero in every entry outside the main diagonal are called uncorrelated.
The units of measurement of the covariance Cov(''X'', ''Y'') are those of ''X'' times those of ''Y''. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)
The covariance of two discrete sets can be equivalently expressed, without directly referring to the means, as
: \operatorname{Cov}(X,Y) = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \frac{1}{2}(x_i - x_j)\cdot(y_i - y_j) = \frac{1}{n^2} \sum_i \sum_{j>i} (x_i-x_j)\cdot(y_i - y_j).
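This mean-free form can be checked against the direct divide-by-''n'' definition (a small sketch assuming NumPy; the sample data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(200)
y = 0.5 * x + rng.standard_normal(200)
n = len(x)

# Direct definition (divide-by-n convention), using the sample means.
direct = np.mean((x - x.mean()) * (y - y.mean()))

# Pairwise-difference form: the means never appear explicitly.
pairwise = sum((x[i] - x[j]) * (y[i] - y[j])
               for i in range(n) for j in range(n)) / (2 * n**2)
```

Expanding the double sum reproduces the direct formula term by term, which is why the two values agree up to floating-point rounding.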

Excerpt source: the free encyclopedia Wikipedia.
Read the full text of "covariance" on Wikipedia.



Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.