In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average. Informally, the expected value is the mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value may not even be one of the possible outcomes; it is not necessarily a value one would expect to observe in any single trial.

The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration.

The expected value of a random variable X is often denoted by E(X), E[X], or EX, with E also often stylized as 𝔼 or E.[1][2][3]

History

The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players, who have to end their game before it is properly finished.[4] This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all.

He began to discuss the problem in the famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.[5]

Dutch mathematician Christiaan Huygens also considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise "De ratiociniis in ludo aleae" (On reasoning in games of chance) on probability theory in 1657, just after visiting Paris (see Huygens (1657)). The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of the theory of probability.

In the foreword to his treatise, Huygens wrote:

It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs.

—?Edwards (2002)

In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables.[6]

Etymology

Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes:[7]

That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2.

More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:[8]

... this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.

Notations

The use of the letter E to denote "expected value" goes back to W. A. Whitworth in 1901.[9] The symbol has since become popular among English-language writers. In German, E stands for Erwartungswert, in Spanish for esperanza matemática, and in French for espérance mathématique.[10]

When "E" is used to denote "expected value", authors use a variety of stylizations: the expectation operator can be stylized as E (upright), E (italic), or ? (in blackboard bold), while a variety of bracket notations (such as E(X), E[X], and EX) are all used.

Another popular notation is μX, whereas ⟨X⟩, ⟨X⟩av, and X̄ are commonly used in physics.[11] M(X) is used in Russian-language literature.

Definition

As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language.

Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector X. It is defined component by component, as E[X]i = E[Xi]. Similarly, one may define the expected value of a random matrix X with components Xij by E[X]ij = E[Xij].
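Componentwise expectation is exactly what averaging samples along the sample axis estimates. A minimal NumPy sketch (the mean vector here is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# 100,000 draws of a random vector X in R^3 with E[X] = (1.0, -2.0, 0.5):
samples = rng.normal(loc=[1.0, -2.0, 0.5], scale=1.0, size=(100_000, 3))
print(samples.mean(axis=0))  # ≈ [1.0, -2.0, 0.5], i.e. E[X]_i = E[X_i]
```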

Random variables with finitely many outcomes

Consider a random variable X with a finite list x1, ..., xk of possible outcomes, each of which (respectively) has probability p1, ..., pk of occurring. The expectation of X is defined as[12]

E[X] = x1p1 + x2p2 + ⋯ + xkpk.

Since the probabilities must satisfy p1 + ⋯ + pk = 1, it is natural to interpret E[X] as a weighted average of the xi values, with weights given by their probabilities pi.

In the special case that all possible outcomes are equiprobable (that is, p1 = ⋯ = pk), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others.
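Computationally, the definition is a single weighted sum. A minimal Python sketch (the helper name expected_value is ours, not a standard function):

```python
def expected_value(outcomes, probs):
    """E[X] = x1*p1 + ... + xk*pk for finitely many outcomes."""
    assert abs(sum(probs) - 1.0) < 1e-12, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(outcomes, probs))

# A biased coin paying 1 with probability 0.3 and 0 otherwise:
print(expected_value([1, 0], [0.3, 0.7]))  # 0.3
```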

Examples

Figure: An illustration of the convergence of averages of rolls of a die to the expected value of 3.5 as the number of rolls (trials) grows.

  • Let X represent the outcome of a roll of a fair six-sided die. More specifically, X will be the number of pips showing on the top face of the die after the toss. The possible values for X are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of 1/6. The expectation of X is E[X] = 1·(1/6) + 2·(1/6) + 3·(1/6) + 4·(1/6) + 5·(1/6) + 6·(1/6) = 3.5. If one rolls the die n times and computes the average (arithmetic mean) of the results, then as n grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers.
  • The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable X represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38 in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be E[gain from $1 bet] = −$1·(37/38) + $35·(1/38) = −$1/19. That is, the expected value to be won from a $1 bet is −$1/19. Thus, in 190 bets, the net loss will probably be about $10. A simulation sketch of both examples follows this list.
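A minimal Monte Carlo sketch of both examples, using only Python's standard random module (exact outputs vary with the seed):

```python
import random

random.seed(0)
N = 1_000_000

# Fair die: the sample mean approaches E[X] = 3.5.
rolls = [random.randint(1, 6) for _ in range(N)]
print(sum(rolls) / N)  # ≈ 3.5

# American roulette, $1 straight-up bet: win $35 w.p. 1/38, else lose $1.
bets = [35 if random.randrange(38) == 0 else -1 for _ in range(N)]
print(sum(bets) / N)   # ≈ -1/19 ≈ -0.0526
```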

Random variables with countably infinitely many outcomes

Informally, the expectation of a random variable with a countably infinite set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that

E[X] = x1p1 + x2p2 + ⋯ = ∑i xi pi,

where x1, x2, ... are the possible outcomes of the random variable X and p1, p2, ... are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context.[13]

However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, the Riemann series theorem of mathematical analysis illustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely.

For this reason, many mathematical textbooks only consider the case that the infinite sum given above converges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands.[14] In the alternative case that the infinite sum does not converge absolutely, one says the random variable does not have finite expectation.[14]

Examples

  • Suppose xi = i and pi = k/(i·2^i) for i = 1, 2, 3, ..., where k = 1/ln 2 is the scaling factor which makes the probabilities sum to 1. Then we have E[X] = ∑i xi pi = k/2 + k/4 + k/8 + ⋯ = k = 1/ln 2 ≈ 1.44. A numerical check follows.
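A quick numerical check of this example (a sketch; the series is truncated at 60 terms, by which point the tail is negligible):

```python
import math

k = 1 / math.log(2)                      # scaling factor 1/ln 2
terms = range(1, 61)
probs = [k / (i * 2**i) for i in terms]

print(sum(probs))                        # ≈ 1.0: probabilities sum to 1
print(sum(i * p for i, p in zip(terms, probs)), k)  # both ≈ 1.4427
```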

Random variables with density

Now consider a random variable X which has a probability density function given by a function f on the real number line. This means that the probability of X taking on a value in any given open interval is given by the integral of f over that interval. The expectation of X is then given by the integral[15]

E[X] = ∫_{−∞}^{∞} x f(x) dx.

A general and mathematically precise formulation of this definition uses measure theory and Lebesgue integration, and the corresponding theory of absolutely continuous random variables is described in the next section. The density functions of many common distributions are piecewise continuous, and as such the theory is often developed in this restricted setting.[16] For such functions, it is sufficient to only consider the standard Riemann integration. Sometimes continuous random variables are defined as those corresponding to this special class of densities, although the term is used differently by various authors.

Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. Such subtleties can be seen concretely if the distribution of X is given by the Cauchy distribution Cauchy(0, π), so that f(x) = (x² + π²)^{−1}. It is straightforward to compute in this case that

∫_a^b x f(x) dx = (1/2) ln((b² + π²)/(a² + π²)).

The limit of this expression as a → −∞ and b → ∞ does not exist: if the limits are taken so that a = −b, then the limit is zero, while if the constraint 2a = −b is taken, then the limit is ln(2).

To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral converges absolutely, with E[X] left undefined otherwise.[17] However, measure-theoretic notions as given below can be used to give a systematic definition of E[X] for more general random variables X.
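The order-of-limits ambiguity is easy to see numerically. Using the closed-form antiderivative above, the sketch below evaluates the truncated integral along the two families of truncations mentioned in the text:

```python
import math

def truncated_integral(a, b):
    # ∫_a^b x/(x² + π²) dx = (1/2)·ln((b² + π²)/(a² + π²))
    return 0.5 * math.log((b**2 + math.pi**2) / (a**2 + math.pi**2))

for b in (10.0, 1e3, 1e6):
    print(truncated_integral(-b, b),      # a = −b: tends to 0
          truncated_integral(-b / 2, b))  # 2a = −b: tends to ln 2 ≈ 0.693
```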

Arbitrary real-valued random variables

All definitions of the expected value may be expressed in the language of measure theory. In general, if X is a real-valued random variable defined on a probability space (Ω, Σ, P), then the expected value of X, denoted by E[X], is defined as the Lebesgue integral[18]

E[X] = ∫_Ω X dP.

Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral of X is defined via weighted averages of approximations of X which take on finitely many values.[19] Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical to the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variable X is said to be absolutely continuous if any of the following conditions are satisfied:

  • there is a nonnegative measurable function f on the real line such that P(X ∈ A) = ∫_A f(x) dx for any Borel set A, in which the integral is Lebesgue;
  • the cumulative distribution function of X is absolutely continuous;
  • for any Borel set A of real numbers with Lebesgue measure equal to zero, the probability of X being valued in A is also equal to zero;
  • for any positive number ε there is a positive number δ such that: if A is a Borel set with Lebesgue measure less than δ, then the probability of X being valued in A is less than ε.

These conditions are all equivalent, although this is nontrivial to establish.[20] In this definition, f is called the probability density function of X (relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration,[21] combined with the law of the unconscious statistician,[22] it follows that

E[X] = ∫_{−∞}^{∞} x f(x) dx

for any absolutely continuous random variable X. The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable.

Figure: Expected value μ and median m, shown on the graph of a cumulative distribution function.

The expected value of any real-valued random variable X can also be defined on the graph of its cumulative distribution function F by a nearby equality of areas. In fact, E[X] = μ with a real number μ if and only if the two surfaces in the x-y-plane, described by

x ≤ μ, 0 ≤ y ≤ F(x)    or    x ≥ μ, F(x) ≤ y ≤ 1,

respectively, have the same finite area, i.e. if

∫_{−∞}^{μ} F(x) dx = ∫_{μ}^{∞} (1 − F(x)) dx,

and both improper Riemann integrals converge. Finally, this is equivalent to the representation

E[X] = ∫_{0}^{∞} (1 − F(x)) dx − ∫_{−∞}^{0} F(x) dx,

also with convergent integrals.[23]
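As a sanity check of the last representation, this sketch evaluates both tail integrals with a plain Riemann sum for an exponential distribution with rate 2, whose mean is 1/2 (the step size and cutoff are arbitrary choices):

```python
import math

lam = 2.0                                  # rate; the true mean is 1/lam = 0.5

def F(x):
    # CDF of the exponential distribution: F(x) = 1 - exp(-lam*x) for x >= 0
    return 1 - math.exp(-lam * x) if x >= 0 else 0.0

dx, cutoff = 1e-4, 50.0
xs = (i * dx for i in range(int(cutoff / dx)))
positive_part = sum((1 - F(x)) * dx for x in xs)  # ∫_0^∞ (1 − F(x)) dx
negative_part = 0.0                               # F vanishes on (−∞, 0)
print(positive_part - negative_part)              # ≈ 0.5
```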

Infinite expected values

Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of ±∞. This is intuitive, for example, in the case of the St. Petersburg paradox, in which one considers a random variable with possible outcomes xi = 2^i, with associated probabilities pi = 2^{−i}, for i ranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one has

E[X] = ∑_{i=1}^{∞} xi pi = 2·(1/2) + 4·(1/4) + 8·(1/8) + ⋯ = 1 + 1 + 1 + ⋯ = ∞.

It is natural to say that the expected value equals +∞.
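In simulation, the infinite expectation shows up as running sample means that keep drifting upward (roughly like the base-2 logarithm of the sample size) instead of settling down. A minimal sketch (outputs vary with the seed):

```python
import random

random.seed(1)

def petersburg_draw():
    # Payout 2^i, where i counts flips up to and including the first tail:
    # P(i) = 2^(-i), so each term of E[X] = Σ 2^i·2^(-i) contributes 1.
    i = 1
    while random.random() < 0.5:
        i += 1
    return 2**i

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, sum(petersburg_draw() for _ in range(n)) / n)  # keeps growing
```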

There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral.[19] The first fundamental observation is that, whichever of the above definitions are followed, any nonnegative random variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as +∞. The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variable X, one defines the positive and negative parts by X+ = max(X, 0) and X− = −min(X, 0). These are nonnegative random variables, and it can be directly checked that X = X+ − X−. Since E[X+] and E[X−] are both then defined as either nonnegative numbers or +∞, it is then natural to define:

E[X] = E[X+] − E[X−], provided E[X+] and E[X−] are not both infinite.

According to this definition, E[X] exists and is finite if and only if E[X+] and E[X−] are both finite. Due to the formula |X| = X+ + X−, this is the case if and only if E|X| is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations.
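Before turning to the infinite cases below, the decomposition itself can be checked directly on a variable with finitely many signed outcomes (a minimal sketch; the values and probabilities are arbitrary):

```python
values = [-3.0, -1.0, 2.0, 5.0]
probs = [0.1, 0.4, 0.3, 0.2]

E_pos = sum(max(x, 0) * p for x, p in zip(values, probs))   # E[X+] = 1.6
E_neg = sum(-min(x, 0) * p for x, p in zip(values, probs))  # E[X−] = 0.7
E_direct = sum(x * p for x, p in zip(values, probs))        # E[X]  = 0.9
print(E_pos - E_neg, E_direct)                              # both 0.9
```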

  • In the case of the St. Petersburg paradox, one has X− = 0 and so E[X] = +∞ as desired.
  • Suppose the random variable X takes values 1, −2, 3, −4, ... with respective probabilities 6π^{−2}, 6(2π)^{−2}, 6(3π)^{−2}, 6(4π)^{−2}, .... Then it follows that X+ takes value 2k−1 with probability 6((2k−1)π)^{−2} for each positive integer k, and takes value 0 with remaining probability. Similarly, X− takes value 2k with probability 6(2kπ)^{−2} for each positive integer k and takes value 0 with remaining probability. Using the definition for non-negative random variables, one can show that both E[X+] = ∞ and E[X−] = ∞ (see Harmonic series). Hence, in this case the expectation of X is undefined.
  • Similarly, the Cauchy distribution, as discussed above, has undefined expectation.

Expected values of common distributions

The following table gives the expected values of some commonly occurring probability distributions. The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references.

Distribution | Notation | Mean E(X)
Bernoulli[24] | X ~ b(1, p) | 0·(1 − p) + 1·p = p
Binomial[25] | X ~ B(n, p) | ∑_{i=0}^{n} i·C(n, i)·p^i·(1 − p)^{n−i} = np
Poisson[26] | X ~ Po(λ) | ∑_{i=0}^{∞} i·e^{−λ}·λ^i/i! = λ
Geometric[27] | X ~ Geometric(p) | ∑_{i=1}^{∞} i·p·(1 − p)^{i−1} = 1/p
Uniform[28] | X ~ U(a, b) | ∫_a^b x/(b − a) dx = (a + b)/2
Exponential[29] | X ~ exp(λ) | ∫_0^∞ x·λe^{−λx} dx = 1/λ
Normal[30] | X ~ N(μ, σ²) | ∫_{−∞}^{∞} x·e^{−(x−μ)²/(2σ²)}/√(2πσ²) dx = μ
Standard Normal[31] | X ~ N(0, 1) | ∫_{−∞}^{∞} x·e^{−x²/2}/√(2π) dx = 0
Pareto[32] | X ~ Par(α, k) | ∫_k^∞ x·αk^α·x^{−α−1} dx = αk/(α − 1) if α > 1, and ∞ if 0 < α ≤ 1
Cauchy[33] | X ~ Cauchy(x0, γ) | ∫_{−∞}^{∞} x·γ/(π((x − x0)² + γ²)) dx is undefined
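A rough empirical cross-check of several rows, using NumPy's samplers (a sketch; the parameter values are arbitrary and the sample means only approximate the listed expectations):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

print(rng.binomial(1, 0.3, N).mean())      # Bernoulli(p=0.3):   ≈ 0.3
print(rng.binomial(10, 0.3, N).mean())     # Binomial(10, 0.3):  ≈ 3.0
print(rng.poisson(4.0, N).mean())          # Poisson(λ=4):       ≈ 4.0
print(rng.geometric(0.2, N).mean())        # Geometric(p=0.2):   ≈ 5.0
print(rng.uniform(2.0, 6.0, N).mean())     # Uniform(2, 6):      ≈ 4.0
print(rng.exponential(1 / 3.0, N).mean())  # Exponential(λ=3):   ≈ 0.333
print(rng.normal(1.5, 2.0, N).mean())      # Normal(μ=1.5, σ=2): ≈ 1.5
```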

Properties

The basic properties below (and their names in bold) replicate or follow immediately from those of the Lebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. Basically, one says that an inequality like X ≥ 0 is true almost surely, when the probability measure attributes zero-mass to the complementary event {X < 0}.

  • Non-negativity: If X ≥ 0 (a.s.), then E[X] ≥ 0.
  • Linearity of expectation:[34] The expected value operator (or expectation operator) E[·] is linear in the sense that, for any random variables X and Y and a constant a,

    E[X + Y] = E[X] + E[Y],
    E[aX] = a·E[X],

    whenever the right-hand side is well-defined. By induction, this means that the expected value of the sum of any finite number of random variables is the sum of the expected values of the individual random variables, and the expected value scales linearly with a multiplicative constant. Symbolically, for N random variables Xi and constants ai (1 ≤ i ≤ N), we have E[∑_{i=1}^{N} ai Xi] = ∑_{i=1}^{N} ai E[Xi]. If we think of the set of random variables with finite expected value as forming a vector space, then the linearity of expectation implies that the expected value is a linear form on this vector space.
  • Monotonicity: If X ≤ Y (a.s.), and both E[X] and E[Y] exist, then E[X] ≤ E[Y].
    Proof follows from the linearity and the non-negativity property applied to Z = Y − X, since Z ≥ 0 (a.s.).
  • Non-degeneracy: If E[|X|] = 0, then X = 0 (a.s.).
  • If X = Y (a.s.), then E[X] = E[Y]. In other words, if X and Y are random variables that take different values with probability zero, then the expectation of X will equal the expectation of Y.
  • If X = c (a.s.) for some real number c, then E[X] = c. In particular, for a random variable X with well-defined expectation, E[E[X]] = E[X]. A well-defined expectation is a single constant, so the expectation of that constant is just the original expected value.
  • As a consequence of the formula |X| = X+ + X− as discussed above, together with the triangle inequality, it follows that for any random variable X with well-defined expectation, one has |E[X]| ≤ E[|X|].
  • Let 1A denote the indicator function of an event A; then E[1A] is given by the probability of A. This is nothing but a different way of stating the expectation of a Bernoulli random variable, as calculated in the table above.
  • Formulas in terms of CDF: If F(x) is the cumulative distribution function of a random variable X, then

    E[X] = ∫_{−∞}^{∞} x dF(x),

    where the values on both sides are well defined or not well defined simultaneously, and the integral is taken in the sense of Lebesgue–Stieltjes. As a consequence of integration by parts as applied to this representation of E[X], it can be proved that

    E[X] = ∫_{0}^{∞} (1 − F(x)) dx − ∫_{−∞}^{0} F(x) dx,

    with the integrals taken in the sense of Lebesgue.[35] As a special case, for any random variable X valued in the nonnegative integers {0, 1, 2, 3, ...}, one has E[X] = ∑_{n=0}^{∞} P(X > n), where P denotes the underlying probability measure; a numerical check of this tail-sum formula follows this list.
  • Non-multiplicativity: In general, the expected value is not multiplicative, i.e. E[XY] is not necessarily equal to E[X]·E[Y]. If X and Y are independent, then one can show that E[XY] = E[X]·E[Y]. If the random variables are dependent, then generally E[XY] ≠ E[X]·E[Y], although in special cases of dependency the equality may hold.
  • Law of the unconscious statistician: The expected value of a measurable function g of X, given that X has a probability density function f(x), is given by the inner product of f and g:[34]

    E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx.

    This formula also holds in the multidimensional case, when g is a function of several random variables, and f is their joint density.[34][36]
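The tail-sum special case is easy to verify numerically; this sketch does so for a geometric distribution with p = 0.3 (support {1, 2, ...}, so E[X] = 1/p; the sums are truncated at 400 terms, by which point the neglected tails are negligible):

```python
p = 0.3
K = 400  # arbitrary truncation point

# Direct definition: E[X] = Σ k·P(X = k) with P(X = k) = p(1-p)^(k-1).
direct = sum(k * p * (1 - p) ** (k - 1) for k in range(1, K))

# Tail-sum formula: E[X] = Σ_{n≥0} P(X > n) with P(X > n) = (1-p)^n.
tail_sum = sum((1 - p) ** n for n in range(K))

print(direct, tail_sum, 1 / p)  # all ≈ 3.3333
```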

Inequalities

Concentration inequalities control the likelihood of a random variable taking on large values. Markov's inequality is among the best-known and simplest to prove: for a nonnegative random variable X and any positive number a, it states that[37]

P(X ≥ a) ≤ E[X]/a.

If X is any random variable with finite expectation, then Markov's inequality may be applied to the random variable |X − E[X]|² to obtain Chebyshev's inequality

P(|X − E[X]| ≥ a) ≤ Var[X]/a²,

where Var is the variance.[37] These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within two standard deviations of the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. For example, in the case of an unweighted die, Chebyshev's inequality says that the probability of rolling between 1 and 6 is at least 53%; in reality, the probability is of course 100%.[38] The Kolmogorov inequality extends the Chebyshev inequality to the context of sums of random variables.[39]
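The 53% figure can be reproduced directly, as in this sketch: with mean 3.5 and variance 35/12, Chebyshev bounds the probability of a deviation of at least 2.5.

```python
faces = range(1, 7)
mu = sum(faces) / 6                          # 3.5
var = sum((x - mu) ** 2 for x in faces) / 6  # 35/12 ≈ 2.9167

a = 2.5                                      # all faces satisfy |X − 3.5| ≤ 2.5
bound = var / a**2                           # P(|X − μ| ≥ 2.5) ≤ 0.4667
print(1 - bound)                             # central event ≥ 0.5333, i.e. 53%
```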

The following three inequalities are of fundamental importance in the field of mathematical analysis and its applications to probability theory.

  • Jensen's inequality: Let f: ℝ → ℝ be a convex function and X a random variable with finite expectation. Then[40]

    f(E(X)) ≤ E(f(X)).

    Part of the assertion is that the negative part of f(X) has finite expectation, so that the right-hand side is well-defined (possibly infinite). Convexity of f can be phrased as saying that the output of the weighted average of two inputs under-estimates the same weighted average of the two outputs; Jensen's inequality extends this to the setting of completely general weighted averages, as represented by the expectation. In the special case that f(x) = |x|^{t/s} for positive numbers s < t, one obtains the Lyapunov inequality[41]

    (E|X|^s)^{1/s} ≤ (E|X|^t)^{1/t}.

    This can also be proved by the Hölder inequality.[40] In measure theory, this is particularly notable for proving the inclusion L^s ⊃ L^t of L^p spaces, in the special case of probability spaces. A numeric spot check of Jensen's and Lyapunov's inequalities follows this list.
  • Hölder's inequality: if p > 1 and q > 1 are numbers satisfying p^{−1} + q^{−1} = 1, then

    E|XY| ≤ (E|X|^p)^{1/p} (E|Y|^q)^{1/q}

    for any random variables X and Y.[40] The special case of p = q = 2 is called the Cauchy–Schwarz inequality, and is particularly well-known.[40]
  • Minkowski inequality: given any number p ≥ 1, for any random variables X and Y with E|X|^p and E|Y|^p both finite, it follows that E|X + Y|^p is also finite and[42]

    (E|X + Y|^p)^{1/p} ≤ (E|X|^p)^{1/p} + (E|Y|^p)^{1/p}.
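A numeric spot check of Jensen's and Lyapunov's inequalities on a small, arbitrarily chosen discrete distribution (a sketch):

```python
values = [0.5, 1.0, 4.0]
probs = [0.2, 0.5, 0.3]
E = lambda g: sum(g(x) * p for x, p in zip(values, probs))

# Jensen with the convex function f(x) = x²: f(E[X]) ≤ E[f(X)].
print(E(lambda x: x) ** 2, E(lambda x: x * x))  # 3.24 ≤ 5.35

# Lyapunov with s = 1 < t = 2: (E|X|)^1 ≤ (E|X|²)^(1/2).
print(E(abs), E(lambda x: x * x) ** 0.5)        # 1.8 ≤ 2.313
```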

The Hölder and Minkowski inequalities can be extended to general measure spaces, and are often given in that context. By contrast, the Jensen inequality is special to the case of probability spaces.

Expectations under convergence of random variables

In general, it is not the case that E[Xn] → E[X] even if Xn → X pointwise. Thus, one cannot interchange limits and expectation, without additional conditions on the random variables. To see this, let U be a random variable distributed uniformly on [0, 1]. For n ≥ 1, define a sequence of random variables

Xn = n·1{U ∈ (0, 1/n)},

with 1{A} being the indicator function of the event A. Then, it follows that Xn → 0 pointwise. But, E[Xn] = n·P(U ∈ (0, 1/n)) = n·(1/n) = 1 for each n. Hence,

lim_{n→∞} E[Xn] = 1 ≠ 0 = E[lim_{n→∞} Xn].

Analogously, for a general sequence of random variables {Yn : n ≥ 0}, the expected value operator is not σ-additive, i.e.

E[∑_{n=0}^{∞} Yn] ≠ ∑_{n=0}^{∞} E[Yn].

An example is easily obtained by setting Y0 = X1 and Yn = Xn+1 − Xn for n ≥ 1, where Xn is as in the previous example.
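A short simulation sketch of the first counterexample (outputs vary with the seed): each E[Xn] stays near 1, while for any fixed draw of U the value n·1{U ∈ (0, 1/n)} is eventually 0.

```python
import random

random.seed(2)
N = 1_000_000

for n in (1, 10, 100, 1000):
    # X_n = n·1{U ∈ (0, 1/n)} has mean n·(1/n) = 1 for every n.
    mean = sum(n if 0 < random.random() < 1 / n else 0 for _ in range(N)) / N
    print(n, round(mean, 3))                    # each ≈ 1.0

u = random.random()                             # one fixed sample point
print([n if u < 1 / n else 0 for n in (1, 10, 100, 1000)])  # ends in zeros
```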

A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below.

  • Monotone convergence theorem: Let {Xn : n ≥ 0} be a sequence of random variables, with 0 ≤ Xn ≤ Xn+1 (a.s.) for each n ≥ 0. Furthermore, let Xn → X pointwise. Then, the monotone convergence theorem states that lim_n E[Xn] = E[X].
    Using the monotone convergence theorem, one can show that expectation indeed satisfies countable additivity for non-negative random variables. In particular, let {Xi}_{i=0}^{∞} be non-negative random variables. It follows from the monotone convergence theorem that

    E[∑_{i=0}^{∞} Xi] = ∑_{i=0}^{∞} E[Xi].

  • Fatou's lemma: Let {Xn ≥ 0 : n ≥ 0} be a sequence of non-negative random variables. Fatou's lemma states that

    E[lim inf_n Xn] ≤ lim inf_n E[Xn].

    Corollary. Let Xn ≥ 0 with E[Xn] ≤ C for all n ≥ 0. If Xn → X (a.s.), then E[X] ≤ C.
    Proof is by observing that X = lim inf_n Xn (a.s.) and applying Fatou's lemma.
  • Dominated convergence theorem: Let {Xn : n ≥ 0} be a sequence of random variables. If Xn → X pointwise (a.s.), |Xn| ≤ Y (a.s.) for each n, and E[Y] < ∞, then, according to the dominated convergence theorem,
    • lim_n E[Xn] = E[X];
    • E|X| ≤ E[Y] < ∞;
    • lim_n E|Xn − X| = 0.
  • Uniform integrability: In some cases, the equality lim_n E[Xn] = E[lim_n Xn] holds when the sequence {Xn} is uniformly integrable.

Relationship with characteristic function

The probability density function fX of a scalar random variable X is related to its characteristic function φX by the inversion formula:

fX(x) = (1/2π) ∫_ℝ e^{−itx} φX(t) dt.

For the expected value of g(X) (where g: ℝ → ℝ is a Borel function), we can use this inversion formula to obtain

E[g(X)] = ∫_ℝ g(x) fX(x) dx = (1/2π) ∫_ℝ g(x) [∫_ℝ e^{−itx} φX(t) dt] dx.

If E[g(X)] is finite, changing the order of integration, we get, in accordance with the Fubini–Tonelli theorem,

E[g(X)] = (1/2π) ∫_ℝ G(t) φX(t) dt,

where

G(t) = ∫_ℝ g(x) e^{−itx} dx

is the Fourier transform of g(x). The expression for E[g(X)] also follows directly from the Plancherel theorem.
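A numeric spot check of this formula in a case where everything is available in closed form (a sketch): take X standard normal, so φX(t) = e^{−t²/2}, and g(x) = e^{−x²/2}, whose Fourier transform is G(t) = √(2π)·e^{−t²/2}; then E[g(X)] = 1/√2.

```python
import math

phi = lambda t: math.exp(-t * t / 2)  # characteristic function of N(0, 1)
G = lambda t: math.sqrt(2 * math.pi) * math.exp(-t * t / 2)  # transform of g

dt, T = 1e-3, 12.0                    # integration step and cutoff (arbitrary)
ts = (-T + i * dt for i in range(int(2 * T / dt)))
lhs = sum(G(t) * phi(t) for t in ts) * dt / (2 * math.pi)
print(lhs, 1 / math.sqrt(2))          # both ≈ 0.70711
```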

Uses and applications

The expectation of a random variable plays an important role in a variety of contexts.

In statistics, where one seeks estimates for unknown parameters based on available data gained from samples, the sample mean serves as an estimate for the expectation, and is itself a random variable. In such settings, the sample mean is considered to meet the desirable criterion for a "good" estimator in being unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter.

For a different example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function.

It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.

The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.

To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.

This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. P(X ∈ A) = E[1A(X)], where 1A(X) is the indicator function of the set A.
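For instance, the probability that a standard normal variable lands in A = [1, 2] can be estimated as the sample mean of the indicator, as in this sketch (output varies with the seed):

```python
import random

random.seed(3)
N = 1_000_000

# P(X ∈ [1, 2]) = E[1_A(X)] for X ~ N(0, 1), estimated by a sample mean.
hits = sum(1 for _ in range(N) if 1.0 <= random.gauss(0.0, 1.0) <= 2.0)
print(hits / N)  # ≈ Φ(2) − Φ(1) ≈ 0.1359
```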

Figure: The mass of a probability distribution is balanced at the expected value, here a Beta(α,β) distribution with expected value α/(α+β).

In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].

Expected values can also be used to compute the variance, by means of the computational formula for the variance

Var(X) = E[X²] − (E[X])².
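Both sides of this identity are easy to compute for the fair die (a sketch):

```python
faces = range(1, 7)
E_X = sum(faces) / 6                  # 3.5
E_X2 = sum(x * x for x in faces) / 6  # 91/6 ≈ 15.1667

direct = sum((x - E_X) ** 2 for x in faces) / 6  # definition of Var(X)
print(direct, E_X2 - E_X**2)          # both = 35/12 ≈ 2.9167
```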

A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator Â operating on a quantum state vector |ψ⟩ is written as

⟨Â⟩ = ⟨ψ|Â|ψ⟩.

The uncertainty in Â can be calculated by the formula

(ΔA)² = ⟨Â²⟩ − ⟨Â⟩².
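In finite dimensions the bracket is an ordinary quadratic form. A NumPy sketch with a 2×2 Hermitian operator (the Pauli-z matrix) and a normalized state, both chosen purely for illustration:

```python
import numpy as np

A = np.array([[1, 0], [0, -1]], dtype=complex)      # Pauli-z (Hermitian)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # normalized state |ψ⟩

exp_A = np.vdot(psi, A @ psi).real                  # ⟨ψ|A|ψ⟩ = 0 here
exp_A2 = np.vdot(psi, A @ A @ psi).real             # ⟨ψ|A²|ψ⟩ = 1 here
print(exp_A, np.sqrt(exp_A2 - exp_A**2))            # expectation 0, ΔA = 1
```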

References

  1. ^ "Expectation | Mean | Average". www.probabilitycourse.com. Retrieved 2025-08-14.
  2. ^ Hansen, Bruce. "Probability and Statistics for Economists" (PDF). Archived from the original (PDF) on 2025-08-14. Retrieved 2025-08-14.
  3. ^ Wasserman, Larry (December 2010). All of Statistics: a concise course in statistical inference. Springer texts in statistics. p. 47. ISBN 9781441923226.
  4. ^ Hald, Anders (1990). History of Probability and Statistics and Their Applications before 1750. Wiley Series in Probability and Statistics. doi:10.1002/0471725161. ISBN 9780471725169.
  5. ^ Ore, Oystein (1960). "Pascal and the Invention of Probability Theory". The American Mathematical Monthly. 67 (5): 409–419. doi:10.2307/2309286. JSTOR 2309286.
  6. ^ Mackey, George (July 1980). "Harmonic analysis as the exploitation of symmetry – a historical survey". Bulletin of the American Mathematical Society. New Series. 3 (1): 549.
  7. ^ Huygens, Christian. "The Value of Chances in Games of Fortune. English Translation" (PDF).
  8. ^ Laplace, Pierre Simon, marquis de (1952) [1951]. A Philosophical Essay on Probabilities. Dover Publications. OCLC 475539.
  9. ^ Whitworth, W.A. (1901) Choice and Chance with One Thousand Exercises. Fifth edition. Deighton Bell, Cambridge. [Reprinted by Hafner Publishing Co., New York, 1959.]
  10. ^ "Earliest uses of symbols in probability and statistics".
  11. ^ Feller 1968, p. 221.
  12. ^ Billingsley 1995, p. 76.
  13. ^ Ross 2019, Section 2.4.1.
  14. ^ a b Feller 1968, Section IX.2.
  15. ^ Papoulis & Pillai 2002, Section 5-3; Ross 2019, Section 2.4.2.
  16. ^ Feller 1971, Section I.2.
  17. ^ Feller 1971, p. 5.
  18. ^ Billingsley 1995, p. 273.
  19. ^ a b Billingsley 1995, Section 15.
  20. ^ Billingsley 1995, Theorems 31.7 and 31.8 and p. 422.
  21. ^ Billingsley 1995, Theorem 16.13.
  22. ^ Billingsley 1995, Theorem 16.11.
  23. ^ Uhl, Roland (2023). Charakterisierung des Erwartungswertes am Graphen der Verteilungsfunktion [Characterization of the expected value on the graph of the cumulative distribution function] (PDF). Technische Hochschule Brandenburg. pp. 2–4. doi:10.25933/opus4-2986. Archived from the original on 2025-08-14.
  24. ^ Casella & Berger 2001, p. 89; Ross 2019, Example 2.16.
  25. ^ Casella & Berger 2001, Example 2.2.3; Ross 2019, Example 2.17.
  26. ^ Billingsley 1995, Example 21.4; Casella & Berger 2001, p. 92; Ross 2019, Example 2.19.
  27. ^ Casella & Berger 2001, p. 97; Ross 2019, Example 2.18.
  28. ^ Casella & Berger 2001, p. 99; Ross 2019, Example 2.20.
  29. ^ Billingsley 1995, Example 21.3; Casella & Berger 2001, Example 2.2.2; Ross 2019, Example 2.21.
  30. ^ Casella & Berger 2001, p. 103; Ross 2019, Example 2.22.
  31. ^ Billingsley 1995, Example 21.1; Casella & Berger 2001, p. 103.
  32. ^ Johnson, Kotz & Balakrishnan 1994, Chapter 20.
  33. ^ Feller 1971, Section II.4.
  34. ^ a b c Weisstein, Eric W. "Expectation Value". mathworld.wolfram.com. Retrieved 2025-08-14.
  35. ^ Feller 1971, Section V.6.
  36. ^ Papoulis & Pillai 2002, Section 6-4.
  37. ^ a b Feller 1968, Section IX.6; Feller 1971, Section V.7; Papoulis & Pillai 2002, Section 5-4; Ross 2019, Section 2.8.
  38. ^ Feller 1968, Section IX.6.
  39. ^ Feller 1968, Section IX.7.
  40. ^ a b c d Feller 1971, Section V.8.
  41. ^ Billingsley 1995, pp. 81, 277.
  42. ^ Billingsley 1995, Section 19.

Bibliography

阿司匹林肠溶片什么时候吃 梦见染头发是什么意思 宫内妊娠是什么意思 素的部首是什么 古筝是什么乐器
芒种是什么时候 rr医学上什么意思 什么是脑死亡 做梦梦到老婆出轨是什么意思 黄加黑变成什么颜色
香港的别称是什么 诠释的意思是什么 法学是干什么的 高山茶属于什么茶 失信是什么意思
蜂蜜有什么作用与功效 感触什么意思 什么叫书签 紫皮大蒜和白皮大蒜有什么区别 捂脸表情什么意思
小家碧玉是什么生肖hcv9jop1ns5r.cn 脓是什么hcv8jop0ns4r.cn 多出汗有什么好处hcv8jop8ns9r.cn 抹茶是什么茶叶做的hcv7jop5ns6r.cn 下午六点多是什么时辰hcv9jop1ns6r.cn
结节灶是什么意思啊hcv7jop6ns0r.cn 为什么月经前乳房胀痛hcv8jop5ns1r.cn 不明觉厉是什么意思hcv8jop3ns4r.cn apc是什么hcv8jop6ns5r.cn 宝宝睡眠不好是什么原因hcv7jop9ns5r.cn
感知力是什么意思hcv8jop7ns3r.cn 雏形是什么意思hcv9jop0ns9r.cn 做梦梦见钓鱼是什么意思hcv8jop4ns7r.cn 梦见走亲戚是什么意思hcv8jop0ns2r.cn 鸽子和什么炖气血双补hcv9jop5ns0r.cn
为什么脖子上会长痘痘hcv8jop1ns1r.cn 处女座的幸运数字是什么hcv8jop0ns5r.cn 白炽灯是什么灯hcv8jop8ns9r.cn 骨加客读什么hcv8jop9ns9r.cn 流黄鼻涕是什么原因hcv7jop6ns9r.cn
百度