A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events.[1] The term 'random variable' in its mathematical definition refers to neither randomness nor variability[2] but instead is a mathematical function in which

  • the domain is the set of possible outcomes in a sample space (e.g. the set {heads, tails}, the possible upper faces of a flipped coin); and
  • the range is a measurable space (e.g. corresponding to the domain above, the range might be the set {−1, 1} if, say, heads is mapped to −1 and tails is mapped to 1). Typically, the range of a random variable is a subset of the real numbers.
This graph shows how a random variable is a function from all possible outcomes to real values. It also shows how a random variable is used for defining probability mass functions.
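The definition above can be sketched directly in code. A minimal illustration, assuming a fair coin and the heads → −1, tails → 1 mapping described in the bullets:

```python
# A random variable is literally a function from outcomes to real values.
# Minimal sketch for the coin example above: heads -> -1, tails -> 1.
sample_space = ["heads", "tails"]

def X(outcome):
    """The random variable: maps each outcome to a real number."""
    return -1 if outcome == "heads" else 1

# For a fair coin, pushing the outcome probabilities through X gives
# the probability mass function on the range of X.
P = {omega: 0.5 for omega in sample_space}
pmf = {}
for omega in sample_space:
    pmf[X(omega)] = pmf.get(X(omega), 0.0) + P[omega]

print(pmf)  # {-1: 0.5, 1: 0.5}
```

The dictionary `pmf` is exactly the pushforward of the outcome probabilities through X, i.e. the distribution of the random variable.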

Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error.[1] However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup.

In the formal mathematical language of measure theory, a random variable is defined as a measurable function from a probability measure space (called the sample space) to a measurable space. This allows consideration of the pushforward measure, which is called the distribution of the random variable; the distribution is thus a probability measure on the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may be independent.

It is common to consider the special cases of discrete random variables and absolutely continuous random variables, corresponding to whether a random variable is valued in a countable subset or in an interval of real numbers. There are other important possibilities, especially in the theory of stochastic processes, wherein it is natural to consider random sequences or random functions. Sometimes a random variable is taken to be automatically valued in the real numbers, with more general random quantities instead being called random elements.

According to George Mackey, Pafnuty Chebyshev was the first person "to think systematically in terms of random variables".[3]

Definition

A random variable X is a measurable function X: Ω → E from a sample space Ω (a set of possible outcomes) to a measurable space E. The technical axiomatic definition requires the sample space Ω to belong to a probability triple (Ω, ℱ, P) (see the measure-theoretic definition). A random variable is often denoted by capital Roman letters such as X.[4]

The probability that X takes on a value in a measurable set S ⊆ E is written as

P(X ∈ S) = P({ω ∈ Ω | X(ω) ∈ S}).

Standard case

In many cases, X is real-valued, i.e. E = ℝ. In some contexts, the term random element (see extensions) is used to denote a random variable not of this form.

When the image (or range) of X is finite or countably infinite, the random variable is called a discrete random variable[5]: 399  and its distribution is a discrete probability distribution, i.e. can be described by a probability mass function that assigns a probability to each value in the image of X. If the image is uncountably infinite (usually an interval) then X is called a continuous random variable.[6][7] In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous.[8]

Any random variable can be described by its cumulative distribution function, which describes the probability that the random variable will be less than or equal to a certain value.
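As a concrete sketch of a cumulative distribution function, consider a fair six-sided die (a hypothetical discrete example): its CDF F(x) = P(X ≤ x) is a step function, flat between the integers 1 through 6.

```python
import math

# CDF of a fair six-sided die: F(x) = P(X <= x).
# floor(x) counts how many faces are <= x; clamp to the range [0, 6].
def die_cdf(x):
    return min(max(math.floor(x), 0), 6) / 6

print(die_cdf(3))    # 0.5
print(die_cdf(3.7))  # 0.5 (the CDF is flat between jumps)
print(die_cdf(-1))   # 0.0
print(die_cdf(10))   # 1.0
```

The function is non-decreasing, right-continuous, and rises from 0 to 1, the defining properties of any CDF.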

Extensions

The term "random variable" in statistics is traditionally limited to the real-valued case (E = ℝ). In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution.

However, the definition above is valid for any measurable space E of values. Thus one can consider random elements of other sets E, such as random Boolean values, categorical values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, and functions. One may then specifically refer to a random variable of type E, or an E-valued random variable.

This more general concept of a random element is particularly useful in disciplines such as graph theory, machine learning, natural language processing, and other fields in discrete mathematics and computer science, where one is often interested in modeling the random variation of non-numerical data structures. In some cases, it is nonetheless convenient to represent each element of E using one or more real numbers. In this case, a random element may optionally be represented as a vector of real-valued random variables (all defined on the same underlying probability space Ω, which allows the different random variables to covary). For example:

  • A random word may be represented as a random integer that serves as an index into the vocabulary of possible words. Alternatively, it can be represented as a random indicator vector, whose length equals the size of the vocabulary, where the only values of positive probability are (1, 0, 0, 0, …), (0, 1, 0, 0, …), (0, 0, 1, 0, …) and the position of the 1 indicates the word.
  • A random sentence of given length N may be represented as a vector of N random words.
  • A random graph on N given vertices may be represented as an N × N matrix of random variables, whose values specify the adjacency matrix of the random graph.
  • A random function F may be represented as a collection of random variables F(x), giving the function's values at the various points x in the function's domain. The F(x) are ordinary real-valued random variables provided that the function is real-valued. For example, a stochastic process is a random function of time, a random vector is a random function of some index set such as 1, 2, …, n, and a random field is a random function on any set (typically time, space, or a discrete set).

Distribution functions

If a random variable X defined on the probability space (Ω, ℱ, P) is given, we can ask questions like "How likely is it that the value of X is equal to 2?". This is the same as the probability of the event {ω : X(ω) = 2}, which is often written as P(X = 2) or p_X(2) for short.

Recording all these probabilities of outputs of a random variable X yields the probability distribution of X. The probability distribution "forgets" about the particular probability space used to define X and only records the probabilities of various output values of X. Such a probability distribution, if X is real-valued, can always be captured by its cumulative distribution function

F_X(x) = P(X ≤ x)

and sometimes also using a probability density function, f_X. In measure-theoretic terms, we use the random variable X to "push forward" the measure P on Ω to a measure p_X on ℝ. The measure p_X is called the "(probability) distribution of X" or the "law of X".[9] The density f_X = dp_X/dμ is the Radon–Nikodym derivative of p_X with respect to some reference measure μ on ℝ (often, this reference measure is the Lebesgue measure in the case of continuous random variables, or the counting measure in the case of discrete random variables). The underlying probability space Ω is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as correlation and dependence or independence based on a joint distribution of two or more random variables on the same probability space. In practice, one often disposes of the space Ω altogether and just puts a measure on ℝ that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article on quantile functions for fuller development.

Examples

Discrete random variable

Consider an experiment where a person is chosen at random. An example of a random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to their height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190 cm, or the probability that the height is either less than 150 or more than 200 cm.

Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum PMF(0) + PMF(2) + PMF(4) + ⋯.
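The "even number of children" computation can be sketched numerically. Assume, purely for illustration, that the number of children follows the hypothetical PMF p(k) = (1/2)^(k+1); the infinite sum is truncated once its tail is negligible:

```python
# Hypothetical PMF for the number of children: p(k) = (1/2)**(k+1), k = 0, 1, 2, ...
def pmf(k):
    return 0.5 ** (k + 1)

# P(even number of children) = p(0) + p(2) + p(4) + ...
# Truncate the infinite sum; terms beyond k = 200 are far below float precision.
p_even = sum(pmf(k) for k in range(0, 201, 2))
print(abs(p_even - 2 / 3) < 1e-12)  # True: this geometric series sums to 2/3
```

For this assumed PMF the sum is a geometric series, (1/2)/(1 − 1/4) = 2/3, which the truncated numerical sum reproduces.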

In examples such as these, the sample space is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed.

If {a_n} and {b_n} are countable sets of real numbers, with b_n > 0 and Σ_n b_n = 1, then F(x) = Σ_n b_n δ_{a_n}(x) is a discrete distribution function. Here δ_t(x) = 0 for x < t and δ_t(x) = 1 for x ≥ t. Taking for instance an enumeration of all rational numbers as {a_n}, one gets a discrete function that is not necessarily a step function (piecewise constant).

Coin toss

The possible outcomes for one coin toss can be described by the sample space Ω = {heads, tails}. We can introduce a real-valued random variable Y that models a $1 payoff for a successful bet on heads as follows: Y(ω) = 1 if ω = heads, and Y(ω) = 0 if ω = tails.

If the coin is a fair coin, Y has a probability mass function f_Y given by: f_Y(y) = 1/2 if y = 0 or y = 1, and f_Y(y) = 0 otherwise.

Dice roll

If the sample space is the set of possible numbers rolled on two dice, and the random variable of interest is the sum S of the numbers on the two dice, then S is a discrete random variable whose distribution is described by the probability mass function plotted as the height of picture columns here.

A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbers n1 and n2 from {1, 2, 3, 4, 5, 6} (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variable X given by the function that maps the pair to the sum: X((n1, n2)) = n1 + n2 and (if the dice are fair) has a probability mass function f_X given by: f_X(S) = min(S − 1, 13 − S)/36, for S ∈ {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}.
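The two-dice construction can be checked by brute-force enumeration of the 36-point sample space; a sketch, comparing against the standard closed form min(S − 1, 13 − S)/36 for the sum of two fair dice:

```python
from collections import Counter
from fractions import Fraction

# Sample space: all ordered pairs (n1, n2), each with probability 1/36.
outcomes = [(n1, n2) for n1 in range(1, 7) for n2 in range(1, 7)]

# Push the uniform measure forward through X((n1, n2)) = n1 + n2.
counts = Counter(n1 + n2 for n1, n2 in outcomes)
f_X = {s: Fraction(c, 36) for s, c in counts.items()}

# The enumeration agrees with the closed form min(s - 1, 13 - s)/36:
for s in range(2, 13):
    assert f_X[s] == Fraction(min(s - 1, 13 - s), 36)

print(f_X[7])             # 1/6
print(sum(f_X.values()))  # 1
```

Using `Fraction` keeps the arithmetic exact, so the PMF sums to exactly 1 rather than to a float approximation.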

Continuous random variable

Formally, a continuous random variable is a random variable whose cumulative distribution function is continuous everywhere.[10] There are no "gaps", which would correspond to numbers which have a finite probability of occurring. Instead, continuous random variables almost never take an exact prescribed value c (formally, P(X = c) = 0 for every c ∈ ℝ) but there is a positive probability that its value will lie in particular intervals which can be arbitrarily small. Continuous random variables usually admit probability density functions (PDF), which characterize their CDF and probability measures; such distributions are also called absolutely continuous; but some continuous distributions are singular, or mixes of an absolutely continuous part and a singular part.

An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case, X = the angle spun. Any real number has probability zero of being selected, but a positive probability can be assigned to any range of values. For example, the probability of choosing a number in [0, 180] is 1/2. Instead of speaking of a probability mass function, we say that the probability density of X is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set.

More formally, given any interval I = [a, b] ⊆ ℝ, a random variable X is called a "continuous uniform random variable" (CURV) if the probability that it takes a value in a subinterval depends only on the length of the subinterval. This implies that the probability of X falling in any subinterval [c, d] ⊆ [a, b] is proportional to the length of the subinterval, that is, if a ≤ c ≤ d ≤ b, one has

P(X ∈ [c, d]) = ((d − c)/(b − a)) · P(X ∈ [a, b]) = (d − c)/(b − a)

where the last equality results from the unitarity axiom of probability. The probability density function of a CURV X is given by the indicator function of its interval of support normalized by the interval's length: f_X(x) = 1/(b − a) if x ∈ [a, b], and 0 otherwise. Of particular interest is the uniform distribution on the unit interval [0, 1]. Samples of any desired probability distribution D can be generated by calculating the quantile function of D on a randomly generated number distributed uniformly on the unit interval. This exploits properties of cumulative distribution functions, which are a unifying framework for all random variables.
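The last point is the idea behind inverse-transform sampling. A minimal sketch, taking Exponential(1) as the desired distribution (its quantile function is −ln(1 − u)):

```python
import math
import random

def sample_exponential(rng):
    """Draw from Exp(1) by applying its quantile function to a uniform draw."""
    u = rng.random()           # uniform on [0, 1)
    return -math.log(1.0 - u)  # quantile function of the Exp(1) distribution

rng = random.Random(0)
samples = [sample_exponential(rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(abs(mean - 1.0) < 0.05)  # True: the sample mean is near E[X] = 1
```

The same recipe works for any target distribution whose quantile function can be evaluated, which is why the CDF serves as a unifying framework.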

Mixed type

A mixed random variable is a random variable whose cumulative distribution function is neither discrete nor everywhere-continuous.[10] It can be realized as a mixture of a discrete random variable and a continuous random variable; in which case the CDF will be the weighted average of the CDFs of the component variables.[10]

An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = −1; otherwise X = the value of the spinner as in the preceding example. There is a probability of 1/2 that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example.
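This coin-then-spinner variable is easy to simulate; a sketch, assuming a fair coin and a uniform spinner on [0, 360) as above:

```python
import random

def sample_mixed(rng):
    """Flip a fair coin: tails gives the atom X = -1, heads spins the spinner."""
    if rng.random() < 0.5:
        return -1.0                  # tails: discrete part (an atom at -1)
    return rng.random() * 360.0      # heads: continuous part, uniform on [0, 360)

rng = random.Random(1)
samples = [sample_mixed(rng) for _ in range(100_000)]
p_atom = sum(1 for x in samples if x == -1.0) / len(samples)
print(abs(p_atom - 0.5) < 0.02)  # True: the CDF jumps by about 1/2 at -1
```

The simulated CDF has a jump of size 1/2 at −1 (the discrete part) and rises continuously over [0, 360) (the continuous part), matching the mixture description.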

Most generally, every probability distribution on the real line is a mixture of a discrete part, a singular part, and an absolutely continuous part; see Lebesgue's decomposition theorem § Refinement. The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers).

Measure-theoretic definition

The most formal, axiomatic definition of a random variable involves measure theory. Continuous random variables are defined in terms of sets of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the Banach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a sigma-algebra to constrain the possible sets over which probabilities can be defined. Normally, a particular such sigma-algebra is used, the Borel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or countably infinite number of unions and/or intersections of such intervals.[11]

The measure-theoretic definition is as follows.

Let (Ω, ℱ, P) be a probability space and (E, ℰ) a measurable space. Then an (E, ℰ)-valued random variable is a measurable function X: Ω → E, which means that, for every subset B ∈ ℰ, its preimage is ℱ-measurable: X⁻¹(B) ∈ ℱ, where X⁻¹(B) = {ω ∈ Ω : X(ω) ∈ B}.[12] This definition enables us to measure any subset B ∈ ℰ in the target space by looking at its preimage, which by assumption is measurable.

In more intuitive terms, a member of Ω is a possible outcome, a member of ℱ is a measurable subset of possible outcomes, the function P gives the probability of each such measurable subset, E represents the set of values that the random variable can take (such as the set of real numbers), and a member of ℰ is a "well-behaved" (measurable) subset of E (those for which the probability may be determined). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability.

When E is a topological space, then the most common choice for the σ-algebra ℰ is the Borel σ-algebra B(E), which is the σ-algebra generated by the collection of all open sets in E. In such case the (E, ℰ)-valued random variable is called an E-valued random variable. Moreover, when the space E is the real line ℝ, then such a real-valued random variable is called simply a random variable.

Real-valued random variables

In this case the observation space is the set of real numbers. Recall, (Ω, ℱ, P) is the probability space. For a real observation space, the function X: Ω → ℝ is a real-valued random variable if

{ω : X(ω) ≤ r} ∈ ℱ for all r ∈ ℝ.

This definition is a special case of the above because the set {(−∞, r] : r ∈ ℝ} generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that {ω : X(ω) ≤ r} = X⁻¹((−∞, r]).

Moments

The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of expected value of a random variable, denoted E[X], and also called the first moment. In general, E[f(X)] is not equal to f(E[X]). Once the "average value" is known, one could then ask how far from this average value the values of X typically are, a question that is answered by the variance and standard deviation of a random variable. E[X] can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of X.

Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables X, find a collection {f_i} of functions such that the expectation values E[f_i(X)] fully characterise the distribution of the random variable X.

Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function f(X) = X of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a categorical random variable X that can take on the nominal values "red", "blue" or "green", the real-valued function [X = green] can be constructed; this uses the Iverson bracket, and has the value 1 if X has the value "green", 0 otherwise. Then, the expected value and other moments of this function can be determined.
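A sketch of this Iverson-bracket device, with an assumed (hypothetical) distribution over the three nominal values:

```python
# X is categorical, but [X = "green"] is a real-valued (0/1) random variable,
# so its moments exist. Hypothetical distribution, for illustration only:
p = {"red": 0.5, "blue": 0.3, "green": 0.2}

# E[[X = "green"]] = 1 * P(X = "green") + 0 * P(X != "green") = P(X = "green")
expected = sum((1 if c == "green" else 0) * prob for c, prob in p.items())
print(expected)  # 0.2
```

The first moment of the indicator is just the probability of the event, which is why indicator variables let moment machinery apply to non-numerical data.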

Functions of random variables

A new random variable Y can be defined by applying a real Borel measurable function g: ℝ → ℝ to the outcomes of a real-valued random variable X. That is, Y = g(X). The cumulative distribution function of Y is then

F_Y(y) = P(g(X) ≤ y).

If function g is invertible (i.e., h = g⁻¹ exists, where h is g's inverse function) and is either increasing or decreasing, then the previous relation can be extended to obtain

F_Y(y) = F_X(g⁻¹(y)) if g is increasing, and F_Y(y) = 1 − F_X(g⁻¹(y)) if g is decreasing.

With the same hypotheses of invertibility of g, assuming also differentiability, the relation between the probability density functions can be found by differentiating both sides of the above expression with respect to y, in order to obtain[10]

f_Y(y) = f_X(g⁻¹(y)) |d g⁻¹(y)/dy|.

If there is no invertibility of g but each y admits at most a countable number of roots (i.e., a finite, or countably infinite, number of x_i such that y = g(x_i)) then the previous relation between the probability density functions can be generalized with

f_Y(y) = Σ_i f_X(g_i⁻¹(y)) |d g_i⁻¹(y)/dy|

where x_i = g_i⁻¹(y), according to the inverse function theorem. The formulas for densities do not demand g to be increasing.

In the measure-theoretic, axiomatic approach to probability, if a random variable X on Ω and a Borel measurable function g: ℝ → ℝ are given, then Y = g(X) is also a random variable on Ω, since the composition of measurable functions is also measurable. (However, this is not necessarily true if g is Lebesgue measurable.[citation needed]) The same procedure that allowed one to go from a probability space (Ω, P) to (ℝ, dF_X) can be used to obtain the distribution of Y.

Example 1

Let X be a real-valued, continuous random variable and let Y = X².

F_Y(y) = P(X² ≤ y).

If y < 0, then P(X² ≤ y) = 0, so

F_Y(y) = 0 if y < 0.

If y ≥ 0, then

P(X² ≤ y) = P(|X| ≤ √y) = P(−√y ≤ X ≤ √y),

so

F_Y(y) = F_X(√y) − F_X(−√y) if y ≥ 0.

Example 2

Suppose X is a random variable with a cumulative distribution

F_X(x) = 1 / (1 + e^(−x))^θ

where θ > 0 is a fixed parameter. Consider the random variable Y = log(1 + e^(−X)). Then,

F_Y(y) = P(Y ≤ y) = P(log(1 + e^(−X)) ≤ y) = P(X ≥ −log(e^y − 1)).

The last expression can be calculated in terms of the cumulative distribution of X, so

F_Y(y) = 1 − F_X(−log(e^y − 1)) = 1 − 1 / (1 + e^(log(e^y − 1)))^θ = 1 − 1 / (e^y)^θ = 1 − e^(−yθ),

which is the cumulative distribution function (CDF) of an exponential distribution.
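A numerical sanity check of this example (a sketch: X is sampled by inverting its CDF F_X(x) = 1/(1 + e^(−x))^θ by hand, and the resulting Y = log(1 + e^(−X)) should behave like an Exponential(θ) variable with mean 1/θ):

```python
import math
import random

theta = 2.0
rng = random.Random(0)

def sample_X():
    """Inverse-transform sample from F_X(x) = (1 + exp(-x))**(-theta)."""
    u = rng.random() or 1e-12                     # avoid u == 0
    return -math.log(u ** (-1.0 / theta) - 1.0)   # solve F_X(x) = u for x

ys = [math.log(1.0 + math.exp(-sample_X())) for _ in range(100_000)]
mean = sum(ys) / len(ys)
print(abs(mean - 1.0 / theta) < 0.02)  # True: matches the Exp(theta) mean
```

Analytically, composing the two transformations gives Y = −log(U)/θ for U uniform on (0, 1), which is exactly the inverse-transform sampler for Exp(θ), so the simulation agrees with the derived CDF.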

Example 3

Suppose X is a random variable with a standard normal distribution, whose density is

f_X(x) = (1/√(2π)) e^(−x²/2).

Consider the random variable Y = X². We can find the density using the above formula for a change of variables:

f_Y(y) = Σ_i f_X(g_i⁻¹(y)) |d g_i⁻¹(y)/dy|.

In this case the change is not monotonic, because every value of Y has two corresponding values of X (one positive and one negative). However, because of symmetry, both halves will transform identically, i.e.,

f_Y(y) = 2 f_X(g⁻¹(y)) |d g⁻¹(y)/dy|.

The inverse transformation is

x = g⁻¹(y) = √y

and its derivative is

d g⁻¹(y)/dy = 1/(2√y).

Then,

f_Y(y) = 2 (1/√(2π)) e^(−y/2) (1/(2√y)) = (1/√(2πy)) e^(−y/2).

This is a chi-squared distribution with one degree of freedom.
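A quick Monte Carlo check of this derivation (a sketch): squaring standard normal draws and comparing P(Y ≤ 1) against a numerical integral of the derived density.

```python
import math
import random

# Monte Carlo estimate of P(Y <= 1) for Y = X**2, X standard normal.
rng = random.Random(0)
p_mc = sum(1 for _ in range(200_000) if rng.gauss(0.0, 1.0) ** 2 <= 1.0) / 200_000

# Midpoint-rule integral of the derived density f_Y(y) = e**(-y/2)/sqrt(2*pi*y)
# over (0, 1]; the midpoints avoid the integrable singularity at y = 0.
n = 10_000
p_int = sum(
    math.exp(-y / 2) / math.sqrt(2 * math.pi * y) / n
    for y in ((k + 0.5) / n for k in range(n))
)

print(abs(p_mc - p_int) < 0.02)  # True: both are near P(|X| <= 1), about 0.683
```

Both estimates approximate P(|X| ≤ 1) ≈ 0.683, the chi-squared(1) CDF at 1, which corroborates the closed-form density.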

Example 4

Suppose X is a random variable with a normal distribution, whose density is

f_X(x) = (1/(√(2π)σ)) e^(−(x − μ)²/(2σ²)).

Consider the random variable Y = X². We can find the density using the above formula for a change of variables:

f_Y(y) = Σ_i f_X(g_i⁻¹(y)) |d g_i⁻¹(y)/dy|.

In this case the change is not monotonic, because every value of Y has two corresponding values of X (one positive and one negative). Unlike the previous example, however, there is no symmetry here and we have to compute the two distinct terms:

f_Y(y) = f_X(g₁⁻¹(y)) |d g₁⁻¹(y)/dy| + f_X(g₂⁻¹(y)) |d g₂⁻¹(y)/dy|.

The inverse transformation is

x = g₁,₂⁻¹(y) = ±√y

and its derivative is

d g₁,₂⁻¹(y)/dy = ±1/(2√y).

Then,

f_Y(y) = (1/(2√y)) (1/(√(2π)σ)) (e^(−(√y − μ)²/(2σ²)) + e^(−(√y + μ)²/(2σ²))).

This is a noncentral chi-squared distribution with one degree of freedom.

Some properties

  • The probability distribution of the sum of two independent random variables is the convolution of each of their distributions.
  • Probability distributions are not a vector space—they are not closed under linear combinations, as these do not preserve non-negativity or total integral 1—but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures).
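The convolution property in the first bullet can be sketched for discrete distributions:

```python
from fractions import Fraction

def convolve(p, q):
    """PMF of the sum of two independent discrete random variables."""
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] = out.get(x + y, 0) + px * qy
    return out

die = {k: Fraction(1, 6) for k in range(1, 7)}   # one fair die
two_dice = convolve(die, die)                    # sum of two independent dice

print(two_dice[7])             # 1/6
print(sum(two_dice.values()))  # 1
```

The result matches the two-dice example earlier in the article, and the output is itself a valid PMF (non-negative, summing to 1), illustrating closure under convolution.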

Equivalence of random variables

There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution.

In increasing order of strength, the precise definition of these notions of equivalence is given below.

Equality in distribution

If the sample space is a subset of the real line, random variables X and Y are equal in distribution (denoted X =d Y) if they have the same distribution functions:

P(X ≤ x) = P(Y ≤ x) for all x.

To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equal moment generating functions have the same distribution. This provides, for example, a useful method of checking equality of certain functions of independent, identically distributed (IID) random variables. However, the moment generating function exists only for distributions that have a defined Laplace transform.

Almost sure equality

Two random variables X and Y are equal almost surely (denoted X =a.s. Y) if, and only if, the probability that they are different is zero:

P(X ≠ Y) = 0.

For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance:

d∞(X, Y) = ess sup_ω |X(ω) − Y(ω)|,

where "ess sup" represents the essential supremum in the sense of measure theory.

Equality

Finally, the two random variables X and Y are equal if they are equal as functions on their measurable space:

X(ω) = Y(ω) for all ω ∈ Ω.

This notion is typically the least useful in probability theory because in practice and in theory, the underlying measure space of the experiment is rarely explicitly characterized or even characterizable.

Practical difference between notions of equivalence

Since we rarely explicitly construct the probability space underlying a random variable, the difference between these notions of equivalence is somewhat subtle. Essentially, two random variables considered in isolation are "practically equivalent" if they are equal in distribution, but once we relate them to other random variables defined on the same probability space, then they only remain "practically equivalent" if they are equal almost surely.

For example, consider the real random variables A, B, C, and D all defined on the same probability space. Suppose that A and B are equal almost surely (A =a.s. B), but A and C are only equal in distribution (A =d C). Then A + D =a.s. B + D, but in general A + D ≠ C + D (not even in distribution). Similarly, we have that the expectation values E[A + D] = E[B + D], but in general E[A + D] ≠ E[C + D]. Therefore, two random variables that are equal in distribution (but not equal almost surely) can have different covariances with a third random variable.
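A simulation sketch of the distinction (assuming a fair coin): A = [heads] and C = [tails] have the same Bernoulli(1/2) distribution but are never equal pointwise, so sums involving them behave very differently.

```python
import random

rng = random.Random(0)
heads = [rng.random() < 0.5 for _ in range(100_000)]  # outcomes omega

A = [1 if h else 0 for h in heads]   # indicator of heads
B = list(A)                          # equal to A almost surely (here: surely)
C = [0 if h else 1 for h in heads]   # indicator of tails: same distribution as A

# A and C have (approximately) the same mean, as equality in distribution demands...
mean_A = sum(A) / len(A)
mean_C = sum(C) / len(C)
print(abs(mean_A - mean_C) < 0.02)                 # True

# ...but A + B = 2A on every outcome, while A + C = 1 on every outcome.
print(all(a + b == 2 * a for a, b in zip(A, B)))   # True
print(all(a + c == 1 for a, c in zip(A, C)))       # True
```

A + B is a random variable taking values {0, 2}, while A + C is the constant 1: equal in distribution is not enough once the variables interact on a shared probability space.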

Convergence

A significant theme in mathematical statistics consists of obtaining convergence results for certain sequences of random variables; for instance the law of large numbers and the central limit theorem.

There are various senses in which a sequence X_n of random variables can converge to a random variable X. These are explained in the article on convergence of random variables.

References

Inline citations

  1. ^ a b Blitzstein, Joe; Hwang, Jessica (2014). Introduction to Probability. CRC Press. ISBN?9781466575592.
  2. ^ Deisenroth, Marc Peter (2020). Mathematics for machine learning. A. Aldo Faisal, Cheng Soon Ong. Cambridge, United Kingdom: Cambridge University Press. ISBN?978-1-108-47004-9. OCLC?1104219401.
  3. ^ George Mackey (July 1980). "Harmonic analysis as the exploitation of symmetry – a historical survey". Bulletin of the American Mathematical Society. New Series. 3 (1).
  4. ^ "Random Variables". www.mathsisfun.com. Retrieved 2025-08-14.
  5. ^ Yates, Daniel S.; Moore, David S; Starnes, Daren S. (2003). The Practice of Statistics (2nd?ed.). New York: Freeman. ISBN?978-0-7167-4773-4. Archived from the original on 2025-08-14.
  6. ^ "Random Variables". www.stat.yale.edu. Retrieved 2025-08-14.
  7. ^ Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuha?, Hendrik Paul; Meester, Ludolf Erwin (2005). "A Modern Introduction to Probability and Statistics". Springer Texts in Statistics. doi:10.1007/1-84628-168-7. ISBN?978-1-85233-896-1. ISSN?1431-875X.
  8. ^ L. Casta?eda; V. Arunachalam & S. Dharmaraja (2012). Introduction to Probability and Stochastic Processes with Applications. Wiley. p.?67. ISBN?9781118344941.
  9. ^ Billingsley, Patrick (1995). Probability and Measure (3rd?ed.). Wiley. p.?187. ISBN?9781466575592.
  10. ^ a b c d Bertsekas, Dimitri P. (2002). Introduction to Probability. Tsitsiklis, John N., Τσιτσικλ??, Γι?ννη? Ν. Belmont, Mass.: Athena Scientific. ISBN?188652940X. OCLC?51441829.
  11. ^ Steigerwald, Douglas G. "Economics 245A – Introduction to Measure Theory" (PDF). University of California, Santa Barbara. Retrieved April 26, 2013.
  12. ^ Fristedt & Gray (1996, page 11)
