In probability theory, a probability density function (PDF), density function, or density of an absolutely continuous random variable is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample.[2][3] In other words, probability density is the probability per unit length. While the absolute likelihood for a continuous random variable to take on any particular value is zero (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample.

Box plot and probability density function of a normal distribution N(0, σ²).
Geometric visualisation of the mode, median and mean of an arbitrary unimodal probability density function.[1]

More precisely, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. This probability is given by the integral of a continuous variable's PDF over that range, where the integral is the nonnegative area under the density function between the lowest and greatest values of the range. The PDF is nonnegative everywhere, and the area under the entire curve is equal to one, such that the probability of the random variable falling within the set of possible values is 100%.

The terms probability distribution function and probability function can also denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function (CDF), or it may be a probability mass function (PMF) rather than the density. Density function itself is also used for the probability mass function, leading to further confusion.[4] In general the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.

Example

Examples of four continuous probability density functions.

Suppose bacteria of a certain species typically live 20 to 30 hours. The probability that a bacterium lives exactly 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on.

In this example, the ratio (probability of living during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour⁻¹). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour⁻¹. This quantity 2 hour⁻¹ is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour⁻¹) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour⁻¹) × (1 nanosecond) ≈ 6×10⁻¹³ (using the unit conversion 3.6×10¹² nanoseconds = 1 hour).

There is a probability density function f with f(5 hours) = 2 hour⁻¹. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window.
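This limiting ratio can be checked numerically. The sketch below assumes a hypothetical lifetime density (a normal with mean 25 hours and standard deviation 2 hours, chosen purely for illustration, not taken from the text) and shows that the ratio P(interval) / length approaches the density value as the interval shrinks:

```python
import math

# Hypothetical lifetime density: normal, mean 25 h, s.d. 2 h. This is an
# assumed illustration only; the article does not fix a distribution.
def f(t):
    return math.exp(-((t - 25.0) ** 2) / 8.0) / (2.0 * math.sqrt(2.0 * math.pi))

def interval_prob(a, b, steps=10_000):
    """P(a < T < b) by the midpoint rule."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# The ratio P(interval) / length approaches the density as the interval shrinks.
t = 24.0
for width in (0.1, 0.01, 0.001):
    print(f"width={width}: ratio = {interval_prob(t, t + width) / width:.6f}")
print(f"density f({t}) = {f(t):.6f}")
```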

Absolutely continuous univariate distributions

A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable $X$ has density $f_X$, where $f_X$ is a non-negative Lebesgue-integrable function, if:
$$\Pr[a \le X \le b] = \int_a^b f_X(x)\,dx.$$

Hence, if $F_X$ is the cumulative distribution function of $X$, then:
$$F_X(x) = \int_{-\infty}^x f_X(u)\,du,$$
and (if $f_X$ is continuous at $x$)
$$f_X(x) = \frac{d}{dx} F_X(x).$$

Intuitively, one can think of $f_X(x)\,dx$ as being the probability of $X$ falling within the infinitesimal interval $[x, x + dx]$.

Formal definition

(This definition may be extended to any probability distribution using the measure-theoretic definition of probability.)

A random variable $X$ with values in a measurable space $(\mathcal{X}, \mathcal{A})$ (usually $\mathbb{R}^n$ with the Borel sets as measurable subsets) has as probability distribution the pushforward measure $X_*P$ on $(\mathcal{X}, \mathcal{A})$: the density of $X$ with respect to a reference measure $\mu$ on $(\mathcal{X}, \mathcal{A})$ is the Radon–Nikodym derivative:
$$f = \frac{dX_*P}{d\mu}.$$

That is, $f$ is any measurable function with the property that:
$$\Pr[X \in A] = \int_{X^{-1}A} dP = \int_A f\,d\mu$$
for any measurable set $A \in \mathcal{A}$.

Discussion

In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof).

It is not possible to define a density with reference to an arbitrary measure (e.g. one can not choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost unique, meaning that any two such densities coincide almost everywhere.

Further details

Unlike a probability, a probability density function can take on values greater than one; for example, the continuous uniform distribution on the interval [0, 1/2] has probability density f(x) = 2 for 0 ≤ x ≤ 1/2 and f(x) = 0 elsewhere.
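A minimal numerical check of this example: the density value exceeds 1 on its support, yet the total area under the curve is still exactly one.

```python
# Density of the uniform distribution on [0, 1/2]: it exceeds 1,
# yet its total area is still one.
def f(x):
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

# Integrate by the midpoint rule over a range covering the support.
steps = 100_000
a, b = -1.0, 1.0
h = (b - a) / steps
area = sum(f(a + (i + 0.5) * h) for i in range(steps)) * h
print(f"max density = {max(f(x / 1000) for x in range(0, 501))}")  # 2.0 > 1
print(f"total area  = {area:.6f}")
```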

The standard normal distribution has probability density
$$f(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.$$

If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as
$$\operatorname{E}[X] = \int_{-\infty}^{\infty} x f(x)\,dx.$$

Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point.

A distribution has a density function if its cumulative distribution function F(x) is absolutely continuous.[5] In this case: F is almost everywhere differentiable, and its derivative can be used as probability density:
$$\frac{d}{dx} F(x) = f(x).$$
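This relationship can be illustrated numerically: a central difference quotient of the standard normal CDF (computed here via the error function) recovers the density.

```python
import math

# CDF of the standard normal, via the error function.
def F(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Its density, for comparison.
def f(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# A central difference quotient of F recovers f almost everywhere.
x, h = 0.7, 1e-5
numeric = (F(x + h) - F(x - h)) / (2.0 * h)
print(f"dF/dx approx: {numeric:.8f}")
print(f"f(x) exactly: {f(x):.8f}")
```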

If a probability distribution admits a density, then the probability of every one-point set {a} is zero; the same holds for finite and countable sets.

Two probability densities f and g represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero.

In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:

If dt is an infinitely small number, the probability that X is included within the interval (t, t + dt) is equal to f(t) dt, or:
$$\Pr(t < X < t + dt) = f(t)\,dt.$$

Link between discrete and continuous distributions

It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function using the Dirac delta function. (This is not possible with a probability density function in the sense defined above; it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution—that is, taking −1 or 1 for values, with probability 1/2 each. The density of probability associated with this variable is:
$$f(t) = \frac{1}{2}\big(\delta(t+1) + \delta(t-1)\big).$$

More generally, if a discrete variable can take n different values among real numbers, then the associated probability density function is:
$$f(t) = \sum_{i=1}^{n} p_i\,\delta(t - x_i),$$
where $x_1, \ldots, x_n$ are the discrete values accessible to the variable and $p_1, \ldots, p_n$ are the probabilities associated with these values.

This substantially unifies the treatment of discrete and continuous probability distributions. The above expression allows for determining statistical characteristics of such a discrete variable (such as the mean, variance, and kurtosis), starting from the formulas given for a continuous distribution of the probability.
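For instance, the mean and variance of a discrete variable follow directly from the (value, probability) pairs that weight the delta functions; the sketch below assumes a fair six-sided die purely as an example.

```python
# Moments of a discrete variable read off from the weights of its
# delta-function density; a fair six-sided die is assumed as an example.
values = [1, 2, 3, 4, 5, 6]
probs  = [1 / 6] * 6

mean = sum(p * x for x, p in zip(values, probs))
var  = sum(p * (x - mean) ** 2 for x, p in zip(values, probs))
print(f"mean     = {mean}")      # 3.5
print(f"variance = {var:.4f}")   # 35/12
```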

Families of densities

It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by $\mu$ and $\sigma^2$ respectively, giving the family of densities
$$f(x; \mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}.$$
Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of something in the domain occurring—equals 1). This normalization factor is outside the kernel of the distribution.

Since the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones.

Densities associated with multiple variables

For continuous random variables X1, ..., Xn, it is also possible to define a probability density function associated to the set as a whole, often called the joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X1, ..., Xn, the probability that a realisation of the set variables falls inside the domain D is
$$\Pr\big((X_1, \ldots, X_n) \in D\big) = \int_D f_{X_1,\ldots,X_n}(x_1, \ldots, x_n)\,dx_1 \cdots dx_n.$$

If F(x1, ..., xn) = Pr(X1 ≤ x1, ..., Xn ≤ xn) is the cumulative distribution function of the vector (X1, ..., Xn), then the joint probability density function can be computed as a partial derivative
$$f(x_1, \ldots, x_n) = \frac{\partial^n F}{\partial x_1 \cdots \partial x_n}\bigg|_{(x_1,\ldots,x_n)}.$$

Marginal densities

For i = 1, 2, ..., n, let fXi(xi) be the probability density function associated with variable Xi alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables X1, ..., Xn by integrating over all values of the other n − 1 variables:
$$f_{X_i}(x_i) = \int f(x_1, \ldots, x_n)\,dx_1 \cdots dx_{i-1}\,dx_{i+1} \cdots dx_n.$$
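A numerical illustration of marginalization, using an assumed joint density f(x, y) = x + y on the unit square (chosen for illustration; its X-marginal is x + 1/2):

```python
# Marginalizing a joint density numerically. The joint f(x, y) = x + y on
# the unit square is an assumed example; its X-marginal is x + 1/2.
def joint(x, y):
    return x + y if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 else 0.0

def marginal_x(x, steps=10_000):
    """Integrate the joint density over y by the midpoint rule."""
    h = 1.0 / steps
    return sum(joint(x, (j + 0.5) * h) for j in range(steps)) * h

for x in (0.2, 0.5, 0.8):
    print(f"f_X({x}) = {marginal_x(x):.6f}  (exact: {x + 0.5})")
```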

Independence

Continuous random variables X1, ..., Xn admitting a joint density are all independent from each other if and only if
$$f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = f_{X_1}(x_1)\cdots f_{X_n}(x_n).$$

Corollary

If the joint probability density function of a vector of n random variables can be factored into a product of n functions of one variable
$$f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = f_1(x_1)\cdots f_n(x_n)$$
(where each fi is not necessarily a density), then the n variables in the set are all independent from each other, and the marginal probability density function of each of them is given by
$$f_{X_i}(x_i) = \frac{f_i(x_i)}{\int f_i(x)\,dx}.$$

Example

This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call $\vec R$ a 2-dimensional random vector of coordinates (X, Y): the probability to obtain $\vec R$ in the quarter plane of positive x and y is
$$\Pr(X > 0, Y > 0) = \int_0^\infty \int_0^\infty f_{X,Y}(x, y)\,dx\,dy.$$
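As a numerical sketch, assume (purely for illustration) that the coordinates are independent standard normals; by symmetry the quarter-plane probability should then come out near 1/4.

```python
import math

# Joint density of two independent standard normals (an assumed example);
# by symmetry the probability of the positive quarter plane is 1/4.
def joint(x, y):
    return math.exp(-(x * x + y * y) / 2.0) / (2.0 * math.pi)

def quarter_plane_prob(cut=8.0, steps=400):
    """Midpoint-rule double integral over (0, cut) x (0, cut)."""
    h = cut / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        for j in range(steps):
            y = (j + 0.5) * h
            total += joint(x, y)
    return total * h * h

print(f"P(X > 0, Y > 0) = {quarter_plane_prob():.4f}")
```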

Function of random variables and change of variables in the probability density function

If the probability density function of a random variable (or vector) X is given as fX(x), it is possible (but often not necessary; see below) to calculate the probability density function of some variable Y = g(X). This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape fg(X) = fY using a known (for instance, uniform) random number generator.

It is tempting to think that in order to find the expected value E(g(X)), one must first find the probability density fg(X) of the new random variable Y = g(X). However, rather than computing
$$\operatorname{E}\big(g(X)\big) = \int_{-\infty}^{\infty} y f_{g(X)}(y)\,dy,$$
one may find instead
$$\operatorname{E}\big(g(X)\big) = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx.$$

The values of the two integrals are the same in all cases in which both X and g(X) actually have probability density functions. It is not necessary that g be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician.
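The equality of the two routes can be checked numerically; the sketch below assumes X uniform on (0, 1) and g(x) = x² (an illustrative choice), so that Y = X² has density 1/(2√y) and both integrals approach 1/3.

```python
import math

# E[g(X)] computed two ways for X ~ Uniform(0, 1) and g(x) = x^2 (an
# assumed example). Y = X^2 then has density f_Y(y) = 1 / (2 * sqrt(y)).
steps = 100_000
h = 1.0 / steps
mids = [(i + 0.5) * h for i in range(steps)]

# Direct route: integral of g(x) * f_X(x) dx, with f_X = 1 on (0, 1).
direct = sum(x * x for x in mids) * h

# Route through f_Y: integral of y * f_Y(y) dy.
via_y = sum(y * (0.5 / math.sqrt(y)) for y in mids) * h

print(f"E[g(X)] directly: {direct:.6f}")  # both approach 1/3
print(f"E[Y] via f_Y:     {via_y:.6f}")
```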

Scalar to scalar

Let $g: \mathbb{R} \to \mathbb{R}$ be a monotonic function; then the resulting density function is[6]
$$f_Y(y) = f_X\big(g^{-1}(y)\big) \left| \frac{d}{dy} g^{-1}(y) \right|.$$

Here g⁻¹ denotes the inverse function.

This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is,
$$\big|f_Y(y)\,dy\big| = \big|f_X(x)\,dx\big|,$$
or
$$f_Y(y) = \left|\frac{dx}{dy}\right| f_X(x) = \left|\frac{d}{dy} g^{-1}(y)\right| f_X\big(g^{-1}(y)\big).$$
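As a sketch of the monotonic case, take g(x) = exp(x) applied to a standard normal X (an assumed example); the change-of-variables formula yields the lognormal density, which still integrates to one.

```python
import math

# Monotonic map g(x) = exp(x) applied to a standard normal X (an assumed
# example): Y = exp(X) gets the lognormal density f_X(log y) * |1/y|.
def f_X(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def f_Y(y):
    return f_X(math.log(y)) / y  # |d/dy g^{-1}(y)| = |d/dy log y| = 1/y

# Sanity check: the transformed density still integrates to ~1.
steps, cut = 200_000, 50.0
h = cut / steps
area = sum(f_Y((i + 0.5) * h) for i in range(steps)) * h
print(f"integral of f_Y over (0, {cut}) = {area:.5f}")
```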

For functions that are not monotonic, the probability density function for y is
$$f_Y(y) = \sum_{k=1}^{n(y)} \left| \frac{d}{dy} g_k^{-1}(y) \right| f_X\big(g_k^{-1}(y)\big),$$
where n(y) is the number of solutions in x for the equation $g(x) = y$, and $g_1^{-1}(y), \ldots, g_{n(y)}^{-1}(y)$ are these solutions.
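The non-monotonic case can be illustrated with g(x) = x² on a standard normal X: each y > 0 has the two preimages ±√y, and summing both branches yields the chi-squared density with one degree of freedom.

```python
import math

# Non-monotonic map g(x) = x^2 on a standard normal X: each y > 0 has two
# preimages +/- sqrt(y), so the density sums both branches, giving the
# chi-squared density with one degree of freedom.
def f_X(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def f_Y(y):
    r = math.sqrt(y)
    # |d/dy (+/- sqrt(y))| = 1 / (2 sqrt(y)) for both solutions
    return (f_X(r) + f_X(-r)) / (2.0 * r)

# Sanity check: the branch-summed density integrates to ~1.
steps, cut = 200_000, 40.0
h = cut / steps
area = sum(f_Y((i + 0.5) * h) for i in range(steps)) * h
print(f"integral of f_Y = {area:.5f}")
```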

Vector to vector

Suppose x is an n-dimensional random variable with joint density f. If y = G(x), where G is a bijective, differentiable function, then y has density pY:
$$p_Y(\mathbf{y}) = f\big(G^{-1}(\mathbf{y})\big) \left| \det\!\left[ \frac{dG^{-1}(\mathbf{z})}{d\mathbf{z}} \bigg|_{\mathbf{z} = \mathbf{y}} \right] \right|,$$
with the differential regarded as the Jacobian of the inverse of G(·), evaluated at y.[7]

For example, in the 2-dimensional case x = (x1, x2), suppose the transform G is given as y1 = G1(x1, x2), y2 = G2(x1, x2) with inverses x1 = G1⁻¹(y1, y2), x2 = G2⁻¹(y1, y2). The joint distribution for y = (y1, y2) has density[8]
$$f_{Y_1,Y_2}(y_1, y_2) = f_{X_1,X_2}\big(G_1^{-1}(y_1, y_2),\, G_2^{-1}(y_1, y_2)\big) \left| \frac{\partial G_1^{-1}}{\partial y_1} \frac{\partial G_2^{-1}}{\partial y_2} - \frac{\partial G_1^{-1}}{\partial y_2} \frac{\partial G_2^{-1}}{\partial y_1} \right|.$$

Vector to scalar

Let $V: \mathbb{R}^n \to \mathbb{R}$ be a differentiable function and $X$ be a random vector taking values in $\mathbb{R}^n$, $f_X$ be the probability density function of $X$ and $\delta(\cdot)$ be the Dirac delta function. It is possible to use the formulas above to determine $f_Y$, the probability density function of $Y = V(X)$, which will be given by
$$f_Y(y) = \int_{\mathbb{R}^n} f_X(\mathbf{x})\,\delta\big(y - V(\mathbf{x})\big)\,d\mathbf{x}.$$

This result leads to the law of the unconscious statistician:
$$\operatorname{E}_Y[Y] = \int_{\mathbb{R}} y f_Y(y)\,dy = \int_{\mathbb{R}} y \int_{\mathbb{R}^n} f_X(\mathbf{x})\,\delta\big(y - V(\mathbf{x})\big)\,d\mathbf{x}\,dy = \int_{\mathbb{R}^n} V(\mathbf{x})\, f_X(\mathbf{x})\,d\mathbf{x} = \operatorname{E}_X[V(X)].$$

Proof:

Let $Z$ be a collapsed random variable with probability density function $p_Z(z) = \delta(z)$ (i.e., a constant equal to zero). Let the random vector $\tilde{X}$ and the transform $H$ be defined as
$$H(Z, X) = \begin{bmatrix} Z + V(X) \\ X \end{bmatrix} = \begin{bmatrix} Y \\ \tilde{X} \end{bmatrix}.$$

It is clear that $H$ is a bijective mapping, and the Jacobian of $H^{-1}$ is given by:
$$\frac{dH^{-1}(y, \tilde{\mathbf{x}})}{dy\,d\tilde{\mathbf{x}}} = \begin{bmatrix} 1 & -\frac{dV(\tilde{\mathbf{x}})}{d\tilde{\mathbf{x}}} \\ \mathbf{0}_{n \times 1} & \mathbf{I}_{n \times n} \end{bmatrix},$$
which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that
$$f_{Y, X}(y, \mathbf{x}) = f_X(\mathbf{x})\,\delta\big(y - V(\mathbf{x})\big),$$
which if marginalized over $\mathbf{x}$ leads to the desired probability density function.

Sums of independent random variables

The probability density function of the sum of two independent random variables U and V, each of which has a probability density function, is the convolution of their separate density functions:
$$f_{U+V}(x) = \int_{-\infty}^{\infty} f_U(y)\, f_V(x - y)\,dy = \big(f_U * f_V\big)(x).$$

It is possible to generalize the previous relation to a sum of N independent random variables, with densities U1, ..., UN:
$$f_{U_1 + \cdots + U_N}(x) = \big(f_{U_1} * \cdots * f_{U_N}\big)(x).$$

This can be derived from a two-way change of variables involving Y = U + V and Z = V, similarly to the example below for the quotient of independent random variables.
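As a numerical illustration of the convolution formula, take U and V uniform on (0, 1) (an assumed example); their sum has the triangular density on (0, 2) peaking at 1.

```python
# Convolution of two Uniform(0, 1) densities, computed numerically; the sum
# U + V has the triangular density on (0, 2) peaking at 1.
def f(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def f_sum(x, steps=20_000):
    """(f * f)(x): integrate f(y) * f(x - y) over the support of f."""
    h = 1.0 / steps
    return sum(f(x - (i + 0.5) * h) for i in range(steps)) * h

for x in (0.5, 1.0, 1.5):
    print(f"f_sum({x}) = {f_sum(x):.4f}")  # triangular: 0.5, 1.0, 0.5
```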

Products and quotients of independent random variables

Given two independent random variables U and V, each of which has a probability density function, the density of the product Y = UV and quotient Y = U/V can be computed by a change of variables.

Example: Quotient distribution

To compute the quotient Y = U/V of two independent random variables U and V, define the following transformation:
$$Y = U/V, \qquad Z = V.$$

Then, the joint density p(y,z) can be computed by a change of variables from U,V to Y,Z, and Y can be derived by marginalizing out Z from the joint density.

The inverse transformation is
$$U = YZ, \qquad V = Z.$$

The absolute value of the Jacobian matrix determinant $J(U, V \mid Y, Z)$ of this transformation is:
$$\big|J(U, V \mid Y, Z)\big| = \left| \det \begin{bmatrix} \frac{\partial u}{\partial y} & \frac{\partial u}{\partial z} \\ \frac{\partial v}{\partial y} & \frac{\partial v}{\partial z} \end{bmatrix} \right| = \left| \det \begin{bmatrix} z & y \\ 0 & 1 \end{bmatrix} \right| = |z|.$$

Thus:
$$p_{Y,Z}(y, z) = p_{U,V}(yz, z)\,|z|.$$

And the distribution of Y can be computed by marginalizing out Z:
$$p_Y(y) = \int_{-\infty}^{\infty} p_{U,V}(yz, z)\,|z|\,dz.$$

This method crucially requires that the transformation from U,V to Y,Z be bijective. The above transformation meets this because Z can be mapped directly back to V, and for a given V the quotient U/V is monotonic. This is similarly the case for the sum U + V, difference U − V and product UV.

Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables.

Example: Quotient of two standard normals

Given two standard normal variables U and V, the quotient can be computed as follows. First, the variables have the following density functions:
$$p_U(u) = \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}, \qquad p_V(v) = \frac{1}{\sqrt{2\pi}}\, e^{-v^2/2}.$$

We transform as described above:
$$Y = U/V, \qquad Z = V.$$

This leads to:
$$p_Y(y) = \int_{-\infty}^{\infty} p_U(yz)\, p_V(z)\,|z|\,dz = \int_{-\infty}^{\infty} \frac{1}{2\pi}\, e^{-(y^2+1)z^2/2}\,|z|\,dz = 2\int_{0}^{\infty} \frac{1}{2\pi}\, e^{-(y^2+1)z^2/2}\, z\,dz = \frac{1}{\pi(1+y^2)}.$$

This is the density of a standard Cauchy distribution.
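The marginalization integral above can be evaluated numerically and compared against the standard Cauchy density; this sketch just replays the worked example.

```python
import math

# Numerically evaluate the marginalization integral for the ratio of two
# standard normals and compare with the standard Cauchy density.
def phi(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def p_ratio(y, cut=12.0, steps=100_000):
    """Midpoint-rule integral of phi(y z) phi(z) |z| over (-cut, cut)."""
    h = 2.0 * cut / steps
    total = 0.0
    for i in range(steps):
        z = -cut + (i + 0.5) * h
        total += phi(y * z) * phi(z) * abs(z)
    return total * h

for y in (0.0, 1.0, 3.0):
    cauchy = 1.0 / (math.pi * (1.0 + y * y))
    print(f"y={y}: integral = {p_ratio(y):.6f}, Cauchy = {cauchy:.6f}")
```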

References

  1. ^ "AP Statistics Review - Density Curves and the Normal Distributions". Archived from the original on 2 April 2015. Retrieved 16 March 2015.
  2. ^ Grinstead, Charles M.; Snell, J. Laurie (2009). "Conditional Probability - Discrete Conditional" (PDF). Grinstead & Snell's Introduction to Probability. Orange Grove Texts. ISBN 978-1616100469.
  3. ^ "probability - Is a uniformly random number over the real line a valid distribution?". Cross Validated.
  4. ^ Ord, J.K. (1972). Families of Frequency Distributions. Griffin. ISBN 0-85264-137-0 (for example, Table 5.1 and Example 5.4)
  5. ^ Scalas, Enrico. Introduction to Probability Theory for Economists (PDF). self-published. p. 28.
  6. ^ Siegrist, Kyle (5 May 2020). "Transformations of Random Variables". LibreTexts Statistics. Retrieved 22 December 2023.
  7. ^ Devore, Jay L.; Berk, Kenneth N. (2007). Modern Mathematical Statistics with Applications. Cengage. p. 263. ISBN 978-0-534-40473-4.
  8. ^ Stirzaker, David (2003). Elementary Probability. Cambridge University Press. ISBN 978-0521534284. OCLC 851313783.
