Bias of an estimator

In statistics, the bias of an estimator (or bias function) is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. In statistics, "bias" is an objective property of an estimator. Bias is a distinct concept from consistency: consistent estimators converge in probability to the true value of the parameter, but may be biased or unbiased (see bias versus consistency for more).

All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because a biased estimator may be unbiased with respect to different measures of central tendency; because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful.

Bias can also be measured with respect to the median, rather than the mean (expected value), in which case one distinguishes median-unbiased from the usual mean-unbiasedness property. Mean-unbiasedness is not preserved under non-linear transformations, though median-unbiasedness is (see § Effect of transformations); for example, the sample variance is a biased estimator for the population variance. These are all illustrated below.

An unbiased estimator for a parameter need not always exist. For example, there is no unbiased estimator for the reciprocal of the parameter of a binomial random variable.[1]

Definition

Suppose we have a statistical model, parameterized by a real number θ, giving rise to a probability distribution for observed data, $P_\theta(x) = P(x \mid \theta)$, and a statistic $\hat\theta$ which serves as an estimator of θ based on any observed data $x$. That is, we assume that our data follows some unknown distribution $P(x \mid \theta)$ (where θ is a fixed, unknown constant that is part of this distribution), and then we construct some estimator $\hat\theta$ that maps observed data to values that we hope are close to θ. The bias of $\hat\theta$ relative to θ is defined as[2]

$$\operatorname{Bias}(\hat\theta, \theta) = \operatorname{Bias}_\theta\bigl[\,\hat\theta\,\bigr] = \operatorname{E}_{x \mid \theta}\bigl[\,\hat\theta\,\bigr] - \theta = \operatorname{E}_{x \mid \theta}\bigl[\,\hat\theta - \theta\,\bigr],$$

where $\operatorname{E}_{x \mid \theta}$ denotes expected value over the distribution $P(x \mid \theta)$ (i.e., averaging over all possible observations $x$). The second equation follows since θ is measurable with respect to the conditional distribution $P(x \mid \theta)$.

An estimator is said to be unbiased if its bias is zero for all values of the parameter θ, or equivalently, if the expected value of the estimator matches that of the parameter.[3] Unbiasedness is not guaranteed to carry over under transformations: for example, if $\hat\theta$ is an unbiased estimator for parameter θ, it is not guaranteed in general that $g(\hat\theta)$ is an unbiased estimator for $g(\theta)$, unless $g$ is a linear function.[4]

In a simulation experiment concerning the properties of an estimator, the bias of the estimator may be assessed using the mean signed difference.
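
As a concrete illustration, the following sketch assesses bias by simulation via the mean signed difference. It is a minimal example, not a prescribed procedure; the population variance σ² = 4, the sample size n = 10, and the choice of estimator (the uncorrected sample variance) are arbitrary values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a N(0, sigma^2) population, estimated with the
# uncorrected sample variance S^2 (divide by n), assessed over many
# simulated samples via the mean signed difference.
sigma2_true = 4.0
n = 10
n_sims = 200_000

samples = rng.normal(0.0, np.sqrt(sigma2_true), size=(n_sims, n))
s2_uncorrected = samples.var(axis=1, ddof=0)  # divide by n

# Mean signed difference: the average of (estimate - true value).
mean_signed_diff = np.mean(s2_uncorrected - sigma2_true)

print(f"empirical bias   ≈ {mean_signed_diff:.4f}")
print(f"theoretical bias = -sigma^2/n = {-sigma2_true / n:.4f}")
```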

Examples

Sample variance

The sample variance of a random variable demonstrates two aspects of estimator bias: first, the naive estimator is biased, which can be corrected by a scale factor; second, the unbiased estimator is not optimal in terms of mean squared error (MSE), which can be minimized by using a different scale factor, resulting in a biased estimator with lower MSE than the unbiased estimator. Concretely, the naive estimator sums the squared deviations and divides by n, which is biased. Dividing instead by n − 1 yields an unbiased estimator. Conversely, MSE can be minimized by dividing by a different number (depending on distribution), but this results in a biased estimator. This number is always larger than n − 1, so this is known as a shrinkage estimator, as it "shrinks" the unbiased estimator towards zero; for the normal distribution the optimal value is n + 1.
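
A minimal simulation makes the comparison concrete before the derivation below. It is a sketch under assumed values (a standard normal population, n = 8), contrasting the three divisors n, n − 1, and n + 1 on the same data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch: compare three scale factors for the sum of squared deviations
# on normal data: 1/n (biased), 1/(n-1) (unbiased), 1/(n+1) (minimum MSE
# for the normal distribution).
sigma2, n, n_sims = 1.0, 8, 500_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(n_sims, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

for divisor in (n, n - 1, n + 1):
    est = ss / divisor
    bias = est.mean() - sigma2
    mse = ((est - sigma2) ** 2).mean()
    print(f"divide by {divisor:2d}: bias ≈ {bias:+.4f}, MSE ≈ {mse:.4f}")
```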

Suppose $X_1, \ldots, X_n$ are independent and identically distributed (i.i.d.) random variables with expectation μ and variance σ². If the sample mean and uncorrected sample variance are defined as

$$\overline{X} = \frac{1}{n}\sum_{i=1}^n X_i, \qquad S^2 = \frac{1}{n}\sum_{i=1}^n \bigl(X_i - \overline{X}\bigr)^2,$$

then $S^2$ is a biased estimator of σ², because

$$\operatorname{E}[S^2] = \operatorname{E}\!\left[\frac{1}{n}\sum_{i=1}^n \bigl(X_i - \overline{X}\bigr)^2\right] = \operatorname{E}\!\left[\frac{1}{n}\sum_{i=1}^n \bigl((X_i - \mu) - (\overline{X} - \mu)\bigr)^2\right] = \operatorname{E}\!\left[\frac{1}{n}\sum_{i=1}^n (X_i - \mu)^2 - \frac{2}{n}(\overline{X} - \mu)\sum_{i=1}^n (X_i - \mu) + (\overline{X} - \mu)^2\right].$$

To continue, we note that by subtracting $\mu$ from both sides of $\overline{X} = \frac{1}{n}\sum_{i=1}^n X_i$, we get

$$\overline{X} - \mu = \frac{1}{n}\sum_{i=1}^n X_i - \mu = \frac{1}{n}\sum_{i=1}^n (X_i - \mu).$$

Meaning, (by cross-multiplication) $n \cdot (\overline{X} - \mu) = \sum_{i=1}^n (X_i - \mu)$. Then, the previous becomes:

$$\operatorname{E}[S^2] = \operatorname{E}\!\left[\frac{1}{n}\sum_{i=1}^n (X_i - \mu)^2 - 2(\overline{X} - \mu)^2 + (\overline{X} - \mu)^2\right] = \operatorname{E}\!\left[\frac{1}{n}\sum_{i=1}^n (X_i - \mu)^2\right] - \operatorname{E}\bigl[(\overline{X} - \mu)^2\bigr] = \sigma^2 - \operatorname{E}\bigl[(\overline{X} - \mu)^2\bigr] = \frac{n-1}{n}\,\sigma^2 < \sigma^2.$$

This can be seen by noting the following formula, which follows from the Bienaymé formula, for the term in the inequality for the expectation of the uncorrected sample variance above: $\operatorname{E}\bigl[(\overline{X} - \mu)^2\bigr] = \frac{1}{n}\sigma^2$.

In other words, the expected value of the uncorrected sample variance does not equal the population variance σ², unless multiplied by a normalization factor. The sample mean, on the other hand, is an unbiased[5] estimator of the population mean μ.[3]

Note that the usual definition of sample variance is $s^2 = \frac{1}{n-1}\sum_{i=1}^n \bigl(X_i - \overline{X}\bigr)^2$, and this is an unbiased estimator of the population variance.

Algebraically speaking, $s^2$ is unbiased because:

$$\operatorname{E}[s^2] = \operatorname{E}\!\left[\frac{1}{n-1}\sum_{i=1}^n \bigl(X_i - \overline{X}\bigr)^2\right] = \frac{n}{n-1}\,\operatorname{E}[S^2] = \frac{n}{n-1}\cdot\frac{n-1}{n}\,\sigma^2 = \sigma^2,$$

where the final equalities use the result $\operatorname{E}[S^2] = \frac{n-1}{n}\sigma^2$ derived above for the biased estimator. Thus $\operatorname{E}[s^2] = \sigma^2$, and therefore $s^2$ is an unbiased estimator of the population variance, σ². The ratio between the biased (uncorrected) and unbiased estimates of the variance is known as Bessel's correction.

The reason that an uncorrected sample variance, S², is biased stems from the fact that the sample mean is an ordinary least squares (OLS) estimator for μ: $\overline{X}$ is the number that makes the sum $\sum_{i=1}^n (X_i - \overline{X})^2$ as small as possible. That is, when any other number is plugged into this sum, the sum can only increase. In particular, the choice $\mu \neq \overline{X}$ gives,

$$\frac{1}{n}\sum_{i=1}^n \bigl(X_i - \overline{X}\bigr)^2 < \frac{1}{n}\sum_{i=1}^n (X_i - \mu)^2,$$

and then

$$\operatorname{E}[S^2] = \operatorname{E}\!\left[\frac{1}{n}\sum_{i=1}^n \bigl(X_i - \overline{X}\bigr)^2\right] < \operatorname{E}\!\left[\frac{1}{n}\sum_{i=1}^n (X_i - \mu)^2\right] = \sigma^2.$$

The above discussion can be understood in geometric terms: the vector $\vec{C} = (X_1 - \mu, \ldots, X_n - \mu)$ can be decomposed into the "mean part" and "variance part" by projecting to the direction of $\vec{u} = (1, \ldots, 1)$ and to that direction's orthogonal complement hyperplane. One gets $\vec{A} = (\overline{X} - \mu, \ldots, \overline{X} - \mu)$ for the part along $\vec{u}$ and $\vec{B} = (X_1 - \overline{X}, \ldots, X_n - \overline{X})$ for the complementary part. Since this is an orthogonal decomposition, the Pythagorean theorem says $|\vec{C}|^2 = |\vec{A}|^2 + |\vec{B}|^2$, and taking expectations we get $n\sigma^2 = n\operatorname{E}\bigl[(\overline{X} - \mu)^2\bigr] + n\operatorname{E}[S^2]$, as above (but times $n$). If the distribution of $\vec{C}$ is rotationally symmetric, as in the case when the $X_i$ are sampled from a Gaussian, then on average the dimension along $\vec{u}$ contributes to $|\vec{C}|^2$ equally as the $n - 1$ directions perpendicular to $\vec{u}$, so that $\operatorname{E}\bigl[(\overline{X} - \mu)^2\bigr] = \frac{\sigma^2}{n}$ and $\operatorname{E}[S^2] = \frac{(n-1)\sigma^2}{n}$. This is in fact true in general, as explained above.
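
The orthogonal decomposition can also be checked numerically. The following sketch (with arbitrary choices μ = 3 and n = 6) builds the three vectors and verifies both orthogonality and the Pythagorean identity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Decompose C = (X_1 - mu, ..., X_n - mu) into A, the projection onto
# u = (1, ..., 1), and B, the residual (X_1 - Xbar, ..., X_n - Xbar).
mu, n = 3.0, 6
x = rng.normal(mu, 1.5, size=n)
xbar = x.mean()

C = x - mu
A = np.full(n, xbar - mu)  # component along u
B = x - xbar               # component orthogonal to u

print(np.dot(A, B))        # ≈ 0: the two parts are orthogonal
print(np.sum(C**2), np.sum(A**2) + np.sum(B**2))  # equal: |C|^2 = |A|^2 + |B|^2
```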

Estimating a Poisson probability

A far more extreme case of a biased estimator being better than any unbiased estimator arises from the Poisson distribution.[6][7] Suppose that X has a Poisson distribution with expectation λ. Suppose it is desired to estimate

$$e^{-2\lambda}$$

with a sample of size 1. (For example, when incoming calls at a telephone switchboard are modeled as a Poisson process, and λ is the average number of calls per minute, then $e^{-2\lambda}$ (the estimand) is the probability that no calls arrive in the next two minutes.)

Since the expectation of an unbiased estimator δ(X) is equal to the estimand, i.e.

$$\operatorname{E}[\delta(X)] = \sum_{x=0}^{\infty} \delta(x)\,\frac{\lambda^x e^{-\lambda}}{x!} = e^{-2\lambda},$$

the only function of the data constituting an unbiased estimator is

$$\delta(x) = (-1)^x.$$

To see this, note that when decomposing $e^{-\lambda}$ from the above expression for expectation, the sum that is left is a Taylor series expansion of $e^{-\lambda}$ as well, yielding $e^{-\lambda} \cdot e^{-\lambda} = e^{-2\lambda}$ (see Characterizations of the exponential function).
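
Spelled out, cancelling one factor of $e^{-\lambda}$ and matching Taylor coefficients term by term forces the alternating sign:

$$\sum_{x=0}^{\infty} \delta(x)\,\frac{\lambda^x}{x!} = e^{-\lambda} = \sum_{x=0}^{\infty} (-1)^x\,\frac{\lambda^x}{x!} \quad\Longrightarrow\quad \delta(x) = (-1)^x.$$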

If the observed value of X is 100, then the estimate is 1, although the true value of the quantity being estimated is very likely to be near 0, which is the opposite extreme. And, if X is observed to be 101, then the estimate is even more absurd: it is −1, although the quantity being estimated must be positive.

The (biased) maximum likelihood estimator

$$e^{-2X}$$

is far better than this unbiased estimator. Not only is its value always positive but it is also more accurate in the sense that its mean squared error

$$e^{-4\lambda} - 2e^{\lambda(1/e^2 - 3)} + e^{\lambda(1/e^4 - 1)}$$

is smaller; compare the unbiased estimator's MSE of

$$1 - e^{-4\lambda}.$$

The MSEs are functions of the true value λ. The bias of the maximum-likelihood estimator is:

$$\operatorname{E}\bigl[e^{-2X}\bigr] - e^{-2\lambda} = e^{\lambda(1/e^2 - 1)} - e^{-2\lambda}.$$
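
A quick Monte Carlo comparison reproduces both the unbiasedness of $(-1)^X$ and its far larger MSE. This is a sketch; λ = 2 and the number of draws are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Compare the unbiased estimator (-1)^X with the MLE e^(-2X) for
# estimating e^(-2*lambda) from a single Poisson observation.
lam = 2.0
target = np.exp(-2 * lam)
x = rng.poisson(lam, size=1_000_000)

unbiased = (-1.0) ** x
mle = np.exp(-2.0 * x)

for name, est in [("unbiased (-1)^X", unbiased), ("MLE e^(-2X)    ", mle)]:
    print(f"{name}: mean ≈ {est.mean():+.4f}, "
          f"MSE ≈ {((est - target) ** 2).mean():.4f}")
print(f"target e^(-2*lambda) = {target:.4f}")
print(f"unbiased MSE, theory = {1 - np.exp(-4 * lam):.4f}")
```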

Maximum of a discrete uniform distribution

The bias of maximum-likelihood estimators can be substantial. Consider a case where n tickets numbered from 1 to n are placed in a box and one is selected at random, giving a value X. If n is unknown, then the maximum-likelihood estimator of n is X, even though the expectation of X given n is only (n + 1)/2; we can be certain only that n is at least X and is probably more. In this case, the natural unbiased estimator is 2X − 1.
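
A short simulation (with the hypothetical choice n = 20) confirms both claims: the MLE X falls well below n on average, while 2X − 1 is centred on n.

```python
import numpy as np

rng = np.random.default_rng(4)

# One ticket drawn uniformly from {1, ..., n}: the MLE of n is X itself,
# while 2X - 1 is the natural unbiased estimator.
n_true = 20
x = rng.integers(1, n_true + 1, size=1_000_000)

print(f"E[X]      ≈ {x.mean():.2f}   (theory: (n+1)/2 = {(n_true + 1) / 2})")
print(f"E[2X - 1] ≈ {(2 * x - 1).mean():.2f}   (theory: n = {n_true})")
```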

Median-unbiased estimators

The theory of median-unbiased estimators was revived by George W. Brown in 1947:[8]

An estimate of a one-dimensional parameter θ will be said to be median-unbiased, if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation.

Further properties of median-unbiased estimators have been noted by Lehmann, Birnbaum, van der Vaart and Pfanzagl.[9] In particular, median-unbiased estimators exist in cases where mean-unbiased and maximum-likelihood estimators do not exist. They are invariant under one-to-one transformations.

There are methods of constructing median-unbiased estimators for probability distributions that have monotone likelihood functions, such as one-parameter exponential families, to ensure that they are optimal (in a sense analogous to the minimum-variance property considered for mean-unbiased estimators).[10][11] One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: the procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure for mean-unbiased estimation but for a larger class of loss functions.[11]

Bias with respect to other loss functions

Any minimum-variance mean-unbiased estimator minimizes the risk (expected loss) with respect to the squared-error loss function (among mean-unbiased estimators), as observed by Gauss.[12] A minimum-average absolute deviation median-unbiased estimator minimizes the risk with respect to the absolute loss function (among median-unbiased estimators), as observed by Laplace.[12][13] Other loss functions are used in statistics, particularly in robust statistics.[12][14]

Effect of transformations

For univariate parameters, median-unbiased estimators remain median-unbiased under transformations that preserve order (or reverse order). Note that, when a transformation is applied to a mean-unbiased estimator, the result need not be a mean-unbiased estimator of its corresponding population statistic. By Jensen's inequality, a convex function as transformation will introduce positive bias, while a concave function will introduce negative bias, and a function of mixed convexity may introduce bias in either direction, depending on the specific function and distribution. That is, for a non-linear function f and a mean-unbiased estimator U of a parameter p, the composite estimator f(U) need not be a mean-unbiased estimator of f(p). For example, the square root of the unbiased estimator of the population variance is not a mean-unbiased estimator of the population standard deviation: the square root of the unbiased sample variance, the corrected sample standard deviation, is biased. The bias depends both on the sampling distribution of the estimator and on the transform, and can be quite involved to calculate – see unbiased estimation of standard deviation for a discussion in this case.
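
The standard-deviation case is easy to see in simulation. The sketch below uses arbitrary values (σ = 2, n = 5): the unbiased sample variance stays centred on σ², while its square root systematically underestimates σ, as Jensen's inequality predicts for a concave transform.

```python
import numpy as np

rng = np.random.default_rng(5)

# The square root of the unbiased sample variance is a biased (low)
# estimator of sigma, because sqrt is concave.
sigma, n, n_sims = 2.0, 5, 500_000
x = rng.normal(0.0, sigma, size=(n_sims, n))

s2 = x.var(axis=1, ddof=1)  # unbiased for sigma^2
s = np.sqrt(s2)             # corrected sample standard deviation

print(f"E[s^2] ≈ {s2.mean():.4f}   (unbiased: sigma^2 = {sigma**2})")
print(f"E[s]   ≈ {s.mean():.4f}   (biased low: sigma = {sigma})")
```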

Bias, variance and mean squared error

Figure: Sampling distributions of two alternative estimators for a parameter $\beta_0$. Although $\hat\beta_1$ is unbiased, it is clearly inferior to the biased $\hat\beta_2$.

Ridge regression is one example of a technique where allowing a little bias may lead to a considerable reduction in variance, and more reliable estimates overall.

While bias quantifies the average difference to be expected between an estimator and an underlying parameter, an estimator based on a finite sample can additionally be expected to differ from the parameter due to the randomness in the sample. An estimator that minimises the bias will not necessarily minimise the mean square error. One measure which is used to try to reflect both types of difference is the mean square error,[2]

$$\operatorname{MSE}(\hat\theta) = \operatorname{E}\Bigl[\bigl(\hat\theta - \theta\bigr)^2\Bigr].$$

This can be shown to be equal to the square of the bias, plus the variance:[2]

$$\operatorname{MSE}(\hat\theta) = \Bigl(\operatorname{E}[\hat\theta] - \theta\Bigr)^2 + \operatorname{Var}\bigl(\hat\theta\bigr) = \operatorname{Bias}\bigl(\hat\theta, \theta\bigr)^2 + \operatorname{Var}\bigl(\hat\theta\bigr).$$

When the parameter is a vector, an analogous decomposition applies:[15]

$$\operatorname{MSE}(\hat\theta) = \operatorname{trace}\bigl(\operatorname{Cov}(\hat\theta)\bigr) + \bigl\lVert \operatorname{Bias}(\hat\theta, \theta) \bigr\rVert^2,$$

where $\operatorname{trace}(\operatorname{Cov}(\hat\theta))$ is the trace (diagonal sum) of the covariance matrix of the estimator and $\lVert \operatorname{Bias}(\hat\theta, \theta) \rVert^2$ is the squared vector norm of the bias.
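
The decomposition is straightforward to verify by simulation. The sketch below (uncorrected sample variance of standard normal data with n = 10, both arbitrary choices) compares the empirical MSE with the sum of squared bias and variance.

```python
import numpy as np

rng = np.random.default_rng(6)

# Check MSE = bias^2 + variance for the uncorrected sample variance S^2.
sigma2, n, n_sims = 1.0, 10, 1_000_000
x = rng.normal(0.0, 1.0, size=(n_sims, n))
s2 = x.var(axis=1, ddof=0)

bias = s2.mean() - sigma2
var = s2.var()
mse = ((s2 - sigma2) ** 2).mean()

print(f"bias^2 + variance ≈ {bias**2 + var:.6f}")
print(f"MSE               ≈ {mse:.6f}")
```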

Example: Estimation of population variance

For example,[16] suppose an estimator of the form

$$T^2 = c \sum_{i=1}^n \bigl(X_i - \overline{X}\bigr)^2 = c\,nS^2$$

is sought for the population variance as above, but this time to minimise the MSE:

$$\operatorname{MSE} = \operatorname{E}\bigl[(T^2 - \sigma^2)^2\bigr] = \bigl(\operatorname{E}[T^2] - \sigma^2\bigr)^2 + \operatorname{Var}(T^2).$$

If the variables $X_1, \ldots, X_n$ follow a normal distribution, then $nS^2/\sigma^2$ has a chi-squared distribution with n − 1 degrees of freedom, giving:

$$\operatorname{E}[nS^2] = (n-1)\sigma^2 \quad\text{and}\quad \operatorname{Var}(nS^2) = 2(n-1)\sigma^4,$$

and so

$$\operatorname{MSE} = \bigl(c(n-1) - 1\bigr)^2 \sigma^4 + 2c^2(n-1)\sigma^4.$$

With a little algebra it can be confirmed that it is c = 1/(n + 1) which minimises this combined loss function, rather than c = 1/(n − 1) which minimises just the square of the bias.
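
The "little algebra" is a one-line minimisation: differentiating the MSE above with respect to c and setting the derivative to zero gives

$$\frac{d}{dc}\Bigl[\bigl(c(n-1) - 1\bigr)^2 \sigma^4 + 2c^2(n-1)\sigma^4\Bigr] = 2(n-1)\sigma^4\Bigl[\bigl(c(n-1) - 1\bigr) + 2c\Bigr] = 0 \quad\Longrightarrow\quad c(n+1) = 1, \quad c = \frac{1}{n+1}.$$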

More generally it is only in restricted classes of problems that there will be an estimator that minimises the MSE independently of the parameter values.

However it is very common that there may be perceived to be a bias–variance tradeoff, such that a small increase in bias can be traded for a larger decrease in variance, resulting in a more desirable estimator overall.

Bayesian view

Most Bayesians are rather unconcerned about unbiasedness (at least in the formal sampling-theory sense above) of their estimates. For example, Gelman and coauthors (1995) write: "From a Bayesian perspective, the principle of unbiasedness is reasonable in the limit of large samples, but otherwise it is potentially misleading."[17]

Fundamentally, the difference between the Bayesian approach and the sampling-theory approach above is that in the sampling-theory approach the parameter is taken as fixed, and then probability distributions of a statistic are considered, based on the predicted sampling distribution of the data. For a Bayesian, however, it is the data which are known, and fixed, and it is the unknown parameter for which an attempt is made to construct a probability distribution, using Bayes' theorem:

$$p(\theta \mid D) \propto p(\theta) \cdot p(D \mid \theta).$$

Here the second term, the likelihood of the data given the unknown parameter value θ, depends just on the data obtained and the modelling of the data generation process. However a Bayesian calculation also includes the first term, the prior probability for θ, which takes account of everything the analyst may know or suspect about θ before the data comes in. This information plays no part in the sampling-theory approach; indeed any attempt to include it would be considered "bias" away from what was pointed to purely by the data. To the extent that Bayesian calculations include prior information, it is therefore essentially inevitable that their results will not be "unbiased" in sampling theory terms.

But the results of a Bayesian approach can differ from the sampling theory approach even if the Bayesian tries to adopt an "uninformative" prior.

For example, consider again the estimation of an unknown population variance σ² of a Normal distribution with unknown mean, where it is desired to optimise c in the expected loss function

$$\operatorname{ExpectedLoss} = \operatorname{E}\bigl[(c\,nS^2 - \sigma^2)^2\bigr] = \operatorname{E}\Bigl[\sigma^4 \Bigl(c\,n\tfrac{S^2}{\sigma^2} - 1\Bigr)^{\!2}\Bigr].$$

A standard choice of uninformative prior for this problem is the Jeffreys prior, $p(\sigma^2) \propto 1/\sigma^2$, which is equivalent to adopting a rescaling-invariant flat prior for ln(σ²).

One consequence of adopting this prior is that $S^2/\sigma^2$ remains a pivotal quantity, i.e. the probability distribution of $S^2/\sigma^2$ depends only on the ratio $S^2/\sigma^2$, independent of the value of $S^2$ or $\sigma^2$ separately:

$$p\!\left(\tfrac{S^2}{\sigma^2} \,\middle|\, S^2\right) = p\!\left(\tfrac{S^2}{\sigma^2} \,\middle|\, \sigma^2\right) = g\!\left(\tfrac{S^2}{\sigma^2}\right).$$

However, while

$$\operatorname{E}_{p(S^2 \mid \sigma^2)}\!\left[\sigma^4 \Bigl(c\,n\tfrac{S^2}{\sigma^2} - 1\Bigr)^{\!2}\right] = \sigma^4\, \operatorname{E}_{p(S^2 \mid \sigma^2)}\!\left[\Bigl(c\,n\tfrac{S^2}{\sigma^2} - 1\Bigr)^{\!2}\right],$$

in contrast

$$\operatorname{E}_{p(\sigma^2 \mid S^2)}\!\left[\sigma^4 \Bigl(c\,n\tfrac{S^2}{\sigma^2} - 1\Bigr)^{\!2}\right] \neq \sigma^4\, \operatorname{E}_{p(\sigma^2 \mid S^2)}\!\left[\Bigl(c\,n\tfrac{S^2}{\sigma^2} - 1\Bigr)^{\!2}\right].$$

The inequality arises because, when the expectation is taken over the probability distribution of σ² given S², as it is in the Bayesian case, rather than S² given σ², one can no longer take σ? as a constant and factor it out. The consequence of this is that, compared to the sampling-theory calculation, the Bayesian calculation puts more weight on larger values of σ², properly taking into account (as the sampling-theory calculation cannot) that under this squared-loss function the consequence of underestimating large values of σ² is more costly in squared-loss terms than that of overestimating small values of σ².

The worked-out Bayesian calculation gives a scaled inverse chi-squared distribution with n − 1 degrees of freedom for the posterior probability distribution of σ². The expected loss is minimised when $c\,nS^2 = \langle\sigma^2\rangle$, the posterior mean of σ²; this occurs when c = 1/(n − 3).
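
This can be checked numerically. The sketch below assumes the setup above, with hypothetical values n = 10 and nS² = 12; it uses the fact that a scaled inverse chi-squared posterior with n − 1 degrees of freedom is an inverse-gamma distribution with shape (n − 1)/2 and scale nS²/2, and recovers the loss-minimising c by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(7)

# Posterior of sigma^2 given S^2 under the Jeffreys prior:
# scaled inverse chi-squared = InvGamma((n-1)/2, n*S^2/2).
# The posterior expected loss E[(c*n*S^2 - sigma^2)^2] is minimised
# at c*n*S^2 = E[sigma^2 | S^2], i.e. c = 1/(n-3).
n, nS2 = 10, 12.0  # hypothetical sample size and sum of squared deviations

a, b = (n - 1) / 2, nS2 / 2
# If Y ~ Gamma(a, 1), then b / Y ~ InvGamma(a, b).
sigma2 = b / rng.gamma(a, 1.0, size=2_000_000)

c_opt = sigma2.mean() / nS2  # minimiser of E[(c*n*S^2 - sigma^2)^2] over c
print(f"Monte Carlo c ≈ {c_opt:.5f}, theory 1/(n-3) = {1 / (n - 3):.5f}")
```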

Even with an uninformative prior, therefore, a Bayesian calculation may not give the same expected-loss minimising result as the corresponding sampling-theory calculation.

Notes

  1. ^ "For the binomial distribution, why does no unbiased estimator exist for $1/p$?". Mathematics Stack Exchange. Retrieved 2025-08-14.
  2. ^ a b c Kozdron, Michael (March 2016). "Evaluating the Goodness of an Estimator: Bias, Mean-Square Error, Relative Efficiency (Chapter 3)" (PDF). stat.math.uregina.ca. Retrieved 2025-08-14.
  3. ^ a b Taylor, Courtney (January 13, 2019). "Unbiased and Biased Estimators". ThoughtCo. Retrieved 2025-08-14.
  4. ^ Dekking, Michel, ed. (2005). A modern introduction to probability and statistics: understanding why and how. Springer texts in statistics. London [Heidelberg]: Springer. ISBN 978-1-85233-896-1.
  5. ^ Richard Arnold Johnson; Dean W. Wichern (2007). Applied Multivariate Statistical Analysis. Pearson Prentice Hall. ISBN 978-0-13-187715-3. Retrieved 10 August 2012.
  6. ^ Romano, J. P.; Siegel, A. F. (1986). Counterexamples in Probability and Statistics. Monterey, California, USA: Wadsworth & Brooks / Cole. p. 168.
  7. ^ Hardy, M. (1 March 2003). "An Illuminating Counterexample". American Mathematical Monthly. 110 (3): 234–238. arXiv:math/0206006. doi:10.2307/3647938. ISSN 0002-9890. JSTOR 3647938.
  8. ^ Brown (1947), page 583
  9. ^ Lehmann 1951; Birnbaum 1961; Van der Vaart 1961; Pfanzagl 1994
  10. ^ Pfanzagl, Johann (1979). "On optimal median unbiased estimators in the presence of nuisance parameters". The Annals of Statistics. 7 (1): 187–193. doi:10.1214/aos/1176344563.
  11. ^ a b Brown, L. D.; Cohen, Arthur; Strawderman, W. E. (1976). "A Complete Class Theorem for Strict Monotone Likelihood Ratio With Applications". Ann. Statist. 4 (4): 712–722. doi:10.1214/aos/1176343543.
  12. ^ a b c Dodge, Yadolah, ed. (1987). Statistical Data Analysis Based on the L1-Norm and Related Methods. Papers from the First International Conference held at Neuchatel, August 31–September 4, 1987. Amsterdam: North-Holland. ISBN 0-444-70273-3.
  13. ^ Jaynes, E. T. (2007). Probability Theory: The Logic of Science. Cambridge: Cambridge Univ. Press. p. 172. ISBN 978-0-521-59271-0.
  14. ^ Klebanov, Lev B.; Rachev, Svetlozar T.; Fabozzi, Frank J. (2009). "Loss Functions and the Theory of Unbiased Estimation". Robust and Non-Robust Models in Statistics. New York: Nova Scientific. ISBN 978-1-60741-768-2.
  15. ^ Taboga, Marco (2010). "Lectures on probability theory and mathematical statistics".
  16. ^ DeGroot, Morris H. (1986). Probability and Statistics (2nd ed.). Addison-Wesley. pp. 414–5. ISBN 0-201-11366-X. But compare it with, for example, the discussion in Casella; Berger (2001). Statistical Inference (2nd ed.). Duxbury. p. 332. ISBN 0-534-24312-6.
  17. ^ Gelman, A.; et al. (1995). Bayesian Data Analysis. Chapman and Hall. p. 108. ISBN 0-412-03991-5.

References

  • Brown, George W. "On Small-Sample Estimation." The Annals of Mathematical Statistics, vol. 18, no. 4 (Dec., 1947), pp. 582–585. JSTOR 2236236.
  • Lehmann, E. L. (December 1951). "A General Concept of Unbiasedness". The Annals of Mathematical Statistics. 22 (4): 587–592. doi:10.1214/aoms/1177729549. JSTOR 2236928.
  • Birnbaum, Allan (March 1961). "A Unified Theory of Estimation, I". The Annals of Mathematical Statistics. 32 (1): 112–135. doi:10.1214/aoms/1177705145.
  • Van der Vaart, H. R. (June 1961). "Some Extensions of the Idea of Bias". The Annals of Mathematical Statistics. 32 (2): 436–447. doi:10.1214/aoms/1177705051.
  • Pfanzagl, Johann (1994). Parametric Statistical Theory. Walter de Gruyter.
  • Stuart, Alan; Ord, Keith; Arnold, Steven [F.] (2010). Classical Inference and the Linear Model. Kendall's Advanced Theory of Statistics. Vol. 2A. Wiley. ISBN 978-0-4706-8924-0.
  • Voinov, Vassily [G.]; Nikulin, Mikhail [S.] (1993). Unbiased Estimators and Their Applications. Vol. 1: Univariate Case. Dordrecht: Kluwer Academic Publishers. ISBN 0-7923-2382-3.
  • Voinov, Vassily [G.]; Nikulin, Mikhail [S.] (1996). Unbiased Estimators and Their Applications. Vol. 2: Multivariate Case. Dordrecht: Kluwer Academic Publishers. ISBN 0-7923-3939-8.
  • Klebanov, Lev [B.]; Rachev, Svetlozar [T.]; Fabozzi, Frank [J.] (2009). Robust and Non-Robust Models in Statistics. New York: Nova Scientific Publishers. ISBN 978-1-60741-768-2.