In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished.[1] For example, the sample mean is a commonly used estimator of the population mean.

There are point and interval estimators. The point estimators yield single-valued results. This is in contrast to an interval estimator, where the result would be a range of plausible values. "Single value" does not necessarily mean "single number", but includes vector valued or function valued estimators.

Estimation theory is concerned with the properties of estimators; that is, with defining properties that can be used to compare different estimators (different rules for creating estimates) for the same quantity, based on the same data. Such properties can be used to determine the best rules to use under given circumstances. However, in robust statistics, statistical theory goes on to consider the balance between having good properties, if tightly defined assumptions hold, and having worse properties that hold under wider conditions.

Background


An "estimator" or "point estimate" is a statistic (that is, a function of the data) that is used to infer the value of an unknown parameter in a statistical model. A common way of phrasing it is "the estimator is the method selected to obtain an estimate of an unknown parameter". The parameter being estimated is sometimes called the estimand. It can be either finite-dimensional (in parametric and semi-parametric models), or infinite-dimensional (semi-parametric and non-parametric models).[2] If the parameter is denoted ? then the estimator is traditionally written by adding a circumflex over the symbol: ?. Being a function of the data, the estimator is itself a random variable; a particular realization of this random variable is called the "estimate". Sometimes the words "estimator" and "estimate" are used interchangeably.

The definition places virtually no restrictions on which functions of the data can be called the "estimators". The attractiveness of different estimators can be judged by looking at their properties, such as unbiasedness, mean square error, consistency, asymptotic distribution, etc. The construction and comparison of estimators are the subjects of the estimation theory. In the context of decision theory, an estimator is a type of decision rule, and its performance may be evaluated through the use of loss functions.

When the word "estimator" is used without a qualifier, it usually refers to point estimation. The estimate in this case is a single point in the parameter space. There also exists another type of estimator: interval estimators, where the estimates are subsets of the parameter space.

The problem of density estimation arises in two applications. Firstly, in estimating the probability density functions of random variables and secondly in estimating the spectral density function of a time series. In these problems the estimates are functions that can be thought of as point estimates in an infinite dimensional space, and there are corresponding interval estimation problems.

Definition


Suppose a fixed parameter $\theta$ needs to be estimated. Then an "estimator" is a function that maps the sample space to a set of sample estimates. An estimator of $\theta$ is usually denoted by the symbol $\widehat{\theta}$. It is often convenient to express the theory using the algebra of random variables: thus if $X$ is used to denote a random variable corresponding to the observed data, the estimator (itself treated as a random variable) is symbolised as a function of that random variable, $\widehat{\theta}(X)$. The estimate for a particular observed data value $x$ (i.e. for $X = x$) is then $\widehat{\theta}(x)$, which is a fixed value. Often an abbreviated notation is used in which $\widehat{\theta}$ is interpreted directly as a random variable, but this can cause confusion.
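To make the estimator/estimate distinction concrete, here is a minimal sketch in Python (the NumPy import, the normal population, and all parameter values are illustrative assumptions, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# The estimator is a rule: a function mapping any sample to a value.
def sample_mean(x):
    """Estimator of the population mean: theta_hat(x) = (1/n) * sum(x)."""
    return np.sum(x) / len(x)

# One realization x of the random variable X: 50 observations from N(5, 2^2).
x = rng.normal(loc=5.0, scale=2.0, size=50)

# The estimate is the fixed value obtained by applying the rule to this sample.
estimate = sample_mean(x)
print(estimate)  # a single number, typically close to the true mean 5.0
```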

Quantified properties


The following definitions and attributes are relevant.[3]

Error


For a given sample $x$, the "error" of the estimator $\widehat{\theta}$ is defined as

$e(x) = \widehat{\theta}(x) - \theta,$

where $\theta$ is the parameter being estimated. The error, $e$, depends not only on the estimator (the estimation formula or procedure), but also on the sample.

Mean squared error


The mean squared error of $\widehat{\theta}$ is defined as the expected value (probability-weighted average, over all samples) of the squared errors; that is,

$\operatorname{MSE}(\widehat{\theta}) = \operatorname{E}\big[(\widehat{\theta}(X) - \theta)^2\big].$

It is used to indicate how far, on average, the collection of estimates is from the single parameter being estimated. Consider the following analogy. Suppose the parameter is the bull's-eye of a target, the estimator is the process of shooting arrows at the target, and the individual arrows are estimates (samples). Then high MSE means the average distance of the arrows from the bull's-eye is high, and low MSE means the average distance from the bull's-eye is low. The arrows may or may not be clustered. For example, even if all arrows hit the same point yet grossly miss the target, the MSE is still relatively large. However, if the MSE is relatively low, then the arrows are likely more highly clustered (than highly dispersed) around the target.
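The MSE can be approximated by simulation: draw many samples, compute one estimate per sample, and average the squared errors. A sketch in Python (the distribution, sample size, and number of replications are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
theta = 5.0          # true parameter: the population mean (the bull's-eye)
sigma = 2.0          # population standard deviation
n = 30               # sample size
n_reps = 100_000     # number of simulated samples (the "arrows")

# One estimate (sample mean) per simulated sample.
samples = rng.normal(loc=theta, scale=sigma, size=(n_reps, n))
estimates = samples.mean(axis=1)

# Monte Carlo approximation of MSE = E[(theta_hat - theta)^2].
mse = np.mean((estimates - theta) ** 2)
print(mse)               # close to the theoretical value below
print(sigma ** 2 / n)    # for the sample mean, MSE = Var = sigma^2 / n
```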

Sampling deviation


For a given sample $x$, the sampling deviation of the estimator $\widehat{\theta}$ is defined as

$d(x) = \widehat{\theta}(x) - \operatorname{E}\big(\widehat{\theta}(X)\big) = \widehat{\theta}(x) - \operatorname{E}\big(\widehat{\theta}\big),$

where $\operatorname{E}(\widehat{\theta}(X))$ is the expected value of the estimator. The sampling deviation, $d$, depends not only on the estimator, but also on the sample.

Variance


The variance of $\widehat{\theta}$ is the expected value of the squared sampling deviations; that is, $\operatorname{Var}(\widehat{\theta}) = \operatorname{E}[(\widehat{\theta} - \operatorname{E}(\widehat{\theta}))^2]$. It is used to indicate how far, on average, the collection of estimates is from the expected value of the estimates. (Note the difference between MSE and variance.) If the parameter is the bull's-eye of a target, and the arrows are estimates, then a relatively high variance means the arrows are dispersed, and a relatively low variance means the arrows are clustered. Even if the variance is low, the cluster of arrows may still be far off-target, and even if the variance is high, the diffuse collection of arrows may still be unbiased. Finally, even if all arrows grossly miss the target, if they nevertheless all hit the same point, the variance is zero.
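The difference between variance and MSE can be seen with a deliberately biased estimator: its estimates can be tightly clustered (low variance) yet far from the parameter (high MSE). A sketch (the constant offset and all other values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=2)
theta, sigma, n, n_reps = 5.0, 2.0, 30, 100_000

samples = rng.normal(loc=theta, scale=sigma, size=(n_reps, n))

# A biased estimator: the sample mean plus a constant offset of 3.
estimates = samples.mean(axis=1) + 3.0

# Variance: spread around the mean of the estimates themselves ...
variance = np.mean((estimates - estimates.mean()) ** 2)
# ... MSE: spread around the true parameter.
mse = np.mean((estimates - theta) ** 2)

print(variance)  # ~ sigma^2 / n = 0.133: the arrows are tightly clustered
print(mse)       # ~ variance + 3^2 = 9.13: but they miss the bull's-eye
```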

Bias


The bias of $\widehat{\theta}$ is defined as $B(\widehat{\theta}) = \operatorname{E}(\widehat{\theta}) - \theta$. It is the distance between the average of the collection of estimates and the single parameter being estimated. The bias of $\widehat{\theta}$ is a function of the true value of $\theta$, so saying that the bias of $\widehat{\theta}$ is $b$ means that for every $\theta$ the bias of $\widehat{\theta}$ is $b$.

There are two kinds of estimators: biased estimators and unbiased estimators. Whether an estimator is biased or not can be identified by the relationship between $\operatorname{E}(\widehat{\theta}) - \theta$ and 0:

  • If $\operatorname{E}(\widehat{\theta}) - \theta \neq 0$, $\widehat{\theta}$ is biased.
  • If $\operatorname{E}(\widehat{\theta}) - \theta = 0$, $\widehat{\theta}$ is unbiased.

The bias is also the expected value of the error, since $\operatorname{E}(\widehat{\theta}) - \theta = \operatorname{E}(\widehat{\theta} - \theta)$. If the parameter is the bull's-eye of a target and the arrows are estimates, then a relatively high absolute value for the bias means the average position of the arrows is off-target, and a relatively low absolute bias means the average position of the arrows is on target. They may be dispersed, or may be clustered. The relationship between bias and variance is analogous to the relationship between accuracy and precision.

The estimator $\widehat{\theta}$ is an unbiased estimator of $\theta$ if and only if $B(\widehat{\theta}) = 0$. Bias is a property of the estimator, not of the estimate. Often, people refer to a "biased estimate" or an "unbiased estimate", but they really are talking about an "estimate from a biased estimator" or an "estimate from an unbiased estimator". Also, people often confuse the "error" of a single estimate with the "bias" of an estimator. That the error for one estimate is large does not mean the estimator is biased. In fact, even if all estimates have astronomical absolute values for their errors, if the expected value of the error is zero, the estimator is unbiased. Also, an estimator's being biased does not preclude the error of an estimate from being zero in a particular instance. The ideal situation is to have an unbiased estimator with low variance, and also try to limit the number of samples where the error is extreme (that is, to have few outliers). Yet unbiasedness is not essential. Often, if just a little bias is permitted, then an estimator can be found with lower mean squared error and/or fewer outlier sample estimates.

An alternative to the version of "unbiased" above is "median-unbiased", where the median of the distribution of estimates agrees with the true value; thus, in the long run half the estimates will be too low and half too high. While this applies immediately only to scalar-valued estimators, it can be extended to any measure of central tendency of a distribution: see median-unbiased estimators.

In a practical problem, $\widehat{\theta}$ can always have a functional relationship with $\theta$. For example, suppose a genetic theory states there is a type of leaf (starchy green) that occurs with probability $p_1 = \tfrac{1}{4}(\theta + 2)$, with $0 < \theta < 1$. Then, for $n$ leaves, the random variable $N_1$, the number of starchy green leaves, can be modeled with a $\operatorname{Bin}(n, p_1)$ distribution. This number can be used to express the following estimator for $\theta$: $\widehat{\theta} = \tfrac{4}{n} N_1 - 2$. One can show that $\widehat{\theta}$ is an unbiased estimator for $\theta$:

$\operatorname{E}[\widehat{\theta}] = \operatorname{E}\big[\tfrac{4}{n} N_1 - 2\big] = \tfrac{4}{n} \operatorname{E}[N_1] - 2 = \tfrac{4}{n} n p_1 - 2 = 4 p_1 - 2 = 4 \cdot \tfrac{1}{4}(\theta + 2) - 2 = \theta + 2 - 2 = \theta.$
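This unbiasedness can be checked numerically; a sketch in Python (the value $\theta = 0.5$, the number of leaves, and the number of replications are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
theta = 0.5               # arbitrary true value in (0, 1)
p1 = (theta + 2) / 4      # probability of a starchy green leaf
n = 100                   # leaves per experiment
n_reps = 200_000          # number of simulated experiments

# N1 ~ Bin(n, p1): number of starchy green leaves in each experiment.
N1 = rng.binomial(n, p1, size=n_reps)

# The estimator theta_hat = (4/n) * N1 - 2, applied to every experiment.
theta_hat = 4.0 / n * N1 - 2.0

print(theta_hat.mean())   # ~ 0.5: the average estimate recovers theta
```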

Unbiased

Difference between estimators: an unbiased estimator $\widehat{\theta}_2$ is centered around $\theta$ vs. a biased estimator $\widehat{\theta}_1$.

A desired property for estimators is unbiasedness, where an estimator is shown to have no systematic tendency to produce estimates larger or smaller than the true parameter. Additionally, among unbiased estimators, those with smaller variance are preferred, since their estimates tend to be closer to the "true" value of the parameter. The unbiased estimator with the smallest variance is known as the minimum-variance unbiased estimator (MVUE).

To find out whether an estimator $T$ of a parameter of interest $\theta$ is unbiased, check the equation $\operatorname{E}[T] - \theta = 0$; if it can be shown that $\operatorname{E}[T] = \theta$, the estimator is unbiased. Looking at the figure above, despite $\widehat{\theta}_2$ being the only unbiased estimator, if the distributions overlapped and were both centered around $\theta$, then the narrower distribution $\widehat{\theta}_1$ would actually be the preferred unbiased estimator.

Expectation: When the quantity of interest is the expectation of the model distribution, there is an unbiased estimator that should satisfy the two equations below.

$\bar{X} = \dfrac{X_1 + X_2 + \cdots + X_n}{n}$
$\operatorname{E}[\bar{X}] = \mu$

Variance: Similarly, when the quantity of interest is the variance of the model distribution, there is also an unbiased estimator that should satisfy the two equations below.

$S^2 = \dfrac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$
$\operatorname{E}[S^2] = \sigma^2$

Note that we are dividing by $n - 1$ because if we divided by $n$ we would obtain an estimator with a negative bias, which would produce estimates that are too small for $\sigma^2$. It should also be mentioned that, even though $S^2$ is unbiased for $\sigma^2$, the reverse is not true: $S$ is not an unbiased estimator of $\sigma$.[4]
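The effect of the divisor can be verified by simulation; a sketch (the normal population and its variance are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=4)
sigma2 = 4.0              # true population variance
n, n_reps = 10, 200_000   # small samples make the bias easy to see

x = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(n_reps, n))
xbar = x.mean(axis=1, keepdims=True)
ssd = ((x - xbar) ** 2).sum(axis=1)   # sum of squared deviations per sample

print((ssd / (n - 1)).mean())   # ~ 4.0: dividing by n - 1 is unbiased
print((ssd / n).mean())         # ~ 3.6 = (n - 1)/n * 4.0: negatively biased
```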

Relationships among the quantities

  • The mean squared error, variance, and bias are related: $\operatorname{MSE}(\widehat{\theta}) = \operatorname{Var}(\widehat{\theta}) + \big(B(\widehat{\theta})\big)^2$, i.e. mean squared error = variance + square of bias. In particular, for an unbiased estimator, the variance equals the mean squared error.
  • The standard deviation of an estimator $\widehat{\theta}$ of $\theta$ (the square root of the variance), or an estimate of the standard deviation of an estimator $\widehat{\theta}$ of $\theta$, is called the standard error of $\widehat{\theta}$.
  • The bias-variance tradeoff is used in managing model complexity, over-fitting and under-fitting. It is mainly used in the field of supervised learning and predictive modelling to diagnose the performance of algorithms.

Example


Consider a random variable following a normal probability distribution $X \sim \mathcal{N}(\mu, \sigma^2)$, and a biased estimator of the mean $\mu$ of that distribution

$\widehat{\mu} = \dfrac{1}{n} \sum_{i=1}^{n} X_i + \varepsilon = \bar{X} + \varepsilon,$

where $\varepsilon$ follows a degenerate distribution, i.e. $\varepsilon$ takes a single fixed value with probability 1, such that

$B(\widehat{\mu}) = \operatorname{E}(\widehat{\mu}) - \mu = \varepsilon,$
$\sigma_{\widehat{\mu}} = \sqrt{\operatorname{Var}(\widehat{\mu})} = \dfrac{\sigma}{\sqrt{n}},$
$\operatorname{Var}(\widehat{\mu}) = \operatorname{Var}\Big(\dfrac{1}{n} \sum_{i=1}^{n} X_i + \varepsilon\Big) = \dfrac{1}{n^2} \sum_{i=1}^{n} \operatorname{Var}(X_i) = \dfrac{\sigma^2}{n},$

where all the terms are zero except the $\operatorname{Var}(X_i)$ terms, using the Bienaymé formula, and

$\operatorname{MSE}(\widehat{\mu}) = \operatorname{Var}(\widehat{\mu}) + B(\widehat{\mu})^2 = \dfrac{\sigma^2}{n} + \varepsilon^2.$

We verify the relation between the mean square error, the variance and the bias. Below are illustrated the quantified properties of the estimation of the probability distribution mean, taking particular values of $\mu$, $\sigma$ and $\varepsilon$.
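The decomposition $\operatorname{MSE} = \operatorname{Var} + B^2$ can be checked by simulation; a sketch (the values $\mu = 0$, $\sigma = 1$, $n = 100$ and the offset $\varepsilon = 0.1$ are illustrative assumptions, not the values used for the original figures):

```python
import numpy as np

rng = np.random.default_rng(seed=5)
mu, sigma, n = 0.0, 1.0, 100
eps = 0.1                 # the constant (degenerate) offset epsilon
n_reps = 200_000

x = rng.normal(mu, sigma, size=(n_reps, n))
mu_hat = x.mean(axis=1) + eps     # biased estimator: sample mean plus epsilon

bias = mu_hat.mean() - mu                 # ~ eps = 0.1
var = mu_hat.var()                        # ~ sigma^2 / n = 0.01
mse = np.mean((mu_hat - mu) ** 2)         # ~ var + bias^2 = 0.02

print(bias, var, mse, var + bias ** 2)    # the last two agree
```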

Probability density function of the standard normal distribution (blue) with a sample of values and the associated estimate $\widehat{\mu}$. The mean $\mu$ of the original distribution and the mean $\operatorname{E}(\widehat{\mu})$ of the estimator (which is the mean of its sampling distribution; see the next figure) are also shown, along with the error $e$ and the sampling deviation $d$.
Sampling distribution of the estimator $\widehat{\mu}$, with its mean, variance (square of the standard error), and mean square error. In red is the exact distribution, which is known in the case where the original distribution is normal and the estimator is the (biased) sample mean. In green is shown the histogram of 20000 estimates; the histogram converges to the exact distribution in the limit of infinitely many estimates. Note that for a given number of estimates, the central limit theorem ensures that the distribution of the (biased) sample-mean estimator also converges to the exact distribution in the limit of infinite sample size.


Behavioral properties


Consistency


A consistent estimator is an estimator whose sequence of estimates converges in probability to the quantity being estimated as the index (usually the sample size) grows without bound. In other words, increasing the sample size increases the probability of the estimator being close to the population parameter.

Mathematically, an estimator is a consistent estimator for parameter $\theta$ if and only if, for the sequence of estimates $\{t_n;\ n \geq 0\}$ and for all $\varepsilon > 0$, no matter how small, we have

$\lim_{n \to \infty} \Pr\big\{ |t_n - \theta| < \varepsilon \big\} = 1.$

The consistency defined above may be called weak consistency. The sequence is strongly consistent, if it converges almost surely to the true value.
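Weak consistency can be illustrated by estimating the probability above for growing $n$; a sketch using the sample mean (the normal population, $\varepsilon = 0.1$, and the replication counts are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=6)
theta, sigma, eps = 5.0, 2.0, 0.1
n_reps = 2_000   # simulated samples per sample size

for n in (10, 100, 1_000, 10_000):
    samples = rng.normal(loc=theta, scale=sigma, size=(n_reps, n))
    t_n = samples.mean(axis=1)
    # Monte Carlo estimate of P(|t_n - theta| < eps).
    prob = np.mean(np.abs(t_n - theta) < eps)
    print(n, prob)   # the probability climbs toward 1 as n grows
```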

An estimator that converges to a multiple of a parameter can be made into a consistent estimator by multiplying the estimator by a scale factor, namely the true value divided by the asymptotic value of the estimator. This occurs frequently in estimation of scale parameters by measures of statistical dispersion.

Fisher consistency


An estimator can be considered Fisher consistent as long as the estimator is the same functional of the empirical distribution function as of the true distribution function, following the formula:

$\widehat{\theta} = h(F_n), \qquad \theta = h(F_\theta),$

where $F_n$ and $F_\theta$ are the empirical distribution function and the theoretical distribution function, respectively. An easy example is to check the Fisher consistency of the mean and the variance: for the mean, check that $\bar{X} = \int x \, dF_n(x)$, and for the variance, confirm that $\widehat{\sigma}^2 = \int (x - \bar{X})^2 \, dF_n(x) = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2$.[5]
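The plug-in idea can be sketched in code: the empirical distribution puts mass $1/n$ on each observation, so applying a functional $h(F) = \int g(x)\,dF(x)$ to it reduces to an average (the population below is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(seed=7)
x = rng.normal(loc=3.0, scale=1.5, size=200)

# The empirical distribution F_n places probability 1/n on each observation,
# so the functional h(F) = integral of g dF becomes a simple average.
def plug_in(g, x):
    return np.sum(g(x)) / len(x)

mean_hat = plug_in(lambda t: t, x)                   # equals x.mean()
var_hat = plug_in(lambda t: (t - mean_hat) ** 2, x)  # equals x.var() (divisor n)

print(mean_hat, x.mean())
print(var_hat, x.var())   # note: the plug-in variance divides by n, not n - 1
```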

Asymptotic normality


An asymptotically normal estimator is a consistent estimator whose distribution around the true parameter $\theta$ approaches a normal distribution with standard deviation shrinking in proportion to $1/\sqrt{n}$ as the sample size $n$ grows. Using $\xrightarrow{D}$ to denote convergence in distribution, $t_n$ is asymptotically normal if

$\sqrt{n}(t_n - \theta) \xrightarrow{D} N(0, V)$

for some $V$.

In this formulation $V/n$ can be called the asymptotic variance of the estimator. However, some authors also call $V$ the asymptotic variance. Note that convergence will not necessarily have occurred for any finite $n$, therefore this value is only an approximation to the true variance of the estimator, while in the limit the asymptotic variance ($V/n$) is simply zero. To be more specific, the distribution of the estimator $t_n$ converges weakly to a Dirac delta function centered at $\theta$.

The central limit theorem implies asymptotic normality of the sample mean $\bar{X}$ as an estimator of the true mean. More generally, maximum likelihood estimators are asymptotically normal under fairly weak regularity conditions; see the asymptotics section of the maximum likelihood article. However, not all estimators are asymptotically normal; the simplest examples are found when the true value of a parameter lies on the boundary of the allowable parameter region.
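A simulation can make this visible even for a skewed population; a sketch using the exponential distribution with mean 1 (the distribution, sample size, and replication count are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=8)
mu = 1.0                   # exponential(1) has mean 1 and variance V = 1
n, n_reps = 1_000, 10_000

# sqrt(n) * (sample mean - mu) should be approximately N(0, V) despite
# the strong skewness of the underlying exponential distribution.
samples = rng.exponential(scale=1.0, size=(n_reps, n))
z = np.sqrt(n) * (samples.mean(axis=1) - mu)

print(z.mean(), z.var())   # ~ 0 and ~ 1
# Compare a few quantiles with those of N(0, 1): they should roughly agree.
for q, zq in ((0.025, -1.96), (0.5, 0.0), (0.975, 1.96)):
    print(q, np.quantile(z, q), zq)
```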

Efficiency


The efficiency of an estimator is used to estimate the quantity of interest in a "minimum error" manner. In reality, there is not an explicit best estimator; there can only be a better estimator. Whether the efficiency of an estimator is better or not is based on the choice of a particular loss function, and it is reflected by two naturally desirable properties of estimators: to be unbiased, $\operatorname{E}(\widehat{\theta}) = \theta$, and to have minimal mean squared error (MSE), $\operatorname{E}[(\widehat{\theta} - \theta)^2]$. These cannot in general both be satisfied simultaneously: a biased estimator may have a lower mean squared error than any unbiased estimator (see estimator bias). This equation relates the mean squared error with the estimator bias:[4]

$\operatorname{E}\big[(\widehat{\theta} - \theta)^2\big] = \big(\operatorname{E}(\widehat{\theta}) - \theta\big)^2 + \operatorname{Var}(\widehat{\theta})$

The first term represents the mean squared error; the second term represents the square of the estimator bias; and the third term represents the variance of the estimator. The quality of an estimator can be identified by comparing the variances, the squares of the estimator biases, or the MSEs. The variance of a good estimator (good efficiency) is smaller than the variance of a bad estimator (bad efficiency). The square of the bias of a good estimator is smaller than that of a bad estimator. The MSE of a good estimator is smaller than the MSE of a bad estimator. Suppose there are two estimators: $\widehat{\theta}_1$ is the good estimator and $\widehat{\theta}_2$ is the bad estimator. The above relationship can be expressed by the following formulas.

$\operatorname{Var}(\widehat{\theta}_1) < \operatorname{Var}(\widehat{\theta}_2)$
$\big|\operatorname{E}(\widehat{\theta}_1) - \theta\big| < \big|\operatorname{E}(\widehat{\theta}_2) - \theta\big|$
$\operatorname{MSE}(\widehat{\theta}_1) < \operatorname{MSE}(\widehat{\theta}_2)$

Besides using formulas to identify the efficiency of an estimator, it can also be identified through a graph. If an estimator is efficient, then in the frequency vs. value graph there is a curve with high frequency at the center and low frequency on the two sides. If an estimator is not efficient, the curve is comparatively flatter and more spread out. To put it simply, the good estimator has a narrow curve, while the bad estimator has a wide one; plotting these two curves on one graph with a shared y-axis makes the difference obvious (comparison between good and bad estimator).

Among unbiased estimators, there often exists one with the lowest variance, called the minimum variance unbiased estimator (MVUE). In some cases an unbiased efficient estimator exists, which, in addition to having the lowest variance among unbiased estimators, satisfies the Cramér–Rao bound, which is an absolute lower bound on variance for statistics of a variable.
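For example, for normally distributed data both the sample mean and the sample median are unbiased for the center, but the mean has smaller variance and is therefore the more efficient of the two; a sketch (the normal population and the odd sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=9)
mu, n, n_reps = 0.0, 101, 100_000   # odd n gives a well-defined sample median

x = rng.normal(loc=mu, scale=1.0, size=(n_reps, n))
mean_est = x.mean(axis=1)
median_est = np.median(x, axis=1)

print(mean_est.mean(), median_est.mean())  # both ~ 0: both unbiased
print(mean_est.var())      # ~ 1/n      ~ 0.0099
print(median_est.var())    # ~ pi/(2n)  ~ 0.0156: larger, so less efficient
```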

Concerning such "best unbiased estimators", see also Cramér–Rao bound, Gauss–Markov theorem, Lehmann–Scheffé theorem, Rao–Blackwell theorem.


References

  1. ^ Mosteller, F.; Tukey, J. W. (1987) [1968]. "Data Analysis, including Statistics". The Collected Works of John W. Tukey: Philosophy and Principles of Data Analysis 1965–1986. Vol. 4. CRC Press. pp. 601–720 [p. 633]. ISBN 0-534-05101-4 – via Google Books.
  2. ^ Kosorok (2008), Section 3.1, pp. 35–39.
  3. ^ Jaynes (2007), p. 172.
  4. ^ a b Dekking, Frederik Michel; Kraaikamp, Cornelis; Lopuhaä, Hendrik Paul; Meester, Ludolf Erwin (2005). A Modern Introduction to Probability and Statistics. Springer Texts in Statistics. ISBN 978-1-85233-896-1.
  5. ^ Lauritzen, Steffen. "Properties of Estimators" (PDF). University of Oxford. Retrieved 9 December 2023.

Further reading

  • Kosorok, Michael R. (2008). Introduction to Empirical Processes and Semiparametric Inference. Springer.
  • Jaynes, E. T. (2007). Probability Theory: The Logic of Science. Cambridge University Press.
百度