Likelihood principle

From Wikipedia, the free encyclopedia

In statistics, the likelihood principle is the proposition that, given a statistical model, all the evidence in a sample relevant to model parameters is contained in the likelihood function.

A likelihood function arises from a probability density function considered as a function of its distributional parameterization argument. For example, consider a model which gives the probability density function $f_X(x \mid \theta)$ of observable random variable $X$ as a function of a parameter $\theta$. Then for a specific value $x$ of $X$, the function $\mathcal{L}(\theta \mid x) = f_X(x \mid \theta)$ is a likelihood function of $\theta$: it gives a measure of how "likely" any particular value of $\theta$ is, if we know that $X$ has the value $x$. The density function may be a density with respect to counting measure, i.e. a probability mass function.
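
For instance, a minimal sketch in Python (the binomial model and the specific numbers here are illustrative choices, not part of the definition above):

```python
from math import comb

def binomial_pmf(x, n, theta):
    """P(X = x) for X ~ Binomial(n, theta): a pmf when theta is held fixed."""
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# Read the other way, with the observation fixed at x = 3 out of n = 12 trials,
# the same expression becomes a likelihood function of theta:
likelihood = lambda theta: binomial_pmf(3, 12, theta)

print(likelihood(0.25))  # ~0.258   -> theta = 0.25 is relatively well supported
print(likelihood(0.75))  # ~0.00035 -> theta = 0.75 is poorly supported
```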

Two likelihood functions are equivalent if one is a scalar multiple of the other.[a] The likelihood principle is this: All information from the data that is relevant to inferences about the value of the model parameters is in the equivalence class to which the likelihood function belongs. The strong likelihood principle applies this same criterion to cases such as sequential experiments where the sample of data that is available results from applying a stopping rule to the observations earlier in the experiment.[1]

Example


Suppose

  • $X$ is the number of successes in twelve independent Bernoulli trials with probability $\theta$ of success on each trial, and
  • $Y$ is the number of independent Bernoulli trials needed to get a total of three successes, again with probability $\theta$ of success on each trial (for a fair coin, each toss would have probability $\tfrac{1}{2}$ of either outcome, heads or tails).

Then the observation that $X = 3$ induces the likelihood function

$$\mathcal{L}(\theta \mid X = 3) = \binom{12}{3}\,\theta^3 (1-\theta)^9 = 220\,\theta^3 (1-\theta)^9,$$

while the observation that $Y = 12$ induces the likelihood function

$$\mathcal{L}(\theta \mid Y = 12) = \binom{11}{2}\,\theta^3 (1-\theta)^9 = 55\,\theta^3 (1-\theta)^9.$$

The likelihood principle says that, as the data are the same in both cases, the inferences drawn about the value of $\theta$ should also be the same. In addition, all the inferential content in the data about the value of $\theta$ is contained in the two likelihoods, and is the same if they are proportional to one another. This is the case in the above example, reflecting the fact that the difference between observing $X = 3$ and observing $Y = 12$ lies not in the actual data collected, nor in the conduct of the experimenter, but in the two different designs of the experiment.

Specifically, in one case, the decision in advance was to try twelve times, regardless of the outcome; in the other case, the advance decision was to keep trying until three successes were observed. If you accept the likelihood principle, then inference about $\theta$ should be the same for both cases, because the two likelihoods are proportional to each other: except for a constant leading factor of 220 vs. 55, the two likelihood functions are the same, constant multiples of each other.
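
This proportionality is easy to verify numerically. The following sketch (in Python; the function names are ours) evaluates both likelihood functions on a grid of values of $\theta$ and shows that their ratio is the constant 220/55 = 4:

```python
from math import comb

def binomial_lik(theta, n=12, x=3):
    # Likelihood after observing x successes in a fixed number n of trials.
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

def neg_binomial_lik(theta, n=12, r=3):
    # Likelihood after needing n trials to reach the r-th success.
    return comb(n - 1, r - 1) * theta**r * (1 - theta)**(n - r)

for theta in (0.1, 0.25, 0.5, 0.9):
    print(binomial_lik(theta) / neg_binomial_lik(theta))  # 4.0 every time
```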

This equivalence is not always the case, however. The use of frequentist methods involving p values leads to different inferences for the two cases above,[2] showing that the outcome of frequentist methods depends on the experimental procedure, and thus violates the likelihood principle.

The law of likelihood


A related concept is the law of likelihood, the notion that the extent to which the evidence supports one parameter value or hypothesis against another is indicated by the ratio of their likelihoods, their likelihood ratio. That is,

$$\Lambda = \frac{\mathcal{L}(a \mid X = x)}{\mathcal{L}(b \mid X = x)} = \frac{P(X = x \mid a)}{P(X = x \mid b)}$$

is the degree to which the observation $x$ supports parameter value or hypothesis $a$ against $b$. If this ratio is 1, the evidence is indifferent; if greater than 1, the evidence supports the value $a$ against $b$; if less than 1, the reverse.
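
As a concrete illustration, a short sketch under the binomial setup of the earlier example (the particular hypotheses $a$: $\theta = 0.25$ and $b$: $\theta = 0.5$ are chosen for illustration):

```python
from math import comb

def lik(theta, n=12, x=3):
    # Binomial likelihood of theta given x successes in n trials.
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

# Evidence for hypothesis a (theta = 0.25) against b (theta = 0.5),
# given an observation of 3 successes in 12 trials:
print(lik(0.25) / lik(0.5))  # ~4.8 > 1, so the data support a over b
```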

In Bayesian statistics, this ratio is known as the Bayes factor, and Bayes' rule can be seen as the application of the law of likelihood to inference.

In frequentist inference, the likelihood ratio is used in the likelihood-ratio test, but other non-likelihood tests are used as well. The Neyman–Pearson lemma states that the likelihood-ratio test is the most powerful test for comparing two simple hypotheses at a given significance level, which gives a frequentist justification for the law of likelihood.

Combining the likelihood principle with the law of likelihood yields the consequence that the parameter value which maximizes the likelihood function is the value which is most strongly supported by the evidence. This is the basis for the widely used method of maximum likelihood.
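
In the coin example above, both likelihood functions share the kernel $\theta^3(1-\theta)^9$, so they are maximized by the same value $\hat\theta = 3/12$. A minimal numerical sketch (assuming Python with NumPy; a grid search stands in for the analytic solution):

```python
import numpy as np

def log_lik(theta, x=3, n=12):
    # Log of the shared likelihood kernel theta^x * (1 - theta)^(n - x);
    # the constant factors (220 or 55) shift the curve but not its maximizer.
    return x * np.log(theta) + (n - x) * np.log(1 - theta)

grid = np.linspace(0.001, 0.999, 9999)
print(grid[np.argmax(log_lik(grid))])  # ~0.25 = 3/12 under either design
```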

History


The likelihood principle was first identified by that name in print in 1962 (Barnard et al., Birnbaum, and Savage et al.), but arguments for the same principle, unnamed, and the use of the principle in applications go back to the works of R. A. Fisher in the 1920s. The law of likelihood was identified by that name by I. Hacking (1965). More recently the likelihood principle as a general principle of inference has been championed by A. W. F. Edwards. The likelihood principle has been applied to the philosophy of science by R. Royall.[3]

Birnbaum (1962) initially argued that the likelihood principle follows from two more primitive and seemingly reasonable principles, the conditionality principle and the sufficiency principle:

  • The conditionality principle says that if an experiment is chosen by a random process independent of the states of nature $\theta$, then only the experiment actually performed is relevant to inferences about $\theta$.
  • The sufficiency principle says that if $T(X)$ is a sufficient statistic for $\theta$, and if in two experiments with data $x_1$ and $x_2$ we have $T(x_1) = T(x_2)$, then the evidence about $\theta$ given by the two experiments is the same; a small illustration follows below.
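
The sufficiency principle can be made concrete with Bernoulli trials, for which the number of successes $T(X)$ is a sufficient statistic for $\theta$. A sketch (in Python; the two sequences are invented for illustration):

```python
from math import prod

def bernoulli_lik(theta, data):
    # Likelihood of theta given an ordered sequence of 0/1 outcomes.
    return prod(theta if x else 1 - theta for x in data)

# Two different 12-trial sequences with the same sufficient statistic T = 3:
seq1 = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
seq2 = [0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1]
for theta in (0.2, 0.5, 0.8):
    # Both equal theta^3 * (1 - theta)^9: the evidence about theta is the same.
    print(bernoulli_lik(theta, seq1), bernoulli_lik(theta, seq2))
```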

However, upon further consideration Birnbaum rejected both his conditionality principle and the likelihood principle.[4] The adequacy of Birnbaum's original argument has also been contested by others (see below for details).

Arguments for and against


Some widely used methods of conventional statistics, for example many significance tests, are not consistent with the likelihood principle.

Let us briefly consider some of the arguments for and against the likelihood principle.

The original Birnbaum argument


According to Giere (1977),[5] Birnbaum rejected[4] both his own conditionality principle and the likelihood principle because they were both incompatible with what he called the “confidence concept of statistical evidence”, which Birnbaum (1970) describes as taking “from the Neyman-Pearson approach techniques for systematically appraising and bounding the probabilities (under respective hypotheses) of seriously misleading interpretations of data” ([4] p. 1033). The confidence concept incorporates only limited aspects of the likelihood concept and only some applications of the conditionality concept. Birnbaum later notes that it was the unqualified equivalence formulation of his 1962 version of the conditionality principle that led “to the monster of the likelihood axiom” ([6] p. 263).

Birnbaum's original argument for the likelihood principle has also been disputed by other statisticians including Akaike,[7] Evans[8] and philosophers of science, including Deborah Mayo.[9][10] Dawid points out fundamental differences between Mayo's and Birnbaum's definitions of the conditionality principle, arguing Birnbaum's argument cannot be so readily dismissed.[11] A new proof of the likelihood principle has been provided by Gandenberger that addresses some of the counterarguments to the original proof.[12]

Experimental design arguments on the likelihood principle


Unrealized events play a role in some common statistical methods. For example, the result of a significance test depends on the p-value, the probability of a result as extreme as or more extreme than the observation, and that probability may depend on the design of the experiment. To the extent that the likelihood principle is accepted, such methods are therefore rejected.

Some classical significance tests are not based on the likelihood. The following are a simple and a more complicated example of this, using a commonly cited scenario known as the optional stopping problem.

Example 1 – simple version

Suppose I tell you that I tossed a coin 12 times and in the process observed 3 heads. You might make some inference about the probability of heads and whether the coin was fair.

Suppose now I tell you that I tossed the coin until I observed 3 heads, and that it took 12 tosses. Will you now make a different inference?

The likelihood function is the same in both cases: it is proportional to

$$\theta^3 (1-\theta)^9.$$

So according to the likelihood principle, in either case the inference should be the same.

Example 2 – a more elaborate version of the same statistics

Suppose a number of scientists are assessing the probability of a certain outcome (which we shall call 'success') in experimental trials. Conventional wisdom suggests that if there is no bias towards success or failure then the success probability would be one half. Adam, a scientist, conducted 12 trials, obtaining 3 successes and 9 failures. One of those successes was the 12th and last observation. Then Adam left the lab.

Bill, a colleague in the same lab, continued Adam's work and published Adam's results, along with a significance test. He tested the null hypothesis that $p$, the success probability, is equal to one half, against the alternative $p < 1/2$. If we ignore the information that the third success was the 12th and last observation, then under $H_0$ the probability that 3 or fewer of the 12 trials are successes (i.e. an outcome as extreme as, or more extreme than, the one observed) is

$$\sum_{k=0}^{3} \binom{12}{k} \left(\frac{1}{2}\right)^{12},$$

which is 299/4096 ≈ 7.3%. Thus the null hypothesis is not rejected at the 5% significance level if we ignore the knowledge that the third success was the 12th result.

However, observe that this first calculation also includes sequences of 12 tosses that end in tails, contrary to the statement of the problem!

If we redo this calculation, we realize that the likelihood according to the null hypothesis must be the probability of a fair coin landing 2 or fewer heads in 11 trials, multiplied by the probability of the fair coin landing a head on the 12th trial:

$$\left[\sum_{k=0}^{2} \binom{11}{k} \left(\frac{1}{2}\right)^{11}\right] \cdot \frac{1}{2},$$

which is 67/2048 · 1/2 = 67/4096 ≈ 1.64%. Now the result is statistically significant at the 5% level.

Charlotte, another scientist, reads Bill's paper and writes a letter, saying that it is possible that Adam kept trying until he obtained 3 successes, in which case under $H_0$ the probability of needing to conduct 12 or more experiments, with the 12th being a success, is

$$\left[1 - \sum_{y=3}^{11} \binom{y-1}{2} \left(\frac{1}{2}\right)^{y}\right] \cdot \frac{1}{2},$$

which is 134/4096 · 1/2 = 67/4096 ≈ 1.64%. Again the result is statistically significant at the 5% level. Note that there is no contradiction between the latter two analyses; both computations are correct, and result in the same p-value.
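
The arithmetic of all three computations is easy to reproduce (a sketch in Python; the variable names are ours):

```python
from math import comb

# Bill's first calculation: P(X <= 3) for X ~ Binomial(12, 1/2).
p_first = sum(comb(12, k) for k in range(4)) / 2**12
print(p_first)       # 299/4096, about 7.3%

# Bill's corrected calculation: at most 2 heads in the first 11 tosses,
# times the probability that the 12th toss is a head.
p_bill = sum(comb(11, k) for k in range(3)) / 2**11 * 0.5
print(p_bill)        # 67/4096, about 1.64%

# Charlotte's calculation, written as in the text:
# (1 - P(Y <= 11)) * 1/2 for the negative binomial trial count Y.
p_charlotte = (1 - sum(comb(y - 1, 2) * 0.5**y for y in range(3, 12))) * 0.5
print(p_charlotte)   # also 67/4096: the same p-value as Bill's
```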

To these scientists, whether a result is significant or not does not depend on the design of the experiment, but only on the likelihood (in the sense of the likelihood function) of the parameter value being 1/2.

Summary of the illustrated issues

Results of this kind are considered by some as arguments against the likelihood principle. For others, the examples illustrate the value of the likelihood principle and are an argument against significance tests.

Similar themes appear when comparing Fisher's exact test with Pearson's chi-squared test.

The voltmeter story


An argument in favor of the likelihood principle is given by Edwards in his book Likelihood. He cites the following story from J.W. Pratt, slightly condensed here. Note that the likelihood function depends only on what actually happened, and not on what could have happened.

An engineer draws a random sample of electron tubes and measures their voltages. The measurements range from 75 to 99 Volts. A statistician computes the sample mean and a confidence interval for the true mean. Later the statistician discovers that the voltmeter reads only as far as 100 Volts, so technically, the population appears to be “censored”. If the statistician is orthodox this necessitates a new analysis.
However, the engineer says he has another meter reading to 1000 Volts, which he would have used if any voltage had been over 100. This is a relief to the statistician, because it means the population was effectively uncensored after all. But later, the statistician discovers that the second meter had not been working when the measurements were taken. The engineer informs the statistician that he would not have held up the original measurements until the second meter was fixed, and the statistician informs him that new measurements are required. The engineer is astounded. “Next you'll be asking about my oscilloscope!”
Throwback to Example 2 in the prior section

This story can be translated to Adam's stopping rule above, as follows: Adam stopped immediately after 3 successes, because his boss Bill had instructed him to do so. After the publication of the statistical analysis by Bill, Adam realizes that he has missed a later instruction from Bill to instead conduct 12 trials, and that Bill's paper is based on this second instruction. Adam is very glad that he got his 3 successes after exactly 12 trials, and explains to his friend Charlotte that by coincidence he executed the second instruction. Later, Adam is astonished to hear about Charlotte's letter, explaining that now the result is significant.


Notes

  1. ^ Geometrically, if they occupy the same point in projective space.

References

  1. ^ Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. Oxford University Press. ISBN 0-19-920613-9.
  2. ^ Vidakovic, Brani. "The Likelihood Principle" (PDF). H. Milton Stewart School of Industrial & Systems Engineering. Georgia Tech. Retrieved 21 October 2017.
  3. ^ Royall, Richard (1997). Statistical Evidence: A likelihood paradigm. Boca Raton, FL: Chapman and Hall. ISBN 0-412-04411-0.
  4. ^ a b c Birnbaum, A. (14 March 1970). "Statistical methods in scientific inference". Nature. 225 (5237): 1033. Bibcode:1970Natur.225.1033B. doi:10.1038/2251033a0.
  5. ^ Giere, R. (1977). "Allan Birnbaum's conception of statistical evidence". Synthese. 36: 5–13.
  6. ^ Birnbaum, A. (1975). Discussion of J. D. Kalbfleisch's paper "Sufficiency and conditionality". Biometrika. 62: 262–264.
  7. ^ Akaike, H. (1982). "On the fallacy of the likelihood principle". Statistics & Probability Letters. 1 (2): 75–78.
  8. ^ Evans, Michael (2013). "What does the proof of Birnbaum's theorem prove?". arXiv:1302.5468 [math.ST].
  9. ^ Mayo, D. (2010). "An error in the argument from Conditionality and Sufficiency to the Likelihood Principle". In Mayo, D.; Spanos, A. (eds.). Error and Inference: Recent exchanges on experimental reasoning, reliability and the objectivity and rationality of science (PDF). Cambridge, GB: Cambridge University Press. pp. 305–314.
  10. ^ Mayo, D. (2014). "On the Birnbaum argument for the Strong Likelihood Principle". Statistical Science. 29: 227–266 (with discussion).
  11. ^ Dawid, A.P. (2014). "Discussion of "On the Birnbaum argument for the Strong Likelihood Principle"". Statistical Science. 29 (2): 240–241. arXiv:1411.0807. doi:10.1214/14-STS470. S2CID 55068072.
  12. ^ Gandenberger, Greg (2014). "A new proof of the likelihood principle". British Journal for the Philosophy of Science. 66 (3): 475–503. doi:10.1093/bjps/axt039.

Sources

  • Jeffreys, H. (1961). The Theory of Probability. The Oxford University Press.
  • Savage, L.J.; et al. (1962). The Foundations of Statistical Inference. London, UK: Methuen.
External links

  • Miller, Jeff. "L". tripod.com. Earliest known uses of some of the words of mathematics.