Meta-learning (computer science)

From Wikipedia, the free encyclopedia

Meta-learning[1][2] is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017, the term had not found a standard interpretation; however, the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, and hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn.[1]

Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias.[3] This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one domain but poorly in the next. This places strong restrictions on the use of machine learning or data mining techniques, since the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood.

By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta-learning approaches bear a strong resemblance to critiques of metaheuristics, a possibly related problem. A good analogy to meta-learning, and the inspiration for Jürgen Schmidhuber's early work (1987)[1] and Yoshua Bengio et al.'s work (1991),[4] considers that genetic evolution learns the learning procedure encoded in genes and executed in each individual's brain. In an open-ended hierarchical meta-learning system[1] using genetic programming, better evolutionary methods can be learned by meta-evolution, which itself can be improved by meta-meta-evolution, etc.[1]

Definition

A proposed definition[5] for a meta-learning system combines three requirements:

  • The system must include a learning subsystem.
  • Experience is gained by exploiting meta knowledge extracted
    • in a previous learning episode on a single dataset, or
    • from different domains.
  • Learning bias must be chosen dynamically.

Bias refers to the assumptions that influence the choice of explanatory hypotheses[6] and not the notion of bias represented in the bias-variance dilemma. Meta-learning is concerned with two aspects of learning bias.

  • Declarative bias specifies the representation of the space of hypotheses, and affects the size of the search space (e.g., represent hypotheses using linear functions only).
  • Procedural bias imposes constraints on the ordering of the inductive hypotheses (e.g., preferring smaller hypotheses).[7]

Common approaches

There are three common approaches:[8]

  1. using recurrent (cyclic) networks with external or internal memory (model-based)
  2. learning effective distance metrics (metric-based)
  3. explicitly optimizing model parameters for fast learning (optimization-based).

Model-Based

Model-based meta-learning models update their parameters rapidly within a few training steps, either through their internal architecture or under the control of another meta-learner model.[8]

Memory-Augmented Neural Networks

A Memory-Augmented Neural Network, or MANN for short, is claimed to be able to encode new information quickly and thus to adapt to new tasks after only a few examples.[9]

Meta Networks

Meta Networks (MetaNet) learns meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization.[10]

Metric-Based

The core idea in metric-based meta-learning is similar to nearest-neighbour algorithms in which each neighbour's weight is generated by a kernel function. Metric-based meta-learning aims to learn a metric or distance function over objects. The notion of a good metric is problem-dependent: it should represent the relationship between inputs in the task space and facilitate problem solving.[8]
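
As a minimal, self-contained sketch of this kernel-weighted nearest-neighbour view (the toy data and the Gaussian-style kernel are chosen for illustration; this is not any specific published model), a query can be classified by softmax-weighting the labels of the support examples according to their distance from it:

    import numpy as np

    def kernel_nn_predict(support_x, support_y, query_x, n_classes, temp=1.0):
        """Each support point votes for its label with a weight given by a
        kernel (here exp of negative squared distance) applied to the query."""
        d = np.sum((support_x - query_x) ** 2, axis=1)  # squared distances
        w = np.exp(-d / temp)
        w /= w.sum()                                    # normalized kernel weights
        probs = np.zeros(n_classes)
        for wi, yi in zip(w, support_y):
            probs[yi] += wi                             # weighted label votes
        return probs

    # Toy 5-shot, 2-class episode with 2-D "embeddings".
    support_x = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1], [1.2, 0.8]])
    support_y = np.array([0, 0, 1, 1, 1])
    print(kernel_nn_predict(support_x, support_y, np.array([0.2, 0.1]), n_classes=2))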

Convolutional Siamese Neural Network

A Siamese neural network is composed of two twin networks whose outputs are jointly trained, with a function on top learning the relationship between pairs of input data samples. The twin networks are identical, sharing the same weights and network parameters.[11]
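
A minimal PyTorch-style sketch of the idea (the layer sizes and the absolute-difference similarity head are illustrative assumptions, not the exact architecture of Koch et al.); note that a single encoder serves both inputs, which is what weight sharing between the twins means in practice:

    import torch
    import torch.nn as nn

    class SiameseNet(nn.Module):
        def __init__(self, in_dim=784, emb_dim=64):
            super().__init__()
            # One encoder used for both inputs: the "twins" share all weights.
            self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                         nn.Linear(256, emb_dim))
            # Scores pair similarity from the component-wise |difference|.
            self.head = nn.Linear(emb_dim, 1)

        def forward(self, x1, x2):
            e1, e2 = self.encoder(x1), self.encoder(x2)
            return torch.sigmoid(self.head(torch.abs(e1 - e2)))  # P(same class)

    # Trained with binary cross-entropy on same/different pairs:
    net = SiameseNet()
    x1, x2 = torch.randn(8, 784), torch.randn(8, 784)
    same = torch.randint(0, 2, (8, 1)).float()
    loss = nn.functional.binary_cross_entropy(net(x1, x2), same)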

Matching Networks

Matching Networks learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.[12]
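
The core mechanism can be sketched as attention over the support set; here a simple cosine-softmax attention stands in for the paper's full attention kernel, and the embeddings are assumed to come from some already-trained encoder (not shown):

    import torch
    import torch.nn.functional as F

    def matching_predict(emb_support, y_support, emb_query, n_classes):
        """Softmax over cosine similarities to the support set, then an
        attention-weighted sum of one-hot support labels."""
        sims = F.cosine_similarity(emb_query.unsqueeze(1),
                                   emb_support.unsqueeze(0), dim=-1)
        attn = F.softmax(sims, dim=-1)                     # (n_query, n_support)
        one_hot = F.one_hot(y_support, n_classes).float()  # (n_support, n_classes)
        return attn @ one_hot                              # label distributions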

Relation Network

The Relation Network (RN) is trained end-to-end from scratch. During meta-learning, it learns a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting.[13]
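
A sketch of what such a learned comparator might look like (the layer sizes and the simple concatenation are illustrative assumptions rather than the exact architecture of the paper); the point is that the distance function itself is a trainable network rather than a fixed metric:

    import torch
    import torch.nn as nn

    class RelationHead(nn.Module):
        """Scores how related a (query, support) embedding pair is."""
        def __init__(self, emb_dim=64):
            super().__init__()
            self.score = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(),
                                       nn.Linear(64, 1), nn.Sigmoid())

        def forward(self, emb_query, emb_support):
            pair = torch.cat([emb_query, emb_support], dim=-1)  # concatenated pair
            return self.score(pair)                             # relation score in [0, 1]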

Prototypical Networks

Prototypical Networks learn a metric space in which classification can be performed by computing distances to prototype representations of each class. Compared to recent approaches for few-shot learning, they reflect a simpler inductive bias that is beneficial in this limited-data regime and achieve satisfactory results.[14]
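
A minimal sketch of prototypical classification, assuming embeddings produced by some trained encoder (not shown):

    import torch

    def proto_predict(emb_support, y_support, emb_query, n_classes):
        """Class prototypes are mean support embeddings; queries are classified
        by a softmax over negative squared distances to the prototypes."""
        protos = torch.stack([emb_support[y_support == c].mean(dim=0)
                              for c in range(n_classes)])   # (n_classes, d)
        d = torch.cdist(emb_query, protos) ** 2              # squared distances
        return torch.softmax(-d, dim=-1)                     # class probabilities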

Optimization-Based

Optimization-based meta-learning algorithms adjust the optimization process itself so that the model can learn well from only a few examples.[8]

LSTM Meta-Learner

The LSTM-based meta-learner learns the exact optimization algorithm used to train another learner neural-network classifier in the few-shot regime. The parametrization allows it to learn appropriate parameter updates specifically for the scenario where a set number of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training.[15]
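
A much-simplified sketch of a learned, coordinate-wise LSTM update rule in the spirit of this line of work; the architecture here is an illustrative assumption, and detaching the gradient (which discards part of the meta-training signal) is a simplification:

    import torch
    import torch.nn as nn

    class LSTMOptimizer(nn.Module):
        """Reads the learner's gradient and emits a parameter update."""
        def __init__(self, hidden=20):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden)
            self.out = nn.Linear(hidden, 1)

        def forward(self, grad, state=None):
            # Coordinate-wise: each parameter is a batch element, processed
            # one meta-time-step at a time; state carries update history.
            g = grad.detach().reshape(1, -1, 1)        # (time=1, n_params, 1)
            h, state = self.lstm(g, state)
            update = self.out(h).reshape(grad.shape)   # proposed update
            return update, state                       # learner applies theta + update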

Model-Agnostic Meta-Learning

Model-Agnostic Meta-Learning (MAML) is a fairly general optimization algorithm, compatible with any model that learns through gradient descent.[16]
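
A compact sketch of one MAML meta-update for a model given as a list of parameter tensors; the single inner step, the task format, and the loss_fn argument are simplifying assumptions, but the second-order gradient through the inner update is the characteristic ingredient of the method:

    import torch

    def maml_step(theta, tasks, loss_fn, inner_lr=0.01, outer_lr=0.001):
        """theta: list of tensors with requires_grad=True.
        tasks: list of (train_data, val_data) pairs.
        loss_fn(params, data) -> scalar loss."""
        meta_grads = [torch.zeros_like(p) for p in theta]
        for train_data, val_data in tasks:
            # Inner loop: one gradient step on the task's training split,
            # keeping the graph so the outer gradient can flow through it.
            g = torch.autograd.grad(loss_fn(theta, train_data), theta,
                                    create_graph=True)
            adapted = [p - inner_lr * gi for p, gi in zip(theta, g)]
            # Outer loss: adapted parameters, evaluated on the validation split,
            # differentiated with respect to the original initialization.
            og = torch.autograd.grad(loss_fn(adapted, val_data), theta)
            meta_grads = [m + o for m, o in zip(meta_grads, og)]
        with torch.no_grad():                          # meta-update of the init
            for p, m in zip(theta, meta_grads):
                p -= outer_lr * m / len(tasks)
        return theta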

Reptile

Reptile is a remarkably simple meta-learning optimization algorithm: it repeatedly samples a task, trains on it with ordinary gradient descent, and then moves the model's initialization toward the task-adapted weights. Like MAML, it relies only on meta-optimization through gradient descent and is model-agnostic.[17]
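
A sketch of one Reptile meta-update, with sample_task assumed to be a hypothetical helper returning a per-task loss function over a model; interpolating the initialization toward the task-adapted weights is essentially the whole algorithm:

    import copy
    import torch

    def reptile_step(model, sample_task, inner_steps=5, inner_lr=0.01, meta_lr=0.1):
        task_loss = sample_task()                    # loss_fn(model) for one task
        learner = copy.deepcopy(model)
        opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                 # inner loop: plain SGD
            opt.zero_grad()
            task_loss(learner).backward()
            opt.step()
        with torch.no_grad():                        # meta-update: interpolate
            for p, q in zip(model.parameters(), learner.parameters()):
                p += meta_lr * (q - p)               # move init toward adapted weights
        return model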

Examples

Some approaches which have been viewed as instances of meta-learning:

  • Recurrent neural networks (RNNs) are universal computers. In 1993, Jürgen Schmidhuber showed how "self-referential" RNNs can in principle learn by backpropagation to run their own weight-change algorithm, which may be quite different from backpropagation.[18] In 2001, Sepp Hochreiter, A. S. Younger and P. R. Conwell built a successful supervised meta-learner based on Long short-term memory RNNs. It learned through backpropagation a learning algorithm for quadratic functions that is much faster than backpropagation.[19][2] Researchers at DeepMind (Marcin Andrychowicz et al.) extended this approach to optimization in 2016.[20]
  • In the 1990s, Meta Reinforcement Learning or Meta RL was achieved in Schmidhuber's research group through self-modifying policies written in a universal programming language that contains special instructions for changing the policy itself. There is a single lifelong trial. The goal of the RL agent is to maximize reward. It learns to accelerate reward intake by continually improving its own learning algorithm which is part of the "self-referential" policy.[21][22]
  • An extreme type of Meta Reinforcement Learning is embodied by the Gödel machine, a theoretical construct that can inspect and modify any part of its own software, which also contains a general theorem prover. It can achieve recursive self-improvement in a provably optimal way.[23][2]
  • Model-Agnostic Meta-Learning (MAML) was introduced in 2017 by Chelsea Finn et al.[16] Given a sequence of tasks, the parameters of a given model are trained such that few iterations of gradient descent with few training data from a new task will lead to good generalization performance on that task. MAML "trains the model to be easy to fine-tune."[16] MAML was successfully applied to few-shot image classification benchmarks and to policy-gradient-based reinforcement learning.[16]
  • Variational Bayes-Adaptive Deep RL (VariBAD) was introduced in 2019.[24] While MAML is optimization-based, VariBAD is a model-based method for meta reinforcement learning, and leverages a variational autoencoder to capture the task information in an internal memory, thus conditioning its decision making on the task.
  • When addressing a set of tasks, most meta-learning approaches optimize the average score across all tasks. Hence, certain tasks may be sacrificed in favor of the average score, which is often unacceptable in real-world applications. By contrast, Robust Meta Reinforcement Learning (RoML) focuses on improving low-score tasks, increasing robustness to the selection of tasks.[25] RoML works as a meta-algorithm, as it can be applied on top of other meta-learning algorithms (such as MAML and VariBAD) to increase their robustness. It is applicable to both supervised meta-learning and meta reinforcement learning.
  • Discovering meta-knowledge works by inducing knowledge (e.g. rules) that expresses how each learning method will perform on different learning problems. The metadata is formed by characteristics of the data (general, statistical, information-theoretic, ...) in the learning problem, and by characteristics of the learning algorithm (type, parameter settings, performance measures, ...). Another learning algorithm then learns how the data characteristics relate to the algorithm characteristics. Given a new learning problem, the data characteristics are measured, and the performance of different learning algorithms is predicted. Hence, one can predict the algorithms best suited for the new problem.
  • Stacked generalisation works by combining multiple (different) learning algorithms. The metadata is formed by the predictions of those different algorithms. Another learning algorithm learns from this metadata to predict which combinations of algorithms give generally good results. Given a new learning problem, the predictions of the selected set of algorithms are combined (e.g. by (weighted) voting) to provide the final prediction. Since each algorithm is deemed to work well on a subset of problems, the combination is hoped to be more flexible and to make good predictions; a runnable sketch follows this list.
  • Boosting is related to stacked generalisation, but uses the same algorithm multiple times, where the examples in the training data get different weights over each run. This yields different predictions, each focused on rightly predicting a subset of the data, and combining those predictions leads to better (but more expensive) results.
  • Dynamic bias selection works by altering the inductive bias of a learning algorithm to match the given problem. This is done by altering key aspects of the learning algorithm, such as the hypothesis representation, heuristic formulae, or parameters. Many different approaches exist.
  • Inductive transfer studies how the learning process can be improved over time. Metadata consists of knowledge about previous learning episodes and is used to efficiently develop an effective hypothesis for a new task. A related approach is called learning to learn, in which the goal is to use acquired knowledge from one domain to help learning in other domains.
  • Other approaches using metadata to improve automatic learning are learning classifier systems, case-based reasoning and constraint satisfaction.
  • Some initial theoretical work has been undertaken on using Applied Behavioral Analysis as a foundation for agent-mediated meta-learning about the performance of human learners, and on adjusting the instructional course of an artificial agent accordingly.[26]
  • AutoML, such as Google Brain's "AI building AI" project, which according to Google briefly exceeded existing ImageNet benchmarks in 2017.[27][28]
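
As a concrete, runnable illustration of the stacked-generalisation entry above, scikit-learn's StackingClassifier trains a meta-level model on the predictions of several base learners (the dataset here is synthetic):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, random_state=0)  # toy problem
    # The base learners produce the metadata (their predictions); the final
    # estimator learns which combinations of those predictions work well.
    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression())
    stack.fit(X, y)
    print(stack.score(X, y))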

References

  1. ^ a b c d e Schmidhuber, Jürgen (1987). "Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook" (PDF). Diploma Thesis, Tech. Univ. Munich.
  2. ^ a b c Schaul, Tom; Schmidhuber, Jürgen (2010). "Metalearning". Scholarpedia. 5 (6): 4650. Bibcode:2010SchpJ...5.4650S. doi:10.4249/scholarpedia.4650.
  3. ^ P. E. Utgoff (1986). "Shift of bias for inductive concept learning". In R. Michalski; J. Carbonell; T. Mitchell (eds.). Machine Learning: An Artificial Intelligence Approach. Morgan Kaufmann. pp. 163–190. ISBN 978-0-934613-00-2.
  4. ^ Bengio, Yoshua; Bengio, Samy; Cloutier, Jocelyn (1991). Learning to learn a synaptic rule (PDF). IJCNN'91.
  5. ^ Lemke, Christiane; Budka, Marcin; Gabrys, Bogdan (2015). "Metalearning: a survey of trends and technologies". Artificial Intelligence Review. 44 (1): 117–130. doi:10.1007/s10462-013-9406-y. ISSN 0269-2821. PMC 4459543. PMID 26069389.
  6. ^ Brazdil, Pavel; Carrier, Christophe Giraud; Soares, Carlos; Vilalta, Ricardo (2009). Metalearning - Springer. Cognitive Technologies. doi:10.1007/978-3-540-73263-1. ISBN 978-3-540-73262-4.
  7. ^ Gordon, Diana; Desjardins, Marie (1995). "Evaluation and Selection of Biases in Machine Learning" (PDF). Machine Learning. 20: 5–22. doi:10.1023/A:1022630017346. Retrieved 27 March 2020.
  8. ^ a b c d Weng, Lilian (30 November 2018). "Meta-Learning: Learning to Learn Fast". OpenAI Blog. Retrieved 27 October 2019.
  9. ^ Santoro, Adam; Bartunov, Sergey; Wierstra, Daan; Lillicrap, Timothy. "Meta-Learning with Memory-Augmented Neural Networks" (PDF). Google DeepMind. Retrieved 29 October 2019.
  10. ^ Munkhdalai, Tsendsuren; Yu, Hong (2017). "Meta Networks". Proceedings of Machine Learning Research. 70: 2554–2563. arXiv:1703.00837. PMC 6519722. PMID 31106300.
  11. ^ Koch, Gregory; Zemel, Richard; Salakhutdinov, Ruslan (2015). "Siamese Neural Networks for One-shot Image Recognition" (PDF). Toronto, Ontario, Canada: Department of Computer Science, University of Toronto.
  12. ^ Vinyals, O.; Blundell, C.; Lillicrap, T.; Kavukcuoglu, K.; Wierstra, D. (2016). "Matching networks for one shot learning" (PDF). Google DeepMind. Retrieved 3 November 2019.
  13. ^ Sung, F.; Yang, Y.; Zhang, L.; Xiang, T.; Torr, P. H. S.; Hospedales, T. M. (2018). "Learning to compare: relation network for few-shot learning" (PDF).
  14. ^ Snell, J.; Swersky, K.; Zemel, R. S. (2017). "Prototypical networks for few-shot learning" (PDF).
  15. ^ Ravi, Sachin; Larochelle, Hugo (2017). Optimization as a model for few-shot learning. ICLR 2017. Retrieved 3 November 2019.
  16. ^ a b c d Finn, Chelsea; Abbeel, Pieter; Levine, Sergey (2017). "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks". arXiv:1703.03400 [cs.LG].
  17. ^ Nichol, Alex; Achiam, Joshua; Schulman, John (2018). "On First-Order Meta-Learning Algorithms". arXiv:1803.02999 [cs.LG].
  18. ^ Schmidhuber, Jürgen (1993). "A self-referential weight matrix". Proceedings of ICANN'93, Amsterdam: 446–451.
  19. ^ Hochreiter, Sepp; Younger, A. S.; Conwell, P. R. (2001). "Learning to Learn Using Gradient Descent". Proceedings of ICANN'01: 87–94.
  20. ^ Andrychowicz, Marcin; Denil, Misha; Gomez, Sergio; Hoffmann, Matthew; Pfau, David; Schaul, Tom; Shillingford, Brendan; de Freitas, Nando (2016). "Learning to learn by gradient descent by gradient descent". Advances in Neural Information Processing Systems (NIPS 2016). arXiv:1606.04474.
  21. ^ Schmidhuber, Jürgen (1994). "On learning how to learn learning strategies" (PDF). Technical Report FKI-198-94, Tech. Univ. Munich.
  22. ^ Schmidhuber, Jürgen; Zhao, J.; Wiering, M. (1997). "Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement". Machine Learning. 28: 105–130. doi:10.1023/a:1007383707642.
  23. ^ Schmidhuber, Jürgen (2006). "Gödel machines: Fully Self-Referential Optimal Universal Self-Improvers". In B. Goertzel & C. Pennachin, Eds.: Artificial General Intelligence: 199–226.
  24. ^ Zintgraf, Luisa; Schulze, Sebastian; Lu, Cong; Feng, Leo; Igl, Maximilian; Shiarlis, Kyriacos; Gal, Yarin; Hofmann, Katja; Whiteson, Shimon (2021). "VariBAD: Variational Bayes-Adaptive Deep RL via Meta-Learning". Journal of Machine Learning Research. 22 (289): 1–39. ISSN 1533-7928.
  25. ^ Greenberg, Ido; Mannor, Shie; Chechik, Gal; Meirom, Eli (2023). "Train Hard, Fight Easy: Robust Meta Reinforcement Learning". Advances in Neural Information Processing Systems. 36: 68276–68299.
  26. ^ Begoli, Edmon (May 2014). "Procedural-Reasoning Architecture for Applied Behavior Analysis-based Instructions". Doctoral Dissertations. Knoxville, Tennessee, USA: University of Tennessee, Knoxville: 44–79. Retrieved 14 October 2017.
  27. ^ "Robots Are Now 'Creating New Robots,' Tech Reporter Says". NPR.org. 2018. Retrieved 29 March 2018.
  28. ^ "AutoML for large scale image classification and object detection". Google Research Blog. November 2017. Retrieved 29 March 2018.