
A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation.

The largest and most capable LLMs are generative pretrained transformers (GPTs), which are largely used in generative chatbots such as ChatGPT, Gemini or Claude. LLMs can be fine-tuned for specific tasks or guided by prompt engineering.[1] These models acquire predictive power regarding syntax, semantics, and ontologies[2] inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.[3]

History

The training compute of notable large models in FLOPs vs publication date over the period 2010–2024. For overall notable models (top left), frontier models (top right), top language models (bottom left) and top models within leading companies (bottom right). The majority of these models are language models.
The training compute of notable large AI models in FLOPs vs publication date over the period 2017–2024. The majority of large models are language models or multimodal models with language capacity.

Before the emergence of transformer-based models in 2017, some language models were considered large relative to the computational and data constraints of their time. In the early 1990s, IBM's statistical models pioneered word alignment techniques for machine translation, laying the groundwork for corpus-based language modeling. In 2001, a smoothed n-gram model (such as those employing Kneser-Ney smoothing) trained on 300 million words achieved state-of-the-art perplexity on the benchmark tests of the time.[4] During the 2000s, with the rise of widespread internet access, researchers began compiling massive text datasets from the web ("web as corpus"[5]) to train statistical language models.[6][7]

Moving beyond n-gram models, researchers started in 2000 to use neural networks to learn language models.[8] Following the breakthrough of deep neural networks in image classification around 2012,[9] similar architectures were adapted for language tasks. This shift was marked by the development of word embeddings (e.g., Word2Vec by Mikolov et al. in 2013) and sequence-to-sequence (seq2seq) models using LSTM. In 2016, Google transitioned its translation service to neural machine translation (NMT), replacing statistical phrase-based models with deep recurrent neural networks. These early NMT systems used LSTM-based encoder-decoder architectures, as they preceded the invention of transformers.

An illustration of the main components of the transformer model from the original paper, where layers were normalized after (instead of before) multiheaded attention

At the 2017 NeurIPS conference, Google researchers introduced the transformer architecture in their landmark paper "Attention Is All You Need". This paper's goal was to improve upon 2014 seq2seq technology,[10] and was based mainly on the attention mechanism developed by Bahdanau et al. in 2014.[11] The following year in 2018, BERT was introduced and quickly became "ubiquitous".[12] Though the original transformer has both encoder and decoder blocks, BERT is an encoder-only model. Academic and research usage of BERT began to decline in 2023, following rapid improvements in the abilities of decoder-only models (such as GPT) to solve tasks via prompting.[13]

Although decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention because OpenAI claimed to have initially deemed it too powerful to release publicly, out of fear of malicious use.[14] GPT-3 in 2020 went a step further and as of 2025 is available only via API, with no option to download the model for local execution. But it was the 2022 consumer-facing chatbot ChatGPT that received extensive media coverage and public attention.[15] The 2023 GPT-4 was praised for its increased accuracy and as a "holy grail" for its multimodal capabilities.[16] OpenAI did not reveal the high-level architecture and the number of parameters of GPT-4. The release of ChatGPT led to an uptick in LLM usage across several research subfields of computer science, including robotics, software engineering, and societal impact work.[13] In 2024, OpenAI released the reasoning model OpenAI o1, which generates long chains of thought before returning a final answer.[17] Many LLMs with parameter counts comparable to those of OpenAI's GPT series have been developed.[18]

Since 2022, source-available models have been gaining popularity, especially at first with BLOOM and LLaMA, though both have restrictions on the field of use. Mistral AI's models Mistral 7B and Mixtral 8x7b have the more permissive Apache License. In January 2025, DeepSeek released DeepSeek R1, a 671-billion-parameter open-weight model that performs comparably to OpenAI o1 but at a much lower cost.[19]

Since 2023, many LLMs have been trained to be multimodal, having the ability to also process or generate other types of data, such as images or audio. These LLMs are also called large multimodal models (LMMs).[20]

As of 2024, the largest and most capable models are all based on the transformer architecture. Some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model).[21][22][23]

Dataset preprocessing


Tokenization


As machine learning algorithms process numbers rather than text, the text must be converted to numbers. In the first step, a vocabulary is decided upon, then integer indices are arbitrarily but uniquely assigned to each vocabulary entry, and finally, an embedding is associated with each integer index. Algorithms include byte-pair encoding (BPE) and WordPiece. There are also special tokens serving as control characters, such as [MASK] for a masked-out token (as used in BERT), and [UNK] ("unknown") for characters not appearing in the vocabulary. Also, some special symbols are used to denote special text formatting. For example, "Ġ" denotes a preceding whitespace in RoBERTa and GPT. "##" denotes continuation of a preceding word in BERT.[24]

For example, the BPE tokenizer used by GPT-3 (Legacy) would split tokenizer: texts -> series of numerical "tokens" as

token izer :  texts  -> series  of  numerical  " t ok ens "

Tokenization also compresses the datasets. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be "padded" until they match the length of the longest one. How many tokens are, on average, needed per word depends on the language of the dataset.[25][26]

BPE


As an example, consider a tokenizer based on byte-pair encoding. In the first step, all unique characters (including blanks and punctuation marks) are treated as an initial set of n-grams (i.e. an initial set of uni-grams). Successively, the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. All occurrences of adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into an even lengthier n-gram, until a vocabulary of prescribed size is obtained (in the case of GPT-3, the size is 50257).[27] After a tokenizer is trained, any text can be tokenized by it, as long as it does not contain characters not appearing in the initial set of uni-grams.[28]
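
The training loop can be sketched in a few lines of Python; the toy corpus, target vocabulary size, and function name below are illustrative rather than those of any production tokenizer:

```python
from collections import Counter

def train_bpe(corpus: list[str], vocab_size: int) -> list[tuple[str, str]]:
    """Learn BPE merge rules from a toy corpus (illustrative sketch)."""
    words = [list(w) for w in corpus]          # start from single characters (uni-grams)
    merges = []
    while len({sym for w in words for sym in w}) < vocab_size:
        pairs = Counter()                       # count adjacent symbol pairs
        for w in words:
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)        # most frequent adjacent pair
        merges.append(best)
        merged = best[0] + best[1]
        new_words = []
        for w in words:                         # replace every occurrence of the pair
            out, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and (w[i], w[i + 1]) == best:
                    out.append(merged); i += 2
                else:
                    out.append(w[i]); i += 1
            new_words.append(out)
        words = new_words
    return merges

print(train_bpe(["low", "lower", "lowest", "newest", "widest"], vocab_size=15))
```

Production tokenizers such as GPT-2's operate on bytes rather than characters, but the merge loop is the same idea.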

Problems


A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. However, an average word in another language encoded by such an English-optimized tokenizer is split into a suboptimal number of tokens. The GPT-2 tokenizer can use up to 15 times more tokens per word for some languages, for example for the Shan language from Myanmar. Even more widely spoken languages such as Portuguese and German have "a premium of 50%" compared to English.[26]

Greedy tokenization also causes subtle problems with text completion.[29]

Dataset cleaning


In the context of training LLMs, datasets are typically cleaned by removing low-quality, duplicated, or toxic data.[30] Cleaned datasets can increase training efficiency and lead to improved downstream performance.[31][32] A trained LLM can be used to clean datasets for training a further LLM.[33]

With the increasing proportion of LLM-generated content on the web, data cleaning in the future may include filtering out such content. LLM-generated content can pose a problem if the content is similar to human text (making filtering difficult) but of lower quality (degrading performance of models trained on it).[1]

Synthetic data


Training the largest language models may require more linguistic data than is naturally available, or the naturally occurring data may be of insufficient quality. In these cases, synthetic data might be used. Microsoft's Phi series of LLMs is trained on textbook-like data generated by another LLM.[34]

Training

Training workflow of original ChatGPT/InstructGPT release.[35][36]

An LLM is a type of foundation model (large X model) trained on language.[37] LLMs can be trained in different ways. In particular, GPT models are first pretrained to predict the next word on a large amount of data, before being fine-tuned.[38]

Pre-training cost


The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". As time goes on, what was previously considered "large" may evolve. GPT-1 of 2018 is usually considered the first LLM, even though it has only 117 million parameters. The tendency towards larger models is visible in the list of large language models.

As technology advanced, large sums have been invested in increasingly large models. Substantial infrastructure is necessary for training the largest models.[39][40][41] For example, the training of GPT-2 (i.e. a 1.5-billion-parameters model) in 2019 cost $50,000, while training of the PaLM (i.e. a 540-billion-parameters model) in 2022 cost $8 million, and Megatron-Turing NLG 530B (in 2021) cost around $11 million.[42]

For transformer-based LLMs, training cost is much higher than inference cost. It costs 6 FLOPs per parameter to train on one token, whereas it costs 1 to 2 FLOPs per parameter to infer on one token.[43]: §2.1; Table 1
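
As a rough illustration of these rules of thumb, the following sketch estimates training and inference compute from a parameter count and a token count (the model sizes used are hypothetical):

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def infer_flops(n_params: float, n_tokens: float, flops_per_param: float = 2) -> float:
    """Approximate inference compute: ~1-2 FLOPs per parameter per token."""
    return flops_per_param * n_params * n_tokens

# Hypothetical 70-billion-parameter model trained on 1.4 trillion tokens.
print(f"training:  {train_flops(70e9, 1.4e12):.2e} FLOPs")                     # ~5.9e23
print(f"inference: {infer_flops(70e9, 1_000):.2e} FLOPs for 1,000 tokens")
```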

Fine-tuning


Before being fine-tuned, most LLMs are next-token predictors. Fine-tuning can make an LLM adopt a conversational format in which it plays the role of the assistant.[44] Techniques like reinforcement learning from human feedback (RLHF) or constitutional AI can be used to instill human preferences and make LLMs more "helpful, honest, and harmless".[45][44]

Instruction fine-tuning


Instruction fine-tuning is a form of supervised learning used to teach LLMs to follow instructions.[44] In 2021, Google Research released FLAN, a new model fine-tuned to follow a wide range of instructions. It could perform a task given a verbal instruction without needing any examples.[46] In 2022, OpenAI demonstrated InstructGPT, a version of GPT-3.5 similarly fine-tuned to follow instructions. Instead of completing the sentence (e.g. following the instruction "Write an essay about the main themes represented in Hamlet" with "If you submit the essay after March 17, your grade will be reduced by 10% for each day of delay" based on the frequency of this textual sequence in the corpus), the instruction-following models have a preference to actually act on the instruction.[44]

Reinforcement learning from human feedback


RLHF involves training a reward model to predict which text humans prefer. Then, the LLM can be fine-tuned through reinforcement learning to better satisfy this reward model. Since humans typically prefer truthful, helpful and harmless answers, RLHF favors such answers.[44]
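
The reward model is commonly trained on pairwise human comparisons with a Bradley-Terry-style loss; the sketch below shows that loss in plain Python, with the scalar rewards standing in for the reward model's outputs (the numbers are hypothetical):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise loss -log(sigmoid(r_chosen - r_rejected)).

    Minimizing it pushes the reward model to score the human-preferred
    completion above the rejected one; the LLM is then fine-tuned with
    reinforcement learning to maximize the learned reward.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

print(preference_loss(reward_chosen=1.3, reward_rejected=-0.4))  # small loss: ranking is correct
print(preference_loss(reward_chosen=-0.4, reward_rejected=1.3))  # large loss: ranking is wrong
```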

Architecture


LLMs are generally based on the transformer architecture, which leverages an attention mechanism that enables the model to process relationships between all elements in a sequence simultaneously, regardless of their distance from each other.[47]

Attention mechanism and context window

Each attention head calculates, according to its own criteria, how relevant other tokens are to the "it_" token. Note that the second attention head, represented by the second column, focuses most on the first two rows, i.e. the tokens "The" and "animal", while the third column focuses most on the bottom two rows, i.e. on "tired", which has been tokenized into two tokens.[48]

In order to find out which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates "soft" weights for each token, more precisely for its embedding, by using multiple attention heads, each with its own "relevance" for calculating its own soft weights. For example, the small (i.e. 117M-parameter) GPT-2 model had twelve attention heads and a context window of only 1k tokens.[49] The medium version, with 345M parameters, contains 24 layers, each with 12 attention heads. For training with gradient descent, a batch size of 512 was used.[28]
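
The soft weights of a single head are computed by scaled dot-product attention over query, key and value projections of the embeddings; a minimal NumPy sketch (the matrix sizes are illustrative, not GPT-2's):

```python
import numpy as np

def scaled_dot_product_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """One attention head: softmax(Q K^T / sqrt(d_k)) applied to the values V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # relevance of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # the "soft" weights, one row per query token
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))      # 4 tokens, head dimension 8
print(scaled_dot_product_attention(Q, K, V).shape)         # (4, 8)
```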

The largest models, such as Google's Gemini 1.5, presented in February 2024, can have a context window of up to 1 million tokens (a context window of 10 million was also "successfully tested").[50] Other models with large context windows include Anthropic's Claude 2.1, with a context window of up to 200k tokens.[51] Note that this maximum refers to the number of input tokens and that the maximum number of output tokens differs from the input and is often smaller. For example, the GPT-4 Turbo model has a maximum output of 4096 tokens.[52]

The length of a conversation that the model can take into account when generating its next answer is likewise limited by the size of the context window. If a conversation, for example with ChatGPT, is longer than the context window, only the parts inside the context window are taken into account when generating the next answer, unless the model applies some algorithm to summarize the parts of the conversation that lie too far back.

The shortcomings of making a context window larger include higher computational cost and possibly diluting the focus on local context, while making it smaller can cause a model to miss an important long-range dependency. Balancing them is a matter of experimentation and domain-specific considerations.

A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset.[53] It can be either

  • autoregressive (i.e. predicting how the segment continues, as GPTs do): for example given a segment "I like to eat", the model predicts "ice cream", or "sushi".
  • "masked" (i.e. filling in the parts missing from the segment, the way "BERT"[54] does it): for example, given a segment "I like to [__] [__] cream", the model predicts that "eat" and "ice" are missing.

Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus.[54] During training, regularization loss is also used to stabilize training. However regularization loss is usually not used during testing and evaluation.

Mixture of experts


A mixture of experts (MoE) is a machine learning architecture in which multiple specialized neural networks ("experts") work together, with a gating mechanism that routes each input to the most appropriate expert(s). Mixtures of experts can reduce inference costs, as only a fraction of the parameters are used for each input. The approach was introduced in 2017 by Google researchers.[55][56][57]
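
A minimal sketch of top-k gating over a set of expert networks (the expert count, the value of k, and the linear experts are illustrative):

```python
import numpy as np

def moe_forward(x: np.ndarray, experts: list, gate_w: np.ndarray, k: int = 2) -> np.ndarray:
    """Route the input x to the top-k experts selected by a softmax gate."""
    logits = gate_w @ x                                   # one gating score per expert
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    top = np.argsort(probs)[-k:]                          # indices of the k most appropriate experts
    weights = probs[top] / probs[top].sum()               # renormalize over the selected experts
    # Only the selected experts are evaluated, which is what reduces inference cost.
    return sum(w * experts[i](x) for i, w in zip(top, weights))

rng = np.random.default_rng(0)
dim, n_experts = 16, 8
experts = [(lambda x, W=rng.normal(size=(dim, dim)): W @ x) for _ in range(n_experts)]
gate_w = rng.normal(size=(n_experts, dim))
print(moe_forward(rng.normal(size=dim), experts, gate_w).shape)   # (16,)
```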

Parameter size


Typically, LLMs are trained with single- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places them outside the range of most consumer electronics.[58]
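
The arithmetic behind these figures is straightforward; a minimal sketch (parameter counts chosen only for illustration):

```python
def weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """Memory needed just to hold the weights, ignoring activations and other overheads."""
    return n_params * bytes_per_param / 1e9

print(weight_memory_gb(1e9, 2))     # 1 billion params in float16    ->   2.0 GB
print(weight_memory_gb(100e9, 2))   # 100 billion params in float16  -> 200.0 GB
print(weight_memory_gb(100e9, 4))   # 100 billion params in float32  -> 400.0 GB
```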

Quantization


Post-training quantization[59] aims to decrease the space requirement by lowering precision of the parameters of a trained model, while preserving most of its performance.[60][61] Quantization can be further classified as static quantization if the quantization parameters are determined beforehand (typically during a calibration phase), and dynamic quantization if the quantization is applied during inference. The simplest form of quantization simply truncates all the parameters to a given number of bits: this is applicable to static as well as dynamic quantization, but loses much precision. Dynamic quantization allows for the use of a different quantization codebook per layer, either a lookup table of values or a linear mapping (scaling factor and bias), at the cost of foregoing the possible speed improvements from using lower-precision arithmetic.[62]
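
A minimal sketch of the linear-mapping variant (a per-tensor scaling factor and a zero-point), quantizing a weight tensor to 8-bit integers and mapping it back; this illustrates the idea rather than any particular library's implementation:

```python
import numpy as np

def quantize_linear(w: np.ndarray, bits: int = 8):
    """Map float weights to unsigned integers with a scale and zero-point."""
    qmax = 2 ** bits - 1
    scale = (w.max() - w.min()) / qmax
    zero_point = round(-w.min() / scale)
    q = np.clip(np.round(w / scale) + zero_point, 0, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_linear(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale, zp = quantize_linear(w)
print(q.nbytes, "bytes instead of", w.nbytes)                 # 4x smaller than float32
print(np.abs(w - dequantize_linear(q, scale, zp)).max())      # small reconstruction error
```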

Quantized models are typically seen as frozen with modification of weights (e.g. fine-tuning) only applied to the original model. However, it is still possible to fine-tune quantized models using low-rank adaptation.[63] Furthermore, more advanced methods to reduce precision loss from quantized models also need a training-like step:[64]

  • Quantization-aware training (QAT, 2020) adds a representation of quantization loss to the training of the parent network, which can be improved using ordinary backpropagation. It is expensive to train but effective on a wide range of models, not only LLMs.[65]
  • GPT Quantization (GPTQ, 2022) minimizes the squared error of each layer's output given a limited choice of possible values for weights.
  • Activation-aware quantization (AWQ, 2023) keeps the most important weights in fp16. Sparse-Quantized Representation (SpQR) also keeps the particularly important parameters ("outlier weights") in higher precision.[66]
  • Unsloth's "dynamic" method (2024), not to be confused with the dynamic quantization from above, selects important layers for keeping in higher-precision.[67]
  • Distilled weight quantization (DWQ, 2025) from Apple uses distillation to find good scaling factors and biases.

Extensibility


Beyond basic text generation, various techniques have been developed to extend LLM capabilities, including the use of external tools and data sources, improved reasoning on complex problems, and enhanced instruction-following or autonomy through prompting methods.

Prompt engineering


In 2020, OpenAI researchers demonstrated that their new model GPT-3 could work out what format to use from a few rounds of Q and A (or another type of task) included in the input as examples. This technique, called few-shot prompting, allows LLMs to be adapted to a task without requiring fine-tuning.[1] In 2022, it was found that the base GPT-3 model can generate an instruction based on user input. The generated instruction along with the user input is then used as input to another instance of the model under an "Instruction: [...], Input: [...], Output:" format. The other instance is able to complete the output and often produces the correct answer in doing so. This ability to "self-instruct" lets LLMs bootstrap themselves toward a correct answer.[68]
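
A sketch of what such a few-shot prompt looks like in practice (the task and the examples are made up):

```python
few_shot_prompt = """Translate English to French.

English: cheese
French: fromage

English: house
French: maison

English: cat
French:"""

# A base model asked to continue this text will typically output " chat",
# having inferred the task and its format from the examples alone.
print(few_shot_prompt)
```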

Dialogue processing (chatbot)


An LLM can be turned into a chatbot or a "dialog assistant" by specializing it for conversation. In essence, user input is prefixed with a marker such as "Q:" or "User:" and the LLM is asked to predict the output after a fixed "A:" or "Assistant:". This type of model became commercially available in 2022 with ChatGPT, a sibling model of InstructGPT fine-tuned to accept and produce dialog-formatted text based on GPT-3.5. It could similarly follow user instructions.[69] Before the stream of User and Assistant lines, a chat context usually starts with a few lines of overarching instructions, from a role called "developer" or "system" to convey a higher authority than the user's input. This is called a "system prompt".[70][71]

Retrieval-augmented generation


Retrieval-augmented generation (RAG) is an approach that enhances LLMs by integrating them with document retrieval systems. Given a query, a document retriever is called to retrieve the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents with vectors (usually stored in a vector database) most similar to the vector of the query. The LLM then generates an output based on both the query and context included from the retrieved documents.[72]
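
A minimal sketch of the retrieval step and prompt assembly, using cosine similarity over document vectors; the `embed` function here is a random stand-in for a real embedding model, and the assembled prompt would then be passed to the LLM:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding model; a real system would call a trained text encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents whose vectors are most similar to the query vector."""
    q = embed(query)
    def cosine(d: str) -> float:
        v = embed(d)
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    return sorted(documents, key=cosine, reverse=True)[:k]

def rag_prompt(query: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = ["The Sharks lost the 2016 Stanley Cup finals.", "Paris is the capital of France."]
print(rag_prompt("Have the Sharks won the Stanley Cup?", docs))
```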

Tool use


Tool use is a mechanism that enables LLMs to interact with external systems, applications, or data sources. It can allow the model, for example, to fetch real-time information from an API or to execute code. A program separate from the LLM watches the output stream of the LLM for a special tool-calling syntax. When these special tokens appear, the program calls the tool accordingly and feeds its output back into the LLM's input stream.[73]
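
A sketch of such a controller loop; the tool-calling syntax, the toy calculator tool, and the placeholder `llm_generate` function are all hypothetical, since each system defines its own conventions:

```python
import re

TOOLS = {"calculator": lambda expr: str(eval(expr))}    # toy tool registry (never eval untrusted input)

def llm_generate(prompt: str) -> str:
    """Stand-in for the underlying LLM; a real call would go to a model."""
    if "[RESULT:" in prompt:
        return "37 * 12 is 444."
    return 'I should compute this. [TOOL:calculator("37*12")]'

def run_with_tools(prompt: str) -> str:
    output = llm_generate(prompt)
    # Watch the model's output for the special tool-calling syntax.
    match = re.search(r'\[TOOL:(\w+)\("([^"]*)"\)\]', output)
    if match:
        name, arg = match.groups()
        result = TOOLS[name](arg)                       # call the external tool
        # Feed the tool's result back into the model's input stream and continue.
        return run_with_tools(prompt + "\n" + output + f"\n[RESULT: {result}]")
    return output

print(run_with_tools("What is 37 * 12?"))               # -> "37 * 12 is 444."
```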

Early tool-using LLMs were fine-tuned on the use of specific tools. But fine-tuning LLMs for the ability to read API documentation and call APIs correctly has greatly expanded the range of tools accessible to an LLM.[74][75] Describing available tools in the system prompt can also make an LLM able to use tools. A system prompt instructing ChatGPT (GPT-4) to use multiple types of tools can be found online.[76]

Memory


An LLM only has access to the current conversation, but it can be given long-term memory as an external tool. Memory formation happens when the LLM calls the tool to write to the external storage. Retrieval can happen as a full context injected into the start of every conversation, or as another "tool" that is called on demand. The retrieval tool can be based on a simple key-value store or based on semantic search like retrieval-augmented generation.[77]

Agency


An LLM is typically not an autonomous agent by itself, as it lacks the ability to interact with dynamic environments, recall past behaviors, and plan future actions. But it can be transformed into an agent by adding supporting elements: the role (profile) and the surrounding environment of an agent can be additional inputs to the LLM, while memory can be integrated as a tool or provided as additional input. Instructions and input patterns are used to make the LLM plan actions and tool use is used to potentially carry out these actions.[78]

The ReAct pattern, a portmanteau of "Reason + Act", constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to "think out loud". Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment.[79]
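
A sketch of that loop; the goal, the action set, the toy environment, and the placeholder `llm` planner are illustrative stand-ins:

```python
def llm(prompt: str) -> str:
    """Stand-in planner; a real agent would query an actual LLM here."""
    return "Thought: the apple is probably in the kitchen.\nAction: go_to(kitchen)"

def react_episode(goal: str, actions: list[str], env_step, max_turns: int = 5) -> list[str]:
    record = []                                          # running record of thoughts, actions, observations
    for _ in range(max_turns):
        prompt = (f"Goal: {goal}\nPossible actions: {actions}\n"
                  + "\n".join(record) + "\nThought:")
        step = llm(prompt)                               # the model "thinks out loud", then picks an action
        record.append(step)
        action = step.split("Action:")[-1].strip()
        observation, done = env_step(action)             # execute the action in the environment
        record.append(f"Observation: {observation}")
        if done:
            break
    return record

toy_env = lambda action: ("you are in the kitchen and see an apple", True)
print(react_episode("find the apple", ["go_to(room)", "pick_up(item)"], toy_env))
```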

In the DEPS ("Describe, Explain, Plan and Select") method, an LLM is first connected to the visual world via image descriptions. It is then prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and the environmental feedback it receives.[80]

The Reflexion method[81] constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up "lessons learned", which would help it perform better at a subsequent episode. These "lessons learned" are stored as a form of long-term memory and given to the agent in the subsequent episodes.[81]

Monte Carlo tree search can use an LLM as a rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as a world model.[82]

For open-ended exploration, an LLM can be used to score observations for their "interestingness", which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent.[83] Alternatively, it can propose increasingly difficult tasks for curriculum learning.[84] Instead of outputting individual actions, an LLM planner can also construct "skills", or functions for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning.[84]

Multiple agents with memory can interact socially.[85]

Reasoning


LLMs are conventionally trained to generate an output without generating intermediate steps. As a result their performance tends to be subpar on complex questions requiring (at least in humans) intermediate steps of thought. This deficiency has been overcome by breaking down the tasks into smaller steps for the LLM either manually or automatically.

Chaining


The "prompt chaining" paradigm was published in 2021.[86] In this method, a user manually breaks a complex problem down into several steps. In each step, the LLM receives as input a prompt telling it what to do and some results from preceeding steps. The result from one step is then reused in a next step, until a final answer is reached. The ability of an LLM to follow instructions means that even non-experts can write a successful collection of step-wise prompts given a few rounds of trial and error.[87][88]

A 2022 paper demonstrated a separate technique called "Chain-of-Thought Prompting", which makes the LLM break the question down autonomously. An LLM is given some examples where the "assistant" verbally breaks down the thought process before arriving at an answer. The LLM mimics these examples and also tries to spend some time generating intermediate steps before providing the final answer. This additional step elicited by prompting improves the correctness of the LLM on relatively complex questions. On math word questions, a prompted model can exceed even fine-tuned GPT-3 with a verifier.[86][89] Chain-of-thought can also be elicited by simply adding an instruction like "Let's think step by step" to the prompt, in order to encourage the LLM to proceed methodically instead of trying to directly guess the answer.[90]
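
In its simplest zero-shot form, the prompt is just the question plus that trigger phrase (the question below is a standard illustrative example):

```python
question = "A cafeteria had 23 apples. It used 20 to make lunch and bought 6 more. How many apples are left?"
cot_prompt = f"Q: {question}\nA: Let's think step by step."

# The model is then expected to generate the intermediate steps
# ("23 - 20 = 3, then 3 + 6 = 9") before stating the final answer, 9.
print(cot_prompt)
```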

Model-native reasoning


In late 2024, a new direction emerged in LLM development with models specifically designed for complex reasoning tasks. These "reasoning models" were trained to spend more time generating step-by-step solutions before providing final answers, similar to human problem-solving processes.[91] OpenAI introduced this trend with their o1 model in September 2024, followed by o3 in April 2025. These models showed significant improvements in mathematics, science, and coding tasks compared to traditional LLMs. For example, on the International Mathematics Olympiad qualifying exam problems, GPT-4o achieved 13% accuracy while o1 reached 83%.[91][92]

In January 2025, the Chinese company DeepSeek released DeepSeek-R1, a 671-billion-parameter open-weight reasoning model that achieved comparable performance to OpenAI's o1 while being significantly more cost-effective to operate. Unlike proprietary models from OpenAI, DeepSeek-R1's open-weight nature allowed researchers to study and build upon the algorithm, though its training data remained private.[93]

These reasoning models typically require more computational resources per query compared to traditional LLMs, as they perform more extensive processing to work through problems step-by-step. However, they have shown superior capabilities in domains requiring structured logical thinking, such as mathematics, scientific research, and computer programming.[92]

Forms of input and output


Multimodality


Multimodality means having multiple modalities, where a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc.[94] For example, the Google PaLM model was fine-tuned into a multimodal model and applied to robotic control.[95] LLaMA models have also been turned multimodal using the tokenization method, to allow image inputs,[96] and video inputs.[97] GPT-4o can process and generate text, audio and images.[98] Such models are sometimes called large multimodal models (LMMs).[99]

A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoder E. Make a small multilayered perceptron f, so that for any image y, the post-processed vector f(E(y)) has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability.[100] The model Flamingo demonstrated in 2022 the effectiveness of the tokenization method, fine-tuning a pair of pretrained language model and image encoder to perform better on visual question answering than models trained from scratch.[101]
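
A minimal sketch of that projection step; the dimensions and the two-layer perceptron below are illustrative, not those of Flamingo or any specific model:

```python
import numpy as np

def image_to_tokens(image_features: np.ndarray, W1: np.ndarray, W2: np.ndarray) -> np.ndarray:
    """Project image-encoder features into the LLM's token-embedding space."""
    hidden = np.maximum(0.0, image_features @ W1)        # small MLP with a ReLU non-linearity
    return hidden @ W2                                   # one "image token" per feature vector

rng = np.random.default_rng(0)
enc_dim, llm_dim = 512, 768                              # illustrative encoder / embedding sizes
patches = rng.normal(size=(16, enc_dim))                 # e.g. 16 feature vectors from an image encoder
W1 = 0.02 * rng.normal(size=(enc_dim, 1024))
W2 = 0.02 * rng.normal(size=(1024, llm_dim))
image_tokens = image_to_tokens(patches, W1, W2)
print(image_tokens.shape)                                # (16, 768): ready to interleave with text tokens
```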

Non-natural languages


LLMs can handle programming languages similarly to how they handle natural languages. No special change in token handling is needed as code, like human language, is represented as plain text. LLMs can generate code based on problems or instructions written in natural language. They can also describe code in natural language or translate between programming languages. They were originally used as a code completion tool, but advances have moved them towards automatic programming. Services such as GitHub Copilot offer LLMs specifically trained, fine-tuned, or prompted for programming.[102][103]

LLM architectures have also proven useful in analyzing biological sequences: protein, DNA, and RNA. With proteins they appear able to capture a degree of "grammar" from the amino-acid sequence, condensing a sequence into an embedding. On tasks such as structure prediction and mutational outcome prediction, a small model using an embedding as input can approach or exceed much larger models using multiple sequence alignments (MSA) as input.[104] ESMFold, Meta Platforms' embedding-based method for protein structure prediction, runs an order of magnitude faster than AlphaFold2 thanks to the removal of an MSA requirement and a lower parameter count due to the use of embeddings.[105] Meta hosts ESM Atlas, a database of 772 million structures of metagenomic proteins predicted using ESMFold.[106] An LLM can also design proteins unlike any seen in nature.[107] Nucleic acid models have proven useful in detecting regulatory sequences,[108] sequence classification, RNA-RNA interaction prediction, and RNA structure prediction.[109]

Properties


Scaling laws


The performance of an LLM after pretraining largely depends on:

  • the cost of pretraining (the total amount of compute used),
  • the size of the artificial neural network itself, such as the number of parameters N (i.e. the number of neurons in its layers, and of weights and biases between them),
  • the size of its pretraining dataset (i.e. the number of tokens in the corpus, D).

"Scaling laws" are empirical statistical laws that predict LLM performance based on such factors. One particular scaling law ("Chinchilla scaling") for LLM autoregressively trained for one epoch, with a log-log learning rate schedule, states that:[110] where the variables are

  • is the cost of training the model, in FLOPs.
  • is the number of parameters in the model.
  • is the number of tokens in the training set.
  • is the average negative log-likelihood loss per token (nats/token), achieved by the trained LLM on the test dataset.

and the statistical hyper-parameters are

  • , meaning that it costs 6 FLOPs per parameter to train on one token. Note that training cost is much higher than inference cost, where it costs 1 to 2 FLOPs per parameter to infer on one token.[43]

Emergent abilities


At point(s) referred to as breaks,[111] the lines change their slopes, appearing on a linear-log plot as a series of linear segments connected by arcs.

Performance of bigger models on various tasks, when plotted on a log-log scale, appears as a linear extrapolation of performance achieved by smaller models. However, this linearity may be punctuated by "break(s)"[111] in the scaling law, where the slope of the line changes abruptly, and where larger models acquire "emergent abilities".[112][113] They arise from the complex interaction of the model's components and are not explicitly programmed or designed.[114]

Furthermore, recent research has demonstrated that AI systems, including large language models, can employ heuristic reasoning akin to human cognition. They balance between exhaustive logical processing and the use of cognitive shortcuts (heuristics), adapting their reasoning strategies to optimize between accuracy and effort. This behavior mimics principles of resource-rational human cognition, as discussed in classical theories of bounded rationality and dual-process theory.[115]

One of the emergent abilities is in-context learning from example demonstrations.[116] In-context learning is involved in tasks, such as:

  • reported arithmetics
  • decoding the International Phonetic Alphabet
  • unscrambling a word's letters
  • disambiguating word-in-context datasets[112][117][118]
  • converting spatial words, cardinal directions (for example, replying "northeast" in response to a 3x3 grid of 8 zeros and a 1 in the top-right), and color terms represented in text.[119]
  • chain-of-thought prompting: In a 2022 research paper, chain-of-thought prompting only improved the performance for models that had at least 62B parameters. Smaller models perform better when prompted to answer immediately, without chain of thought.[120]
  • identifying offensive content in paragraphs of Hinglish (a combination of Hindi and English), and generating a similar English equivalent of Kiswahili proverbs.[121]

Schaeffer et al. argue that the emergent abilities are not unpredictably acquired, but predictably acquired according to a smooth scaling law. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well.[122]

Let x be the parameter count and y be the performance of the model.

  • When y is the average probability of a correct token, y follows an exponential curve (before it hits the plateau at one), which looks like emergence.
  • When y is the average log-probability of a correct token, the plot is a straight line (before it hits the plateau at zero), which does not look like emergence.
  • When y is the probability of getting every token of the answer correct (as in exact-match or multiple-choice accuracy), y is a step-function, which looks like emergence.

Interpretation


Large language models are typically regarded as black boxes, and it is not clear how they can perform linguistic tasks. Similarly, it is unclear if or how LLMs should be viewed as models of the human brain and/or human mind.[123]

Mechanistic interpretability


Various techniques have been developed to enhance the transparency and interpretability of LLMs. Mechanistic interpretability aims to reverse-engineer LLMs by discovering symbolic algorithms that approximate the inference performed by an LLM. In recent years, sparse coding models such as sparse autoencoders, transcoders, and crosscoders have emerged as promising tools for identifying interpretable features.

For instance, one study trained small transformers on modular arithmetic addition. The resulting models were reverse-engineered, and it turned out they used the discrete Fourier transform.[124] The training of the model also highlighted a phenomenon called grokking, in which the model initially memorizes all the possible results in the training set (overfitting), and later suddenly learns to actually perform the calculation.[125]

Transcoders, which are more interpretable than transformers, have been utilized to develop "replacement models". In one such study involving the mechanistic interpretation of an LLM writing a rhyming poem, it was shown that although LLMs are often believed to simply predict the next token, they can, in fact, plan ahead.[126]

By integrating these techniques, researchers and practitioners can gain deeper insights into the operations of LLMs, fostering trust and facilitating the responsible deployment of these powerful models.

Understanding and intelligence


NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs "could (ever) understand natural language in some nontrivial sense".[127] Proponents of "LLM understanding" believe that some LLM abilities, such as mathematical reasoning, imply an ability to "understand" certain concepts. A Microsoft team argued in 2023 that GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more" and that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system": "Can one reasonably say that a system that passes exams for software engineering candidates is not really intelligent?"[128][129] Ilya Sutskever argues that predicting the next word sometimes involves reasoning and deep insights, for example if the LLM has to predict the name of the criminal in an unknown detective novel after processing the entire story leading up to the revelation.[130] Some researchers characterize LLMs as "alien intelligence".[131][132] For example, Conjecture CEO Connor Leahy considers untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding."[133][134]

In contrast, some skeptics of LLM understanding believe that existing LLMs are "simply remixing and recombining existing writing",[132] a phenomenon known as stochastic parrot, or they point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability.[127] For example, GPT-4 has natural deficits in planning and in real-time learning.[129] Generative LLMs have been observed to confidently assert claims of fact which do not seem to be justified by their training data, a phenomenon which has been termed "hallucination".[135] Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input.[136] Neuroscientist Terrence Sejnowski has argued that "The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate".[127]

Efforts to reduce or compensate for hallucinations have employed automated reasoning, RAG (retrieval-augmented generation), fine-tuning, and other methods.[137]

The matter of LLMs exhibiting intelligence or understanding has two main aspects – the first is how to model thought and language in a computer system, and the second is how to enable the computer system to generate human-like language.[127] These aspects of language as a model of cognition have been developed in the field of cognitive linguistics. American linguist George Lakoff presented the Neural Theory of Language (NTL)[138] as a computational basis for using language as a model of learning tasks and understanding. The NTL model outlines how specific neural structures of the human brain shape the nature of thought and language, and in turn what computational properties of such neural systems can be applied to model thought and language in a computer system. After a framework for modeling language in computer systems was established, the focus shifted to establishing frameworks for computer systems to generate language with acceptable grammar. In his 2014 book The Language Myth: Why Language Is Not An Instinct, British cognitive linguist and digital communication technologist Vyvyan Evans mapped out the role of probabilistic context-free grammar (PCFG) in enabling NLP to model cognitive patterns and generate human-like language.[139][140]

Evaluation


Perplexity


The canonical measure of the performance of any language model is its perplexity on a given text corpus. Perplexity measures how well a model predicts the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. In mathematical terms, perplexity is the exponential of the average negative log likelihood per token:

Perplexity = exp( −(1/N) Σ_{i=1..N} log Pr(token_i | context for token_i) )

Here, N is the number of tokens in the text corpus, and "context for token i" depends on the specific type of LLM. If the LLM is autoregressive, then "context for token i" is the segment of text appearing before token i. If the LLM is masked, then "context for token i" is the segment of text surrounding token i.
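
A minimal sketch of the computation; the per-token probabilities below are made up, whereas a real evaluation would take them from the model:

```python
import math

def perplexity(token_probs: list[float]) -> float:
    """Exponential of the average negative log-likelihood per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical probabilities a model assigned to each actual next token.
print(perplexity([0.2, 0.5, 0.9, 0.05]))   # poorer predictions -> higher perplexity (~3.9)
print(perplexity([0.6, 0.7, 0.9, 0.8]))    # better predictions -> lower perplexity (~1.35)
```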

Because language models may overfit to training data, models are usually evaluated by their perplexity on a test set.[54] This evaluation is potentially problematic for larger models which, as they are trained on increasingly large corpora of text, are increasingly likely to inadvertently include portions of any given test set.[141]

Measures


In information theory, the concept of entropy is intricately linked to perplexity, a relationship notably established by Claude Shannon.[142] This relationship is mathematically expressed as Entropy = log₂(perplexity).

Entropy, in this context, is commonly quantified in terms of bits per word (BPW) or bits per character (BPC), which hinges on whether the language model utilizes word-based or character-based tokenization.

Notably, in the case of larger language models that predominantly employ sub-word tokenization, bits per token (BPT) emerges as a seemingly more appropriate measure. However, due to the variance in tokenization methods across different Large Language Models (LLMs), BPT does not serve as a reliable metric for comparative analysis among diverse models. To convert BPT into BPW, one can multiply it by the average number of tokens per word.

In the evaluation and comparison of language models, cross-entropy is generally the preferred metric over entropy. The underlying principle is that a lower BPW is indicative of a model's enhanced capability for compression. This, in turn, reflects the model's proficiency in making accurate predictions.

Due to their ability to accurately predict the next token, LLMs are highly capable in lossless compression. A 2023 study by DeepMind showed that the model Chinchilla, despite being trained primarily on text, was able to compress ImageNet to 43% of its size, beating PNG with 58%.[143]

Benchmarks


Benchmarks are used to evaluate LLM performance on specific tasks. Tests evaluate capabilities such as general knowledge, bias, commonsense reasoning, question answering, and mathematical problem-solving. Composite benchmarks examine multiple capabilities. Results are often sensitive to the prompting method.[144][145]

A question answering benchmark is termed "open book" if the model's prompt includes text from which the expected answer can be derived (for example, the previous question could be combined with text that includes the sentence "The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016."[146]). Otherwise, the task is considered "closed book", and the model must draw solely on its training.[147] Examples include GLUE, SuperGLUE, MMLU, BIG-bench, HELM, and HLE (Humanity's Last Exam).[142][147]

LLM bias may be assessed through benchmarks such as CrowS-Pairs (Crowdsourced Stereotype Pairs),[148] Stereo Set,[149] and Parity Benchmark.[150]

Fact-checking and misinformation detection benchmarks are available. A 2023 study compared the fact-checking accuracy of LLMs including ChatGPT 3.5 and 4.0, Bard, and Bing AI against independent fact-checkers such as PolitiFact and Snopes. The results demonstrated moderate proficiency, with GPT-4 achieving the highest accuracy at 71%, lagging behind human fact-checkers.[151]

An earlier standard practice was to test on a held-out portion of the evaluation dataset after fine-tuning on the remainder. It has since become more common to evaluate a pre-trained model directly through prompting techniques. Researchers vary in how they formulate prompts for particular tasks, particularly with respect to the number of correct examples attached to the prompt (i.e. the value of n in n-shot prompting).

Datasets


Typical datasets consist of pairs of questions and correct answers, for example, ("Have the San Jose Sharks won the Stanley Cup?", "No").[146] Some examples of commonly used question answering datasets include TruthfulQA, Web Questions, TriviaQA, and SQuAD.[147]

Evaluation datasets may also take the form of text completion, having the model select the most likely word or sentence to complete a prompt, for example: "Alice was friends with Bob. Alice went to visit her friend, ____".[141]

Datasets are of varying quality and may contain questions that are mislabeled, ambiguous, unanswerable, or otherwise of low-quality.[152]

Adversarial evaluations


LLMs' rapid improvement regularly renders benchmarks obsolete, with the models exceeding the performance of human annotators.[153] In addition, "shortcut learning" allows AIs to "cheat" on multiple-choice tests by using statistical correlations in superficial test question wording to guess the correct responses, without considering the specific question.[127]

Some datasets are adversarial, focusing on problems that confound LLMs. One example is the TruthfulQA dataset, a question answering dataset consisting of 817 questions that stump LLMs by mimicking falsehoods to which they were exposed during training. For example, an LLM may answer "No" to the question "Can you teach an old dog new tricks?" because of its exposure to the English idiom you can't teach an old dog new tricks, even though this is not literally true.[154]

Another example of an adversarial evaluation dataset is Swag and its successor, HellaSwag, collections of problems in which one of multiple options must be selected to complete a text passage. The incorrect completions were generated by sampling from a language model. The resulting problems are trivial for humans but defeated LLMs. Sample questions:

We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. The man...

  1. demonstrates how to increase efficient exercise work by running up and down balls.
  2. moves all his arms and legs and builds up a lot of muscle.
  3. then plays the ball and we see a graphics and hedge trimming demonstration.
  4. performs sit ups while on the ball and talking.[155]

BERT selects 2) as the most likely completion, though the correct answer is 4).[155]

Ethical issues


In 2023, Nature Biomedical Engineering wrote that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time."[156] Goldman Sachs suggested in 2023 that generative language AI could increase global GDP by 7% in the next ten years, and could expose to automation 300 million jobs globally.[157][158] Brinkmann et al. (2023)[159] also argue that LLMs are transforming processes of cultural evolution by shaping processes of variation, transmission, and selection.

Memorization

Memorization is an emergent behavior in LLMs in which long strings of text are occasionally output verbatim from training data, contrary to typical behavior of traditional artificial neural networks. Evaluations of controlled LLM output measure the amount memorized from training data (focused on GPT-2-series models) as variously over 1% for exact duplicates[160] or up to about 7%.[161]

A 2023 study showed that when ChatGPT 3.5 turbo was prompted to repeat the same word indefinitely, after a few hundred repetitions, it would start outputting excerpts from its training data.[162]

Security


Some commenters expressed concern over accidental or deliberate creation of misinformation, or other forms of misuse.[163] For example, the availability of large language models could reduce the skill-level required to commit bioterrorism; biosecurity researcher Kevin Esvelt has suggested that LLM creators should exclude from their training data papers on creating or enhancing pathogens.[164]

Researchers from Anthropic found that it was possible to create "sleeper agents", models with hidden functionalities that remain dormant until triggered by a specific event or condition. Upon activation, the LLM deviates from its expected behavior to make insecure actions. For example, a LLM could produce safe code except on a specific date, or if the prompt contains a specific tag. These functionalities were found to be difficult to detect or remove via safety training.[165]

LLM applications accessible to the public, like ChatGPT or Claude, typically incorporate safety measures designed to filter out harmful content. However, implementing these controls effectively has proven challenging. For instance, a 2023 study[166] proposed a method for circumventing LLM safety systems. In 2025, The American Sunlight Project, a non-profit, published a study[167] showing evidence that the so-called Pravda network, a pro-Russia propaganda aggregator, was strategically placing web content through mass publication and duplication with the intention of biasing LLM outputs. The American Sunlight Project coined this technique "LLM grooming", and pointed to it as a new tool of weaponizing AI to spread disinformation and harmful content.[167][168] Similarly, Yongge Wang[169] illustrated in 2024 how a potential criminal could bypass ChatGPT 4o's safety controls to obtain information on establishing a drug trafficking operation. External filters, circuit breakers and overrides have been proposed as solutions.[citation needed]

In recent years, the use of LLMs has gained increasing attention in the cybersecurity domain. Several research efforts have explored the potential of LLM-powered systems to detect and respond to cyberattacks, particularly in complex or dynamic environments where traditional rule-based approaches may fall short. Recently, novel methodologies have been proposed that leverage LLMs for tasks such as anomaly detection, phishing recognition, and threat classification.[170]

Prompt injection


A problem with the primitive dialog or task format is that users can create messages that appear to come from the assistant or the developer. This may result in some of the model's safeguards being overcome (jailbreaking), a problem called prompt injection. Attempts to remedy this issue include versions of the Chat Markup Language where user input is clearly marked as such, though it is still up to the model to understand the separation between user input and developer prompts.[171] Newer models exhibit some resistance to jailbreaking through separation of user and system prompts.[172]

LLMs still have trouble differentiating user instructions from instructions in content not authored by the user, such as in web pages and uploaded files.[173]

Algorithmic bias


While LLMs have shown remarkable capabilities in generating human-like text, they are susceptible to inheriting and amplifying biases present in their training data. This can manifest in skewed representations or unfair treatment of different demographics, such as those based on race, gender, language, and cultural groups.[174] Since English data is overrepresented in current large language models' training data, it may also downplay non-English views.[175]

Stereotyping


AI models can reinforce a wide range of stereotypes, including those based on gender, ethnicity, age, nationality, religion, or occupation. This can lead to outputs that homogenize, or unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.[176][177]

Notably, gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. Large language models often assign roles and characteristics based on traditional gender norms.[174] For example, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men.[178]

Selection bias


Selection bias refers to the inherent tendency of large language models to favor certain option identifiers irrespective of the actual content of the options. This bias primarily stems from token bias—that is, the model assigns a higher a priori probability to specific answer tokens (such as "A") when generating responses. As a result, when the ordering of options is altered (for example, by systematically moving the correct answer to different positions), the model’s performance can fluctuate significantly. This phenomenon undermines the reliability of large language models in multiple-choice settings.[179][180]

Political bias


Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.[181]

Energy demands

The energy demands of LLMs have grown along with their size and capabilities. Data centers that enable LLM training require substantial amounts of electricity, much of which is generated by non-renewable resources that emit greenhouse gases and contribute to climate change.[182] Nuclear power and geothermal energy are two options tech companies are exploring to meet the sizable energy demands of LLM training.[183] The significant expense of geothermal solutions has led major shale producers such as Chevron and Exxon Mobil to advocate that tech companies use electricity produced from natural gas to meet their large energy demands.[184]

Cognitive impact

In 2025, a preliminary study measuring the effects of using LLMs to write essays reported a decline in neural and linguistic performance among ChatGPT users over the course of several months.[185]

See also

References

  1. ^ a b c Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (Dec 2020). Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.F.; Lin, H. (eds.). "Language Models are Few-Shot Learners" (PDF). Advances in Neural Information Processing Systems. 33. Curran Associates, Inc.: 1877–1901. arXiv:2005.14165. Archived (PDF) from the original on 2025-08-04. Retrieved 2025-08-04.
  2. ^ Fathallah, Nadeen; Das, Arunav; De Giorgis, Stefano; Poltronieri, Andrea; Haase, Peter; Kovriguina, Liubov (2025-08-04). NeOn-GPT: A Large Language Model-Powered Pipeline for Ontology Learning (PDF). Extended Semantic Web Conference 2024. Hersonissos, Greece.
  3. ^ Manning, Christopher D. (2022). "Human Language Understanding & Reasoning". Daedalus. 151 (2): 127–138. doi:10.1162/daed_a_01905. S2CID 248377870. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  4. ^ Goodman, Joshua (2025-08-04), A Bit of Progress in Language Modeling, arXiv:cs/0108005, Bibcode:2001cs........8005G
  5. ^ Kilgarriff, Adam; Grefenstette, Gregory (September 2003). "Introduction to the Special Issue on the Web as Corpus". Computational Linguistics. 29 (3): 333–347. doi:10.1162/089120103322711569. ISSN 0891-2017.
  6. ^ Banko, Michele; Brill, Eric (2001). "Scaling to very very large corpora for natural language disambiguation". Proceedings of the 39th Annual Meeting on Association for Computational Linguistics - ACL '01. Morristown, NJ, USA: Association for Computational Linguistics: 26–33. doi:10.3115/1073012.1073017.
  7. ^ Resnik, Philip; Smith, Noah A. (September 2003). "The Web as a Parallel Corpus". Computational Linguistics. 29 (3): 349–380. doi:10.1162/089120103322711578. ISSN 0891-2017. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  8. ^ Xu, Wei; Rudnicky, Alex (2025-08-04). "Can artificial neural networks learn language models?". 6th International Conference on Spoken Language Processing (ICSLP 2000). ISCA: vol. 1, 202–205. doi:10.21437/icslp.2000-50.
  9. ^ Chen, Leiyu; Li, Shaobo; Bai, Qiang; Yang, Jing; Jiang, Sanlong; Miao, Yanming (2021). "Review of Image Classification Algorithms Based on Convolutional Neural Networks". Remote Sensing. 13 (22): 4712. Bibcode:2021RemS...13.4712C. doi:10.3390/rs13224712.
  10. ^ Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N; Kaiser, Łukasz; Polosukhin, Illia (2017). "Attention is All you Need" (PDF). Advances in Neural Information Processing Systems. 30. Curran Associates, Inc. Archived (PDF) from the original on 2025-08-04. Retrieved 2025-08-04.
  11. ^ Bahdanau, Dzmitry; Cho, Kyunghyun; Bengio, Yoshua (2014). "Neural Machine Translation by Jointly Learning to Align and Translate". arXiv:1409.0473 [cs.CL].
  12. ^ Rogers, Anna; Kovaleva, Olga; Rumshisky, Anna (2020). "A Primer in BERTology: What We Know About How BERT Works". Transactions of the Association for Computational Linguistics. 8: 842–866. arXiv:2002.12327. doi:10.1162/tacl_a_00349. S2CID 211532403. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  13. ^ a b Movva, Rajiv; Balachandar, Sidhika; Peng, Kenny; Agostini, Gabriel; Garg, Nikhil; Pierson, Emma (2024). "Topics, Authors, and Institutions in Large Language Model Research: Trends from 17K arXiv Papers". Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). pp. 1223–1243. arXiv:2307.10700. doi:10.18653/v1/2024.naacl-long.67. Retrieved 2025-08-04.
  14. ^ Hern, Alex (14 February 2019). "New AI fake text generator may be too dangerous to release, say creators". The Guardian. Archived from the original on 14 February 2019. Retrieved 20 January 2024.
  15. ^ "ChatGPT a year on: 3 ways the AI chatbot has completely changed the world in 12 months". Euronews. November 30, 2023. Archived from the original on January 14, 2024. Retrieved January 20, 2024.
  16. ^ Heaven, Will (March 14, 2023). "GPT-4 is bigger and better than ChatGPT—but OpenAI won't say why". MIT Technology Review. Archived from the original on March 17, 2023. Retrieved January 20, 2024.
  17. ^ Metz, Cade (September 12, 2024). "OpenAI Unveils New ChatGPT That Can Reason Through Math and Science". The New York Times. Retrieved September 12, 2024.
  18. ^ "Parameters in notable artificial intelligence systems". ourworldindata.org. November 30, 2023. Retrieved January 20, 2024.
  19. ^ Sharma, Shubham (2025-08-04). "Open-source DeepSeek-R1 uses pure reinforcement learning to match OpenAI o1 — at 95% less cost". VentureBeat. Retrieved 2025-08-04.
  20. ^ Zia, Dr Tehseen (2025-08-04). "Unveiling of Large Multimodal Models: Shaping the Landscape of Language Models in 2024". Unite.AI. Retrieved 2025-08-04.
  21. ^ Peng, Bo; et al. (2023). "RWKV: Reinventing RNNs for the Transformer Era". arXiv:2305.13048 [cs.CL].
  22. ^ Merritt, Rick (2025-08-04). "What Is a Transformer Model?". NVIDIA Blog. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  23. ^ Gu, Albert; Dao, Tri (2025-08-04), Mamba: Linear-Time Sequence Modeling with Selective State Spaces, arXiv:2312.00752
  24. ^ Kaushal, Ayush; Mahowald, Kyle (2025-08-04), What do tokens know about their characters and how do they know it?, arXiv:2206.02608
  25. ^ Yennie Jun (2025-08-04). "All languages are NOT created (tokenized) equal". Language models cost much more in some languages than others. Archived from the original on 2025-08-04. Retrieved 2025-08-04. In other words, to express the same sentiment, some languages require up to 10 times more tokens.
  26. ^ a b Petrov, Aleksandar; Malfa, Emanuele La; Torr, Philip; Bibi, Adel (June 23, 2023). "Language Model Tokenizers Introduce Unfairness Between Languages". NeurIPS. arXiv:2305.15425. Archived from the original on December 15, 2023. Retrieved September 16, 2023 – via openreview.net.
  27. ^ "OpenAI API". platform.openai.com. Archived from the original on April 23, 2023. Retrieved 2025-08-04.
  28. ^ a b Paaß, Gerhard; Giesselbach, Sven (2022). "Pre-trained Language Models". Foundation Models for Natural Language Processing. Artificial Intelligence: Foundations, Theory, and Algorithms. pp. 19–78. doi:10.1007/978-3-031-23190-2_2. ISBN 9783031231902.
  29. ^ Lundberg, Scott (2025-08-04). "The Art of Prompt Design: Prompt Boundaries and Token Healing". Medium. Retrieved 2025-08-04.
  30. ^ Dodge, Jesse; Sap, Maarten; Marasović, Ana; Agnew, William; Ilharco, Gabriel; Groeneveld, Dirk; Mitchell, Margaret; Gardner, Matt (2021). "Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus". arXiv:2104.08758 [cs.CL].
  31. ^ Lee, Katherine; Ippolito, Daphne; Nystrom, Andrew; Zhang, Chiyuan; Eck, Douglas; Callison-Burch, Chris; Carlini, Nicholas (May 2022). "Deduplicating Training Data Makes Language Models Better" (PDF). Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 1: Long Papers: 8424–8445. doi:10.18653/v1/2022.acl-long.577.
  32. ^ Li, Yuanzhi; Bubeck, Sébastien; Eldan, Ronen; Del Giorno, Allie; Gunasekar, Suriya; Lee, Yin Tat (2025-08-04), Textbooks Are All You Need II: phi-1.5 technical report, arXiv:2309.05463
  33. ^ Lin, Zhenghao; Gou, Zhibin; Gong, Yeyun; Liu, Xiao; Shen, Yelong; Xu, Ruochen; Lin, Chen; Yang, Yujiu; Jiao, Jian (2025-08-04). "Rho-1: Not All Tokens Are What You Need". arXiv:2404.07965 [cs.CL].
  34. ^ Abdin, Marah; Jacobs, Sam Ade; Awan, Ammar Ahmad; Aneja, Jyoti; Awadallah, Ahmed; Awadalla, Hany; Bach, Nguyen; Bahree, Amit; Bakhtiari, Arash (2025-08-04). "Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone". arXiv:2404.14219 [cs.CL].
  35. ^ Ouyang, Long; Wu, Jeff; et al. (2025-08-04). "Training language models to follow instructions with human feedback". arXiv:2203.02155 [cs.CL].
  36. ^ OpenAI (2025-08-04). "Aligning language models to follow instructions". OpenAI. Retrieved 2025-08-04.
  37. ^ "Foundation Models And LLMs: 19 Real-World, Practical Use Cases". Forbes. 2025-08-04. Retrieved 2025-08-04.
  38. ^ "7 Steps to Mastering Large Language Model Fine-tuning". KDnuggets. Retrieved 2025-08-04.
  39. ^ "From bare metal to a 70B model: infrastructure set-up and scripts". imbue.com. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  40. ^ "metaseq/projects/OPT/chronicles at main · facebookresearch/metaseq". GitHub. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  41. ^ Albrecht, Josh (2025-08-04). "State of the Art: Training >70B LLMs on 10,000 H100 clusters". www.latent.space. Retrieved 2025-08-04.
  42. ^ Maslej, Nestor; Fattorini, Loredana; Brynjolfsson, Erik; Etchemendy, John; Ligett, Katrina; Lyons, Terah; Manyika, James; Ngo, Helen; Niebles, Juan Carlos (2025-08-04), Artificial Intelligence Index Report 2023, arXiv:2310.03715
  43. ^ a b Kaplan, Jared; McCandlish, Sam; Henighan, Tom; Brown, Tom B.; Chess, Benjamin; Child, Rewon; Gray, Scott; Radford, Alec; Wu, Jeffrey; Amodei, Dario (2020). "Scaling Laws for Neural Language Models". arXiv:2001.08361 [cs.LG].
  44. ^ a b c d e Ouyang, Long; Wu, Jeff; Jiang, Xu; Almeida, Diogo; Wainwright, Carroll L.; Mishkin, Pamela; Zhang, Chong; Agarwal, Sandhini; Slama, Katarina; Ray, Alex; Schulman, John; Hilton, Jacob; Kelton, Fraser; Miller, Luke; Simens, Maddie; Askell, Amanda; Welinder, Peter; Christiano, Paul; Leike, Jan; Lowe, Ryan (2022). "Training language models to follow instructions with human feedback". arXiv:2203.02155 [cs.CL].
  45. ^ Edwards, Benj (2025-08-04). "AI gains "values" with Anthropic's new Constitutional AI chatbot approach". Ars Technica. Retrieved 2025-08-04.
  46. ^ Wei, Jason; Bosma, Maarten; Zhao, Vincent Y.; Guu, Kelvin; Yu, Adams Wei; Lester, Brian; Du, Nan; Dai, Andrew M.; Le, Quoc V. (2025-08-04). "Finetuned Language Models Are Zero-Shot Learners". arXiv:2109.01652 [cs.CL].
  47. ^ "A Deep Dive Into the Transformer Architecture – The Development of Transformer Models". KDnuggets. 2025-08-04. Retrieved 2025-08-04.
  48. ^ Allamar, Jay. "Illustrated transformer". Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  49. ^ Allamar, Jay. "The Illustrated GPT-2 (Visualizing Transformer Language Models)". Retrieved 2025-08-04.
  50. ^ "Our next-generation model: Gemini 1.5". Google. 15 February 2024. Archived from the original on 18 February 2024. Retrieved 18 February 2024.
  51. ^ "Long context prompting for Claude 2.1". December 6, 2023. Archived from the original on August 27, 2024. Retrieved January 20, 2024.
  52. ^ "Rate limits". openai.com. Archived from the original on February 2, 2024. Retrieved January 20, 2024.
  53. ^ Zaib, Munazza; Sheng, Quan Z.; Emma Zhang, Wei (4 February 2020). "A Short Survey of Pre-trained Language Models for Conversational AI-A New Age in NLP". Proceedings of the Australasian Computer Science Week Multiconference. pp. 1–4. arXiv:2104.10810. doi:10.1145/3373017.3373028. ISBN 9781450376976. S2CID 211040895.
  54. ^ a b c Jurafsky, Dan; Martin, James H. (7 January 2023). Speech and Language Processing (PDF) (3rd edition draft ed.). Archived (PDF) from the original on 23 March 2023. Retrieved 24 May 2022.
  55. ^ Shazeer, Noam; Mirhoseini, Azalia; Maziarz, Krzysztof; Davis, Andy; Le, Quoc; Hinton, Geoffrey; Dean, Jeff (2025-08-04). "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer". arXiv:1701.06538 [cs.LG].
  56. ^ Lepikhin, Dmitry; Lee, HyoukJoong; Xu, Yuanzhong; Chen, Dehao; Firat, Orhan; Huang, Yanping; Krikun, Maxim; Shazeer, Noam; Chen, Zhifeng (2025-08-04). "GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding". arXiv:2006.16668 [cs.CL].
  57. ^ Dai, Andrew M; Du, Nan (December 9, 2021). "More Efficient In-Context Learning with GLaM". ai.googleblog.com. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  58. ^ Mann, Tobias. "How to run an LLM locally on your PC in less than 10 minutes". www.theregister.com. Retrieved 2025-08-04.
  59. ^ Nagel, Markus; Amjad, Rana Ali; Baalen, Mart Van; Louizos, Christos; Blankevoort, Tijmen (2025-08-04). "Up or Down? Adaptive Rounding for Post-Training Quantization". Proceedings of the 37th International Conference on Machine Learning. PMLR: 7197–7206. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  60. ^ Polino, Antonio; Pascanu, Razvan; Alistarh, Dan (2025-08-04). "Model compression via distillation and quantization". arXiv:1802.05668 [cs.NE].
  61. ^ Frantar, Elias; Ashkboos, Saleh; Hoefler, Torsten; Alistarh, Dan (2025-08-04). "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers". arXiv:2210.17323 [cs.LG].
  62. ^ Grootendorst, Maarten. "A Visual Guide to Quantization". newsletter.maartengrootendorst.com. Archived from the original on 31 Jul 2024. Retrieved 2025-08-04.
  63. ^ Dettmers, Tim; Pagnoni, Artidoro; Holtzman, Ari; Zettlemoyer, Luke (2025-08-04). "QLoRA: Efficient Finetuning of Quantized LLMs". arXiv:2305.14314 [cs.LG].
  64. ^ "Learned Quantization in ml-explore mlx-lm". GitHub.
  65. ^ "What is quantization aware training?". IBM.com. 15 May 2025.
  66. ^ Dettmers, Tim; Svirschevski, Ruslan; Egiazarian, Vage; Kuznedelev, Denis; Frantar, Elias; Ashkboos, Saleh; Borzunov, Alexander; Hoefler, Torsten; Alistarh, Dan (2025-08-04). "SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression". arXiv:2306.03078 [cs.CL].
  67. ^ "Unsloth Dynamic 2.0 GGUFs".
  68. ^ Wang, Yizhong; Kordi, Yeganeh; Mishra, Swaroop; Liu, Alisa; Smith, Noah A.; Khashabi, Daniel; Hajishirzi, Hannaneh (2022). "Self-Instruct: Aligning Language Model with Self Generated Instructions". arXiv:2212.10560 [cs.CL].
  69. ^ "Introducing ChatGPT". openai.com. 13 March 2024.
  70. ^ "OpenAI Platform". platform.openai.com.
  71. ^ "Giving Claude a role with a system prompt". Anthropic.
  72. ^ Lewis, Patrick; Perez, Ethan; Piktus, Aleksandra; Petroni, Fabio; Karpukhin, Vladimir; Goyal, Naman; Küttler, Heinrich; Lewis, Mike; Yih, Wen-tau; Rocktäschel, Tim; Riedel, Sebastian; Kiela, Douwe (2020). "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks". Advances in Neural Information Processing Systems. 33. Curran Associates, Inc.: 9459–9474. arXiv:2005.11401. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  73. ^ Dickson, Ben (2025-08-04). "The tool integration problem that's holding back enterprise AI (and how CoTools solves it)". VentureBeat. Retrieved 2025-08-04.
  74. ^ Liang, Yaobo; Wu, Chenfei; Song, Ting; Wu, Wenshan; Xia, Yan; Liu, Yu; Ou, Yang; Lu, Shuai; Ji, Lei; Mao, Shaoguang; Wang, Yun; Shou, Linjun; Gong, Ming; Duan, Nan (2025-08-04). "TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs". arXiv:2303.16434 [cs.AI].
  75. ^ Patil, Shishir G.; Zhang, Tianjun; Wang, Xin; Gonzalez, Joseph E. (2025-08-04). "Gorilla: Large Language Model Connected with Massive APIs". arXiv:2305.15334 [cs.CL].
  76. ^ "ChatGPT-AutoExpert/_system-prompts/all_tools.md at 835baae768870aa9747663c24d8216820d24fd74 · spdustin/ChatGPT-AutoExpert". GitHub.
  77. ^ "Core Concepts: Long-term Memory in LLM Applications". langchain-ai.github.io.
  78. ^ Wang, Lei; Ma, Chen; Feng, Xueyang; Zhang, Zeyu; Yang, Hao; Zhang, Jingsen; Chen, Zhiyuan; Tang, Jiakai; Chen, Xu; Lin, Yankai; Zhao, Wayne Xin; Wei, Zhewei; Wen, Jirong (December 2024). "A survey on large language model based autonomous agents". Frontiers of Computer Science. 18 (6) 186345. arXiv:2308.11432. doi:10.1007/s11704-024-40231-1.
  79. ^ Yao, Shunyu; Zhao, Jeffrey; Yu, Dian; Du, Nan; Shafran, Izhak; Narasimhan, Karthik; Cao, Yuan (2025-08-04). "ReAct: Synergizing Reasoning and Acting in Language Models". arXiv:2210.03629 [cs.CL].
  80. ^ Wang, Zihao; Cai, Shaofei; Liu, Anji; Ma, Xiaojian; Liang, Yitao (2025-08-04). "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents". arXiv:2302.01560 [cs.AI].
  81. ^ a b Shinn, Noah; Cassano, Federico; Labash, Beck; Gopinath, Ashwin; Narasimhan, Karthik; Yao, Shunyu (2025-08-04). "Reflexion: Language Agents with Verbal Reinforcement Learning". arXiv:2303.11366 [cs.AI].
  82. ^ Hao, Shibo; Gu, Yi; Ma, Haodi; Jiahua Hong, Joshua; Wang, Zhen; Zhe Wang, Daisy; Hu, Zhiting (2025-08-04). "Reasoning with Language Model is Planning with World Model". arXiv:2305.14992 [cs.CL].
  83. ^ Zhang, Jenny; Lehman, Joel; Stanley, Kenneth; Clune, Jeff (2 June 2023). "OMNI: Open-endedness via Models of human Notions of Interestingness". arXiv:2306.01711 [cs.AI].
  84. ^ a b "Voyager | An Open-Ended Embodied Agent with Large Language Models". voyager.minedojo.org. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  85. ^ Park, Joon Sung; O'Brien, Joseph C.; Cai, Carrie J.; Ringel Morris, Meredith; Liang, Percy; Bernstein, Michael S. (2025-08-04). "Generative Agents: Interactive Simulacra of Human Behavior". arXiv:2304.03442 [cs.HC].
  86. ^ a b Wei, Jason; Wang, Xuezhi; Schuurmans, Dale; Bosma, Maarten; Ichter, Brian; Xia, Fei; Chi, Ed; Le, Quoc; Zhou, Denny (2025-08-04), Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, arXiv:2201.11903
  87. ^ Wu, Tongshuang; Jiang, Ellen; Donsbach, Aaron; Gray, Jeff; Molina, Alejandra; Terry, Michael; Cai, Carrie J. (2025-08-04), PromptChainer: Chaining Large Language Model Prompts through Visual Programming, arXiv:2203.06566
  88. ^ "What is prompt chaining?". IBM. 23 April 2024.
  89. ^ "What is chain of thought (CoT) prompting?". IBM. 23 April 2025.
  90. ^ Schreiner, Maximilian (2025-08-04). "Deeper insights into AI language models - chain of thought prompting as a success factor". The Decoder. Retrieved 2025-08-04.
  91. ^ a b "Introducing OpenAI o1-preview". OpenAI. 2025-08-04. Retrieved 2025-08-04.
  92. ^ a b Metz, Cade (2025-08-04). "OpenAI Unveils New A.I. That Can 'Reason' Through Math and Science Problems". The New York Times. Retrieved 2025-08-04.
  93. ^ Gibney, Elizabeth (2025-08-04). "China's cheap, open AI model DeepSeek thrills scientists". Nature. Retrieved 2025-08-04.
  94. ^ Kiros, Ryan; Salakhutdinov, Ruslan; Zemel, Rich (2025-08-04). "Multimodal Neural Language Models". Proceedings of the 31st International Conference on Machine Learning. PMLR: 595–603. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  95. ^ Driess, Danny; Xia, Fei; Sajjadi, Mehdi S. M.; Lynch, Corey; Chowdhery, Aakanksha; Ichter, Brian; Wahid, Ayzaan; Tompson, Jonathan; Vuong, Quan; Yu, Tianhe; Huang, Wenlong; Chebotar, Yevgen; Sermanet, Pierre; Duckworth, Daniel; Levine, Sergey (2025-08-04). "PaLM-E: An Embodied Multimodal Language Model". arXiv:2303.03378 [cs.LG].
  96. ^ Liu, Haotian; Li, Chunyuan; Wu, Qingyang; Lee, Yong Jae (2025-08-04). "Visual Instruction Tuning". arXiv:2304.08485 [cs.CV].
  97. ^ Zhang, Hang; Li, Xin; Bing, Lidong (2025-08-04). "Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding". arXiv:2306.02858 [cs.CL].
  98. ^ "OpenAI says natively multimodal GPT-4o eats text, visuals, sound – and emits the same". The Register. 2025-08-04.
  99. ^ Zia, Dr Tehseen (2025-08-04). "Unveiling of Large Multimodal Models: Shaping the Landscape of Language Models in 2024". Unite.AI. Retrieved 2025-08-04.
  100. ^ Li, Junnan; Li, Dongxu; Savarese, Silvio; Hoi, Steven (2025-08-04). "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models". arXiv:2301.12597 [cs.CV].
  101. ^ Alayrac, Jean-Baptiste; Donahue, Jeff; Luc, Pauline; Miech, Antoine; Barr, Iain; Hasson, Yana; Lenc, Karel; Mensch, Arthur; Millican, Katherine; Reynolds, Malcolm; Ring, Roman; Rutherford, Eliza; Cabi, Serkan; Han, Tengda; Gong, Zhitao (2025-08-04). "Flamingo: a Visual Language Model for Few-Shot Learning". Advances in Neural Information Processing Systems. 35: 23716–23736. arXiv:2204.14198. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  102. ^ Finnie-Ansley, James; Denny, Paul; Becker, Brett A.; Luxton-Reilly, Andrew; Prather, James (14 February 2022). "The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming". Australasian Computing Education Conference. ACE '22. New York, NY, USA: Association for Computing Machinery. pp. 10–19. doi:10.1145/3511861.3511863. ISBN 978-1-4503-9643-1. S2CID 246681316.
  103. ^ Husein, Rasha Ahmad; Aburajouh, Hala; Catal, Cagatay (March 2025). "Large language models for code completion: A systematic literature review". Computer Standards & Interfaces. 92 103917. doi:10.1016/j.csi.2024.103917.
  104. ^ Weissenow, Konstantin; Rost, Burkhard (April 2025). "Are protein language models the new universal key?". Current Opinion in Structural Biology. 91 102997. doi:10.1016/j.sbi.2025.102997. PMID 39921962.
  105. ^ Lin, Zeming; Akin, Halil; Rao, Roshan; Hie, Brian; Zhu, Zhongkai; Lu, Wenting; Smetanin, Nikita; Verkuil, Robert; Kabeli, Ori; Shmueli, Yaniv; dos Santos Costa, Allan; Fazel-Zarandi, Maryam; Sercu, Tom; Candido, Salvatore; Rives, Alexander (17 March 2023). "Evolutionary-scale prediction of atomic-level protein structure with a language model". Science. 379 (6637): 1123–1130. Bibcode:2023Sci...379.1123L. bioRxiv 10.1101/2022.07.20.500902. doi:10.1126/science.ade2574. PMID 36927031.
  106. ^ "ESM Metagenomic Atlas | Meta AI". esmatlas.com.
  107. ^ Hayes, Thomas; Rao, Roshan; Akin, Halil; Sofroniew, Nicholas J.; Oktay, Deniz; Lin, Zeming; Verkuil, Robert; Tran, Vincent Q.; Deaton, Jonathan; Wiggert, Marius; Badkundri, Rohil; Shafkat, Irhum; Gong, Jun; Derry, Alexander; Molina, Raul S.; Thomas, Neil; Khan, Yousuf A.; Mishra, Chetan; Kim, Carolyn; Bartie, Liam J.; Nemeth, Matthew; Hsu, Patrick D.; Sercu, Tom; Candido, Salvatore; Rives, Alexander (21 February 2025). "Simulating 500 million years of evolution with a language model". Science. 387 (6736): 850–858. Bibcode:2025Sci...387..850H. doi:10.1126/science.ads0018. PMID 39818825.
  108. ^ Fishman, Veniamin; Kuratov, Yuri; Shmelev, Aleksei; Petrov, Maxim; Penzar, Dmitry; Shepelin, Denis; Chekanov, Nikolay; Kardymon, Olga; Burtsev, Mikhail (11 January 2025). "GENA-LM: a family of open-source foundational DNA language models for long sequences". Nucleic Acids Research. 53 (2): gkae1310. doi:10.1093/nar/gkae1310. PMC 11734698. PMID 39817513.
  109. ^ Wang, Ning; Bian, Jiang; Li, Yuchen; Li, Xuhong; Mumtaz, Shahid; Kong, Linghe; Xiong, Haoyi (13 May 2024). "Multi-purpose RNA language modelling with motif-aware pretraining and type-guided fine-tuning". Nature Machine Intelligence. 6 (5): 548–557. doi:10.1038/s42256-024-00836-4.
  110. ^ Hoffmann, Jordan; Borgeaud, Sebastian; Mensch, Arthur; Buchatskaya, Elena; Cai, Trevor; Rutherford, Eliza; Casas, Diego de Las; Hendricks, Lisa Anne; Welbl, Johannes; Clark, Aidan; Hennigan, Tom; Noland, Eric; Millican, Katie; Driessche, George van den; Damoc, Bogdan (2025-08-04). "Training Compute-Optimal Large Language Models". arXiv:2203.15556 [cs.CL].
  111. ^ a b Caballero, Ethan; Gupta, Kshitij; Rish, Irina; Krueger, David (2022). "Broken Neural Scaling Laws". arXiv:2210.14891 [cs.LG].
  112. ^ a b Wei, Jason; Tay, Yi; Bommasani, Rishi; Raffel, Colin; Zoph, Barret; Borgeaud, Sebastian; Yogatama, Dani; Bosma, Maarten; Zhou, Denny; Metzler, Donald; Chi, Ed H.; Hashimoto, Tatsunori; Vinyals, Oriol; Liang, Percy; Dean, Jeff; Fedus, William (31 August 2022). "Emergent Abilities of Large Language Models". Transactions on Machine Learning Research. ISSN 2835-8856. Archived from the original on 22 March 2023. Retrieved 19 March 2023.
  113. ^ "137 emergent abilities of large language models". Jason Wei. Retrieved 2025-08-04.
  114. ^ Bowman, Samuel R. (2023). "Eight Things to Know about Large Language Models". arXiv:2304.00612 [cs.CL].
  115. ^ Mukherjee, Anirban; Chang, Hannah (2024). "Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption". arXiv:2403.09404 [cs.AI].
  116. ^ Hahn, Michael; Goyal, Navin (2025-08-04). "A Theory of Emergent In-Context Learning as Implicit Structure Induction". arXiv:2303.07971 [cs.LG].
  117. ^ Pilehvar, Mohammad Taher; Camacho-Collados, Jose (June 2019). "Proceedings of the 2019 Conference of the North". Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minneapolis, Minnesota: Association for Computational Linguistics: 1267–1273. doi:10.18653/v1/N19-1128. S2CID 102353817. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  118. ^ "WiC: The Word-in-Context Dataset". pilehvar.github.io. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  119. ^ Patel, Roma; Pavlick, Ellie (2025-08-04). "Mapping Language Models to Grounded Conceptual Spaces". ICLR. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  120. ^ A Closer Look at Large Language Models Emergent Abilities Archived 2025-08-04 at the Wayback Machine (Yao Fu, Nov 20, 2022)
  121. ^ Ornes, Stephen (March 16, 2023). "The Unpredictable Abilities Emerging From Large AI Models". Quanta Magazine. Archived from the original on March 16, 2023. Retrieved March 16, 2023.
  122. ^ Schaeffer, Rylan; Miranda, Brando; Koyejo, Sanmi (2025-08-04). "Are Emergent Abilities of Large Language Models a Mirage?". arXiv:2304.15004 [cs.AI].
  123. ^ Blank, Idan A. (November 2023). "What are large language models supposed to model?". Trends in Cognitive Sciences. 27 (11): 987–989. doi:10.1016/j.tics.2023.08.006. PMID 37659920.
  124. ^ Nanda, Neel; Chan, Lawrence; Lieberum, Tom; Smith, Jess; Steinhardt, Jacob (2025-08-04). "Progress measures for grokking via mechanistic interpretability". arXiv:2301.05217 [cs.LG].
  125. ^ Ananthaswamy, Anil (2025-08-04). "How Do Machines 'Grok' Data?". Quanta Magazine. Retrieved 2025-08-04.
  126. ^ "On the Biology of a Large Language Model". Transformer Circuits. Retrieved 2025-08-04.
  127. ^ a b c d e Mitchell, Melanie; Krakauer, David C. (28 March 2023). "The debate over understanding in AI's large language models". Proceedings of the National Academy of Sciences. 120 (13): e2215907120. arXiv:2210.13966. Bibcode:2023PNAS..12015907M. doi:10.1073/pnas.2215907120. PMC 10068812. PMID 36943882.
  128. ^ Metz, Cade (16 May 2023). "Microsoft Says New A.I. Shows Signs of Human Reasoning". The New York Times.
  129. ^ a b Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712 [cs.CL].
  130. ^ "Anthropic CEO Dario Amodei pens a smart look at our AI future". Fast Company. October 17, 2024.
  131. ^ "ChatGPT is more like an 'alien intelligence' than a human brain, says futurist". ZDNET. 2023. Archived from the original on 12 June 2023. Retrieved 12 June 2023.
  132. ^ a b Newport, Cal (13 April 2023). "What Kind of Mind Does ChatGPT Have?". The New Yorker. Archived from the original on 12 June 2023. Retrieved 12 June 2023.
  133. ^ Roose, Kevin (30 May 2023). "Why an Octopus-like Creature Has Come to Symbolize the State of A.I." The New York Times. Archived from the original on 30 May 2023. Retrieved 12 June 2023.
  134. ^ "The A to Z of Artificial Intelligence". Time Magazine. 13 April 2023. Archived from the original on 16 June 2023. Retrieved 12 June 2023.
  135. ^ Ji, Ziwei; Lee, Nayeon; Frieske, Rita; Yu, Tiezheng; Su, Dan; Xu, Yan; Ishii, Etsuko; Bang, Yejin; Dai, Wenliang; Madotto, Andrea; Fung, Pascale (November 2022). "Survey of Hallucination in Natural Language Generation" (pdf). ACM Computing Surveys. 55 (12). Association for Computing Machinery: 1–38. arXiv:2202.03629. doi:10.1145/3571730. S2CID 246652372. Archived from the original on 26 March 2023. Retrieved 15 January 2023.
  136. ^ Varshney, Neeraj; Yao, Wenlin; Zhang, Hongming; Chen, Jianshu; Yu, Dong (2023). "A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation". arXiv:2307.03987 [cs.CL].
  137. ^ Lin, Belle (2025-08-04). "Why Amazon is Betting on 'Automated Reasoning' to Reduce AI's Hallucinations: The tech giant says an obscure field that combines AI and math can mitigate—but not completely eliminate—AI's propensity to provide wrong answers". Wall Street Journal. ISSN 0099-9660.
  138. ^ Lakoff, George (1999). Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Philosophy; Appendix: The Neural Theory of Language Paradigm. New York Basic Books. pp. 569–583. ISBN 978-0-465-05674-3.
  139. ^ Evans, Vyvyan. (2014). The Language Myth. Cambridge University Press. ISBN 978-1-107-04396-1.
  140. ^ Friston, Karl J. (2022). Active Inference: The Free Energy Principle in Mind, Brain, and Behavior; Chapter 4 The Generative Models of Active Inference. The MIT Press. ISBN 978-0-262-36997-8.
  141. ^ a b Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (Dec 2020). Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.F.; Lin, H. (eds.). "Language Models are Few-Shot Learners" (PDF). Advances in Neural Information Processing Systems. 33. Curran Associates, Inc.: 1877–1901. Archived (PDF) from the original on 2025-08-04. Retrieved 2025-08-04.
  142. ^ a b Huyen, Chip (October 18, 2019). "Evaluation Metrics for Language Modeling". The Gradient. Retrieved January 14, 2024.
  143. ^ Edwards, Benj (2025-08-04). "AI language models can exceed PNG and FLAC in lossless compression, says study". Ars Technica. Retrieved 2025-08-04.
  144. ^ openai/simple-evals, OpenAI, 2025-08-04, retrieved 2025-08-04
  145. ^ openai/evals, OpenAI, 2025-08-04, archived from the original on 2025-08-04, retrieved 2025-08-04
  146. ^ a b Clark, Christopher; Lee, Kenton; Chang, Ming-Wei; Kwiatkowski, Tom; Collins, Michael; Toutanova, Kristina (2019). "BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions". arXiv:1905.10044 [cs.CL].
  147. ^ a b c Wayne Xin Zhao; Zhou, Kun; Li, Junyi; Tang, Tianyi; Wang, Xiaolei; Hou, Yupeng; Min, Yingqian; Zhang, Beichen; Zhang, Junjie; Dong, Zican; Du, Yifan; Yang, Chen; Chen, Yushuo; Chen, Zhipeng; Jiang, Jinhao; Ren, Ruiyang; Li, Yifan; Tang, Xinyu; Liu, Zikang; Liu, Peiyu; Nie, Jian-Yun; Wen, Ji-Rong (2023). "A Survey of Large Language Models". arXiv:2303.18223 [cs.CL].
  148. ^ Nangia, Nikita; Vania, Clara; Bhalerao, Rasika; Bowman, Samuel R. (November 2020). "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models". In Webber, Bonnie; Cohn, Trevor; He, Yulan; Liu, Yang (eds.). Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. pp. 1953–1967. arXiv:2010.00133. doi:10.18653/v1/2020.emnlp-main.154.
  149. ^ Nadeem, Moin; Bethke, Anna; Reddy, Siva (August 2021). "StereoSet: Measuring stereotypical bias in pretrained language models". In Zong, Chengqing; Xia, Fei; Li, Wenjie; Navigli, Roberto (eds.). Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics. pp. 5356–5371. arXiv:2004.09456. doi:10.18653/v1/2021.acl-long.416.
  150. ^ Simpson, Shmona; Nukpezah, Jonathan; Kie Brooks; Pandya, Raaghav (17 December 2024). "Parity benchmark for measuring bias in LLMs". AI and Ethics. 5 (3). Springer: 3087–3101. doi:10.1007/s43681-024-00613-4.
  151. ^ Caramancion, Kevin Matthe (2025-08-04). "News Verifiers Showdown: A Comparative Performance Evaluation of ChatGPT 3.5, ChatGPT 4.0, Bing AI, and Bard in News Fact-Checking". 2023 IEEE Future Networks World Forum (FNWF). IEEE. pp. 1–6. arXiv:2306.17176. doi:10.1109/FNWF58287.2023.10520446. ISBN 979-8-3503-2458-7.
  152. ^ "Sanitized open-source datasets for natural language and code understanding: how we evaluated our 70B model". imbue.com. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  153. ^ Srivastava, Aarohi; et al. (2022). "Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models". arXiv:2206.04615 [cs.CL].
  154. ^ Lin, Stephanie; Hilton, Jacob; Evans, Owain (2021). "TruthfulQA: Measuring How Models Mimic Human Falsehoods". arXiv:2109.07958 [cs.CL].
  155. ^ a b Zellers, Rowan; Holtzman, Ari; Bisk, Yonatan; Farhadi, Ali; Choi, Yejin (2019). "HellaSwag: Can a Machine Really Finish Your Sentence?". arXiv:1905.07830 [cs.CL].
  156. ^ "Prepare for truly useful large language models". Nature Biomedical Engineering. 7 (2): 85–86. 7 March 2023. doi:10.1038/s41551-023-01012-6. PMID 36882584. S2CID 257403466.
  157. ^ "Your job is (probably) safe from artificial intelligence". The Economist. 7 May 2023. Archived from the original on 17 June 2023. Retrieved 18 June 2023.
  158. ^ "Generative AI Could Raise Global GDP by 7%". Goldman Sachs. Archived from the original on 18 June 2023. Retrieved 18 June 2023.
  159. ^ Brinkmann, Levin; Baumann, Fabian; Bonnefon, Jean-François; Derex, Maxime; Müller, Thomas F.; Nussberger, Anne-Marie; Czaplicka, Agnieszka; Acerbi, Alberto; Griffiths, Thomas L.; Henrich, Joseph; Leibo, Joel Z.; McElreath, Richard; Oudeyer, Pierre-Yves; Stray, Jonathan; Rahwan, Iyad (2025-08-04). "Machine culture". Nature Human Behaviour. 7 (11): 1855–1868. arXiv:2311.11388. doi:10.1038/s41562-023-01742-2. ISSN 2397-3374. PMID 37985914.
  160. ^ Peng, Zhencan; Wang, Zhizhi; Deng, Dong (13 June 2023). "Near-Duplicate Sequence Search at Scale for Large Language Model Memorization Evaluation" (PDF). Proceedings of the ACM on Management of Data. 1 (2): 1–18. doi:10.1145/3589324. S2CID 259213212. Archived (PDF) from the original on 2025-08-04. Retrieved 2025-08-04. Citing Lee et al 2022.
  161. ^ Peng, Wang & Deng 2023, p. 8.
  162. ^ Stephen Council (1 Dec 2023). "How Googlers cracked an SF rival's tech model with a single word". SFGATE. Archived from the original on 16 December 2023.
  163. ^ Alba, Davey (1 May 2023). "AI chatbots have been used to create dozens of news content farms". The Japan Times. Retrieved 18 June 2023.
  164. ^ "Could chatbots help devise the next pandemic virus?". Science. 14 June 2023. doi:10.1126/science.adj2463. Archived from the original on 18 June 2023. Retrieved 18 June 2023.
  165. ^ Edwards, Benj (2025-08-04). "AI poisoning could turn models into destructive "sleeper agents," says Anthropic". Ars Technica. Retrieved 2025-08-04.
  166. ^ Kang, Daniel (2023). "Exploiting programmatic behavior of LLMs: Dual-use through standard security attacks". arXiv:2302.05733 [cs.CR].
  167. ^ a b "Russian propaganda may be flooding AI models". The American Sunlight Project. 26 February 2025. Retrieved 2025-08-04.
  168. ^ Goudarzi, Sara (2025-08-04). "Russian networks flood the Internet with propaganda, aiming to corrupt AI chatbots". Bulletin of the Atomic Scientists. Retrieved 2025-08-04.
  169. ^ Wang, Yongge (20 June 2024). "Encryption Based Covert Channel for Large Language Models" (PDF). IACR ePrint 2024/586. Archived (PDF) from the original on 24 June 2024. Retrieved 24 June 2024.
  170. ^ Blefari, Francesco; Cosentino, Cristian; Pironti, Francesco Aurelio; Furfaro, Angelo; Marozzo, Fabrizio (2025-08-04), CyberRAG: An agentic RAG cyber attack classification and reporting tool, arXiv:2507.02424
  171. ^ "openai-python/chatml.md at v0.27.6 · openai/openai-python". GitHub.
  172. ^ Douglas, Will (March 3, 2023). "The inside story of how ChatGPT was built from the people who made it". MIT Technology Review. Archived from the original on March 3, 2023. Retrieved March 6, 2023.
  173. ^ Greshake, Kai; Abdelnabi, Sahar; Mishra, Shailesh; Endres, Christoph; Holz, Thorsten; Fritz, Mario (2025-08-04). "Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection". arXiv:2302.12173 [cs.CR].
  174. ^ a b Xu, Weijie; Wang, Yiwen; Xue, Chi; Hu, Xiangkun; Fang, Xi; Dong, Guimin; Reddy, Chandan K. (2025-08-04). "Quantifying Fairness in LLMs Beyond Tokens: A Semantic and Statistical Perspective". arXiv:2506.19028v1 [cs.CL].
  175. ^ Luo, Queenie; Puett, Michael J.; Smith, Michael D. (2025-08-04). "A Perspectival Mirror of the Elephant: Investigating Language Bias on Google, ChatGPT, Wikipedia, and YouTube". arXiv:2303.16281v2 [cs.CY].
  176. ^ Wang, Angelina; Morgenstern, Jamie; Dickerson, John P. (17 February 2025). "Large language models that replace human participants can harmfully misportray and flatten identity groups". Nature Machine Intelligence. 7 (3): 400–411. arXiv:2402.01908. doi:10.1038/s42256-025-00986-z.
  177. ^ Cheng, Myra; Durmus, Esin; Jurafsky, Dan (2025-08-04), Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models, arXiv:2305.18189
  178. ^ Kotek, Hadas; Dockum, Rikker; Sun, David (2025-08-04). "Gender bias and stereotypes in Large Language Models". Proceedings of the ACM Collective Intelligence Conference. CI '23. New York, NY, USA: Association for Computing Machinery. pp. 12–24. arXiv:2308.14921. doi:10.1145/3582269.3615599. ISBN 979-8-4007-0113-9.
  179. ^ Choi, Hyeong Kyu; Xu, Weijie; Xue, Chi; Eckman, Stephanie; Reddy, Chandan K. (2025-08-04), Mitigating Selection Bias with Node Pruning and Auxiliary Options, arXiv:2409.18857
  180. ^ Zheng, Chujie; Zhou, Hao; Meng, Fandong; Zhou, Jie; Huang, Minlie (2025-08-04), Large Language Models Are Not Robust Multiple Choice Selectors, arXiv:2309.03882
  181. ^ Heikkilä, Melissa (August 7, 2023). "AI language models are rife with different political biases". MIT Technology Review. Retrieved 2025-08-04.
  182. ^ Mehta, Sourabh (2025-08-04). "How Much Energy Do LLMs Consume? Unveiling the Power Behind AI". Association of Data Scientists. Retrieved 2025-08-04.
  183. ^ "Artificial Intelligence wants to go nuclear. Will it work?". NPR. Retrieved 2025-08-04.
  184. ^ Roy, Dareen (December 19, 2024). "AI's energy hunger fuels geothermal startups but natgas rivalry clouds future". Reuters.
  185. ^ Kosmyna, Nataliya; Hauptmann, Eugene; Yuan, Ye Tong; Situ, Jessica; Liao, Xian-Hao; Beresnitzky, Ashly Vivian; Braunstein, Iris; Maes, Pattie (June 10, 2025), Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, arXiv, doi:10.48550/arXiv.2506.08872, arXiv:2506.08872, retrieved August 3, 2025

Further reading
