2023-07-24
Progress in generative AI has continued rapidly, fueled by the availability of larger, more diverse data sets, better algorithms, and more powerful computer hardware. Generative AI is used for many applications, including image and video synthesis, speech synthesis, and language generation, and it remains an active research area, with new models and applications being developed constantly. In 2017, the transformer model,6 a groundbreaking method in the field of natural language processing, was proposed. Large language models (LLMs) such as GPT-3, RoBERTa, Gopher, and BERT subsequently gained widespread popularity and adoption. LLMs are neural network models, so called because of their scale: the largest of these models consist of hundreds of billions of parameters.
Because of this scale, such a model can learn complex relationships between words and phrases in the input text. For example, BERT has about 340 million parameters, OpenAI’s GPT-2 (introduced in 2019) has 1.5 billion, and GPT-3 (introduced in 2020) has 175 billion. Scale strongly influences model quality: a model with many parameters can accomplish tasks that smaller models could not.8 These large models have achieved state-of-the-art performance on a wide range of natural language processing tasks (Figure 2), including sentiment analysis, question answering, text summarization, text classification, text generation, and more. Since the initial development of LLMs, technology companies have developed super-LLMs.
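To give a concrete sense of what these parameter counts imply, the plain-Python sketch below estimates the raw weight storage each model would need, assuming 16-bit (2-byte) floating-point parameters. This is a back-of-the-envelope illustration based on the counts cited above, not an official figure for any model:

```python
# Rough weight-storage estimate for the models cited above,
# assuming 16-bit (2-byte) floating-point parameters.
# Illustrative only; real deployments vary in precision and overhead.
BYTES_PER_PARAM = 2  # fp16

models = {
    "BERT-large": 340e6,   # ~340 million parameters
    "GPT-2": 1.5e9,        # 1.5 billion parameters
    "GPT-3": 175e9,        # 175 billion parameters
}

for name, params in models.items():
    gib = params * BYTES_PER_PARAM / 2**30  # bytes -> GiB
    print(f"{name}: ~{gib:.1f} GiB of weights")
```

By this estimate, GPT-3's weights alone occupy on the order of 300 GiB, which is why such models require clusters of accelerators rather than a single machine.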