
[English] World Bank Report: Emerging Technologies Curation Series 5: Generative Artificial Intelligence (38 pages)

English Research Report | July 25, 2023, 09:44 | Administrator

Progress in generative AI has continued rapidly, fueled by the availability of more extensive and more diverse data sets, better algorithms, and more powerful computer hardware. Generative AI is used for many applications, including image and video synthesis, speech synthesis, and language generation. It remains an active research area, with new models and applications being developed constantly. In 2017, the transformer model, a groundbreaking method in the field of natural language processing, was proposed. Large language models (LLMs) such as GPT-3, RoBERTa, Gopher, and BERT then gained widespread popularity and adoption. LLMs are a type of neural network model, so called because of their size: the largest of these models consist of hundreds of billions of parameters.
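As a rough, hands-on illustration of the language-generation application mentioned above, the sketch below runs a small pretrained transformer through the Hugging Face `transformers` library. The library, the PyTorch backend, and the `gpt2` checkpoint (the publicly released 124-million-parameter GPT-2) are assumptions for the example, not tools named in the report; the checkpoint stands in for the much larger models the report discusses.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library
# and a PyTorch backend are installed: pip install transformers torch
from transformers import pipeline

# "gpt2" is an assumed, publicly available 124M-parameter checkpoint,
# used here only as an accessible stand-in for larger LLMs.
generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation of a prompt.
result = generator("Generative AI is used for", max_new_tokens=30)
print(result[0]["generated_text"])
```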

Because of its size, such a model can learn complex relationships between words and phrases in the input text. For example, BERT had about 340 million parameters, OpenAI's GPT-2 (introduced in 2019) has 1.5 billion parameters, and GPT-3 (introduced in 2020) has 175 billion. The size of these models largely determines their quality: a model with many parameters can do things that could not be done before. These large models have achieved state-of-the-art performance on a wide range of natural language processing tasks (Figure 2), encompassing sentiment analysis, question answering, text summarization, text classification, text generation, and more. Since the initial development of LLMs, technology companies have developed even larger super-LLMs.
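To make the parameter counts above concrete, the sketch below loads a pretrained BERT checkpoint and counts its parameters. The choice of `bert-large-uncased` is an assumption made to match the roughly 340-million-parameter BERT figure cited in the text; the report itself does not name a specific checkpoint.

```python
# A minimal sketch, assuming `transformers` and `torch` are installed.
from transformers import AutoModel

# "bert-large-uncased" is an assumed checkpoint whose size (~335M
# parameters) approximates the ~340M figure cited for BERT above.
model = AutoModel.from_pretrained("bert-large-uncased")

# Sum the element count of every weight tensor in the network.
num_params = sum(p.numel() for p in model.parameters())
print(f"bert-large-uncased parameters: {num_params / 1e6:.0f}M")  # ~335M
```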


