Type: Physical Book
Publisher: Springer
Language: English
Pages: 436
Format: Hardcover
Dimensions: 23.4 x 15.6 x 2.5 cm
Weight: 0.81 kg
ISBN-13: 9783031231896

Foundation Models for Natural Language Processing: Pre-Trained Language Models Integrating Media (in English)

Gerhard Paaß (Author) · Sven Giesselbach (Author) · Springer · Hardcover


Physical Book

Price: $56.83 (list price: $59.99)
You save: $3.16 (5% discount)

Condition: New
It will be shipped from our warehouse between Monday, May 20 and Tuesday, May 21.
You will receive it anywhere in the United States within 1 to 3 business days after shipment.

Synopsis "Foundation Models for Natural Language Processing: Pre-Trained Language Models Integrating Media (in English)"

This open access book provides a comprehensive overview of the state of the art in research on and applications of Foundation Models, and is intended for readers familiar with basic Natural Language Processing (NLP) concepts. In recent years, a revolutionary new paradigm has been developed for training NLP models. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. They are then fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models.

After a brief introduction to basic NLP models, the main pre-trained language models BERT, GPT, and the sequence-to-sequence Transformer are described, as well as the concepts of self-attention and context-sensitive embedding. Different approaches to improving these models are then discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas follows, e.g., question answering, translation, story generation, dialog systems, and generating images from text. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI.
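The pre-train, fine-tune, and prompt workflow summarized above can be made concrete in a few lines of code. The sketch below is not taken from the book's linked program code; it is a minimal illustration, assuming the Hugging Face Transformers library, publicly available checkpoints (bert-base-uncased, gpt2), and a toy two-sentence dataset invented for the example.

```python
# Minimal sketch of the paradigm described in the synopsis (illustrative only).
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
import torch

# 1) Use a pre-trained BERT directly: its masked-language-model head reflects the
#    general linguistic knowledge acquired during pre-training.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("Foundation models are pre-trained on large [MASK] of text."))

# 2) Fine-tune the same pre-trained encoder for a specific task
#    (here: binary sentiment classification on two toy examples).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["A clear and useful overview.", "Too shallow to be helpful."]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few gradient steps, just to illustrate the fine-tuning loop
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# 3) With sufficiently large generative models, prompting alone can often replace
#    fine-tuning; a GPT-style model stands in for that idea here.
generator = pipeline("text-generation", model="gpt2")
print(generator("Classify the sentiment of 'A clear and useful overview.':",
                max_new_tokens=10))
```

The three steps mirror the book's framing: pre-training supplies general syntactic and semantic knowledge, a small labeled set adapts the model to a specific task, and for large enough models a prompt can elicit the new behavior without any parameter updates.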

Customer Reviews

There are no customer reviews yet.

Frequently Asked Questions about the Book

All books in our catalog are original.
The book is written in English.
The binding of this edition is Hardcover.

