Pretrained Transformers for Text Ranking: BERT and Beyond (in English)

Jimmy Lin (Author) · Rodrigo Nogueira (Author) · Andrew Yates (Author) · Springer · Paperback

Type: Physical Book
Publisher: Springer
Language: English
Pages: 307
Format: Paperback
Dimensions: 23.5 x 19.1 x 1.8 cm
Weight: 0.57 kg
ISBN-13: 9783031010538

Physical Book: $85.25 (list price $89.99; you save $4.74, a 5% discount)
Condition: New
Ships from our warehouse between Friday, May 31 and Monday, June 03.
Delivery anywhere in the United States within 1 to 3 business days after shipment.

Synopsis "Pretrained Transformers for Text Ranking: Bert and Beyond (in English)"

The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications.

This book provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in NLP, information retrieval (IR), and beyond. The book synthesizes existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems and for researchers who wish to pursue work in this area. It covers a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures, and dense retrieval techniques that perform ranking directly.

Two themes pervade the book: techniques for handling long documents, beyond the typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, many open research questions remain, and so in addition to laying out the foundations of pretrained transformers for text ranking, this book also attempts to prognosticate where the field is heading.
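
The first of the book's two technique families, reranking, is easy to see in miniature. Below is a minimal sketch of cross-encoder reranking using the Hugging Face transformers library; the checkpoint name is an assumption (any cross-encoder reranking model would do), and in a real multi-stage system the candidate list would come from a first-stage retriever such as BM25 rather than being hard-coded.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed public cross-encoder checkpoint; swap in any reranking model.
MODEL_NAME = "cross-encoder/ms-marco-MiniLM-L-6-v2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def rerank(query, candidates):
    # A cross-encoder scores each (query, candidate) pair jointly.
    inputs = tokenizer([query] * len(candidates), candidates,
                       padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(-1)
    # Higher score means more relevant; return candidates best-first.
    return sorted(zip(candidates, scores.tolist()), key=lambda pair: -pair[1])

# In practice these candidates would come from a first-stage retriever (e.g., BM25).
candidates = [
    "Text ranking produces an ordered list of texts in response to a query.",
    "The weather in Waterloo is cold in January.",
]
for text, score in rerank("what is text ranking?", candidates):
    print(f"{score:+.2f}  {text}")

Dense retrieval, by contrast, encodes queries and documents into vectors independently and ranks by vector similarity, trading the joint scoring above for much lower query latency.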

Customer reviews

No reviews yet.

Frequently Asked Questions about the Book

All books in our catalog are original.
The book is written in English.
The binding of this edition is Paperback.
