Open-domain question answering (QA) has recently made significant progress, with large Transformer-based generative models demonstrating impressive performance. However, these models are computationally expensive to train and query, which limits their practical application. In this whitepaper, we introduce an approach to open-domain QA that combines the strengths of retrieval and generation, aiming for more efficient and accurate question answering. Our approach, termed Fusion-in-Decoder, retrieves informative passages and feeds them to a sequence-to-sequence model that generates the answer. This method achieves state-of-the-art results on benchmarks such as Natural Questions and TriviaQA, and offers a highly scalable framework for aggregating and combining evidence from multiple passages.
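To make the "aggregating and combining" step concrete, the data flow can be sketched as follows. This is a minimal illustration with a toy stub in place of a real pretrained encoder; the function names (`encode`, `fusion_in_decoder`) and the `question: … context: …` input format are assumptions for the sketch, not the paper's exact implementation. The key idea it shows is that each retrieved passage is paired with the question and encoded independently (keeping encoding cost linear in the number of passages), after which all per-passage token representations are concatenated so the decoder's cross-attention can draw on evidence from every passage at once.

```python
# Sketch of the Fusion-in-Decoder data flow (hypothetical stubs, not the
# actual model). A real system would use a pretrained seq2seq encoder.

def encode(text, dim=4):
    """Stub encoder: returns one toy vector per whitespace token."""
    return [[float(len(tok) % (d + 2)) for d in range(dim)] for tok in text.split()]

def fusion_in_decoder(question, passages):
    # 1. Encode each (question, passage) pair INDEPENDENTLY.
    #    Encoding cost grows linearly with the number of passages.
    encoded = [encode(f"question: {question} context: {p}") for p in passages]
    # 2. Fusion: concatenate all per-passage token representations.
    #    The decoder then attends jointly over this fused sequence,
    #    combining evidence across passages when generating the answer.
    fused = [vec for passage_states in encoded for vec in passage_states]
    return fused

passages = ["Paris is the capital of France.", "France is in Europe."]
fused = fusion_in_decoder("What is the capital of France?", passages)
```

Because passages never attend to each other during encoding, this layout scales to many more retrieved passages than concatenating everything into a single encoder input would allow.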