PaperQA2


PaperQA2 is a package for doing high-accuracy retrieval augmented generation (RAG) on PDFs, text files, Microsoft Office documents, and source code files, with a focus on the scientific literature. See our 2024 paper for examples of PaperQA2's superhuman performance in scientific tasks like question answering, summarization, and contradiction detection.


Table of Contents


Quickstart

In this example we take a folder of research paper PDFs, magically get their metadata (including citation counts, with a retraction check), then parse and cache the PDFs into a full-text search index, and finally answer the user's question with an LLM agent.
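
To try it yourself, install the package and run pqa from a folder containing PDFs (the question below is just an illustration):

```bash
pip install paper-qa
cd my_papers  # a folder of research paper PDFs
pqa ask 'Has anyone designed neural networks that compute with proteins or DNA?'
```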

Example Output

Question: Has anyone designed neural networks that compute with proteins or DNA?

The claim that neural networks have been designed to compute with DNA is supported by multiple sources. The work by Qian, Winfree, and Bruck demonstrates the use of DNA strand displacement cascades to construct neural network components, such as artificial neurons and associative memories, using a DNA-based system (Qian2011Neural pages 1-2, Qian2011Neural pages 15-16, Qian2011Neural pages 54-56). This research includes the implementation of a 3-bit XOR gate and a four-neuron Hopfield associative memory, showcasing the potential of DNA for neural network computation. Additionally, the application of deep learning techniques to genomics, which involves computing with DNA sequences, is well-documented. Studies have applied convolutional neural networks (CNNs) to predict genomic features such as transcription factor binding and DNA accessibility (Eraslan2019Deep pages 4-5, Eraslan2019Deep pages 5-6). These models leverage DNA sequences as input data, effectively using neural networks to compute with DNA. While the provided excerpts do not explicitly mention protein-based neural network computation, they do highlight the use of neural networks in tasks related to protein sequences, such as predicting DNA-protein binding (Zeng2016Convolutional pages 1-2). However, the primary focus remains on DNA-based computation.

What is PaperQA2

PaperQA2 is engineered to be the best agentic RAG model for working with scientific papers. Here are some features:

  • A simple interface to get good answers with grounded responses containing in-text citations.

  • State-of-the-art implementation including document metadata-awareness in embeddings and LLM-based re-ranking and contextual summarization (RCS).

  • Support for agentic RAG, where a language agent can iteratively refine queries and answers.

  • Automatic redundant fetching of paper metadata, including citation and journal quality data from multiple providers.

  • A usable full-text search engine for a local repository of PDF/text files.

  • A robust interface for customization, with default support for all LiteLLM models.

By default, it uses OpenAI embeddings and models with a Numpy vector DB to embed and search documents. However, you can easily use other closed-source or open-source models and embeddings (see details below).

PaperQA2 depends on some awesome libraries/APIs that make our repo possible. Here are some in no particular order:

PaperQA2 vs PaperQA

We've been working hard on fundamental upgrades for a while, and until December 2025 we mostly followed SemVer, meaning we incremented the major version number on each breaking change. This brings us to the current major version number, v5. So why is the repo now called PaperQA2? We wanted to mark the fact that we've exceeded human performance on many important metrics. So we arbitrarily call version 5 and onward PaperQA2, and versions before it PaperQA1, to denote the significant change in performance. We recognize that we are challenged at naming and counting at FutureHouse, so we reserve the right at any time to arbitrarily change the name to PaperCrow.

PaperQA2 Goes CalVer in December 2025

Prior to December 2025 we used semantic versioning. This eventually led to confusion in two ways:

  1. Developers: should we bump the major version based on settings changes or on fundamental system capabilities? What if a bug fix requires breaking changes to the agent's behaviors?

  2. Speaking: should one use terminology from our publications (e.g. PaperQA1, PaperQA2) or the Git tags (e.g. v5) from this repo/package? When someone says "PaperQA" -- what version do they mean?

To resolve these confusions, in December 2025, we moved to calendar versioning. The developer burden is diminished because we're basically removing guarantees of backwards compatibility across releases (as CalVer is ZeroVer bound to dates). It solves the "speaking" issue because Git tags are now quite different from publication terminology (e.g. PaperQA2 vs v2025.12.17). When someone says "PaperQA" it will just refer to the system, not a particular snapshot of agentic behaviors. When someone says "PaperQA2" it will refer to paper-qa>=5, which applies to both SemVer tags v5.0.0 and the new CalVer tags v2025.12.17.

This switch is backwards compatible for version 5's SemVer, as the year 2025 is strictly greater than major version 5.

What's New in Version 5 (aka PaperQA2)?

Version 5 added:

  • A CLI pqa

  • Agentic workflows invoking tools for paper search, gathering evidence, and generating an answer

  • Removed much of the statefulness from the Docs object

  • A migration to LiteLLM for compatibility with many LLM providers as well as centralized rate limits and cost tracking

  • A bundled set of configurations (see the Bundled Settings section below) containing known-good hyperparameters

Note that Docs objects pickled from prior versions of PaperQA are incompatible with version 5, and will need to be rebuilt. Also, our minimum Python version was increased to Python 3.11.

What's New in December 2025?

The last four months since version 5.29.1 have seen many changes:

  • New modalities: tables, figures, non-English languages, math equations

  • More and better readers

    • Two new model-based PDF readers: Docling and Nvidia nemotron-parse

    • All PDF readers can now parse images and tables, report page numbers, and support configurable DPI

    • A reader for Microsoft Office data types

  • Multimodal contextual summarization

    • Media objects are also passed to the summary_llm during creation

    • Media objects' embedding space is enhanced using an enrichment_llm prompt

  • A simpler and more performant HTTP stack

    • Consolidation from aiohttp and httpx to just httpx

    • Integration with httpx-aiohttp for performance

  • Context relevance is simplified and some assumptions were removed

  • Many minor features such as retrying Context creation upon invalid JSON, compatibility with fall 2025's frontier LLMs, and improved prompt templates

  • Multiple fixes in metadata retrieval via Semantic Scholar and OpenAlex, and in metadata processing (e.g. incorrectly inferring identical document IDs for main text and SI)

  • Completed the deprecations accrued over the past year

PaperQA2 Algorithm

To understand PaperQA2, let's start with the pieces of the underlying algorithm. The default workflow of PaperQA2 is as follows:

| Phase | PaperQA2 Actions |
| --- | --- |
| 1. Paper Search | - Get candidate papers from LLM-generated keyword query<br>- Chunk, embed, and add candidate papers to state |
| 2. Gather Evidence | - Embed query into vector<br>- Rank top k document chunks in current state<br>- Create scored summary of each chunk in the context of the current query<br>- Use LLM to re-score and select most relevant summaries |
| 3. Generate Answer | - Put best summaries into prompt with context<br>- Generate answer with prompt |

The tools can be invoked in any order by a language agent. For example, an LLM agent might do both a narrow and a broad search, or use different phrasing in the gather evidence step than in the generate answer step.

Installation

For a non-development setup, install PaperQA2 (aka version 5) from PyPI. Note version 5 requires Python 3.11+.
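
For example:

```bash
pip install paper-qa
```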

For development setup, please refer to the CONTRIBUTING.md file.

PaperQA2 uses an LLM to operate, so you'll need to either set an appropriate API key environment variable (e.g. export OPENAI_API_KEY=sk-...) or set up an open source LLM server (e.g. using llamafile). Any LiteLLM-compatible model can be configured for use with PaperQA2.

If you need to index a large set of papers (100+), you will likely want an API key for both Crossref and Semantic Scholar, which will allow you to avoid hitting public rate limits using these metadata services. Those can be exported as CROSSREF_API_KEY and SEMANTIC_SCHOLAR_API_KEY variables.

CLI Usage

The fastest way to test PaperQA2 is via the CLI. First navigate to a directory with some papers and use the pqa cli:
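
```bash
# The question text here is just a placeholder; ask anything about your papers
pqa ask 'Has anyone designed neural networks that compute with proteins or DNA?'
```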

You will see PaperQA2 index your local PDF files, gathering the necessary metadata for each of them (using Crossref and Semantic Scholar), search over that index, then break the files into chunked evidence contexts, rank them, and ultimately generate an answer. The next time this directory is queried, your index will already be built (save for any differences detected, like new added papers), so it will skip the indexing and chunking steps.

All prior answers will be indexed and stored; you can view them by querying via the search subcommand, or access them yourself in your PQA_HOME directory, which defaults to ~/.pqa/.

PaperQA2 is highly configurable; when running from the command line, pqa --help shows all options with short descriptions. For example, to run with a higher temperature:
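
```bash
# The CLI flag mirrors the temperature setting; the question is a placeholder
pqa --temperature 0.5 ask 'Has anyone designed neural networks that compute with proteins or DNA?'
```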

You can view all settings with pqa view. Another useful feature is switching to other bundled settings: for example, fast is a setting that answers more quickly, and you can view it with pqa -s fast view.

Maybe you have some new settings you want to save? You can do that with
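
A sketch of how this looks, assuming the save subcommand and that CLI flags mirror Settings fields:

```bash
pqa -s my_new_settings --temperature 0.5 save
```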

and then you can use it with
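
```bash
# Reuse the saved settings by name (question is a placeholder)
pqa -s my_new_settings ask 'Has anyone designed neural networks that compute with proteins or DNA?'
```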

If you run pqa with a command that requires re-indexing, say if you change the default chunk_size, a new index will automatically be created for you.

You can also use pqa to do full-text search, without the use of LLMs, via the search subcommand. For example, let's save the index from a directory and give it a name:
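
```bash
# Assumption: -i names the index and the index subcommand builds it
pqa -i nanomaterials index
```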

Now I can search for papers about thermoelectrics:
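
```bash
pqa -i nanomaterials search thermoelectric
```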

or I can use the normal ask
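
```bash
# The question is a placeholder
pqa -i nanomaterials ask 'Are there nanomaterials with high thermoelectric figures of merit?'
```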

Both the CLI and module have pre-configured settings based on prior performance and our publications; they can be invoked as follows:
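
```bash
pqa -s high_quality ask 'Has anyone designed neural networks that compute with proteins or DNA?'
```

```python
from paperqa import Settings, ask

# Assumption: Settings.from_name loads a bundled configuration by name
answer_response = ask(
    "Has anyone designed neural networks that compute with proteins or DNA?",
    settings=Settings.from_name("high_quality"),
)
```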

Bundled Settings

Inside src/paperqa/configs we bundle known useful settings:

| Setting Name | Description |
| --- | --- |
| high_quality | Highly performant, relatively expensive (due to having evidence_k = 15) query using a ToolSelector agent. |
| fast | Setting to get answers cheaply and quickly. |
| wikicrow | Setting to emulate the Wikipedia article writing used in our WikiCrow publication. |
| contracrow | Setting to find contradictions in papers; your query should be a claim that needs to be flagged as a contradiction (or not). |
| debug | Setting useful solely for debugging, but not in any actual application beyond debugging. |
| tier1_limits | Settings that match OpenAI rate limits for each tier; you can use tier<1-5>_limits to specify the tier. |

Rate Limits

If you are hitting rate limits, say with the OpenAI Tier 1 plan, you can add them into PaperQA2. For each OpenAI tier, a pre-built setting exists to limit usage.
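
For example, with the Tier 1 preset (question is a placeholder):

```bash
pqa -s tier1_limits ask 'Has anyone designed neural networks that compute with proteins or DNA?'
```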

This will limit your system to use the tier1_limits, and slow down your queries to accommodate.

You can also specify them manually with any rate limit string that matches the specification in the limits module:
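
A rough sketch, assuming JSON LLM configs can be passed on the command line via flags mirroring the llm_config setting:

```bash
pqa --llm_config '{"rate_limit": {"gpt-4o-2024-11-20": "30000 per 1 minute"}}' ask 'Has anyone designed neural networks that compute with proteins or DNA?'
```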

Or by adding into a Settings object, if calling imperatively:
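
A minimal sketch, assuming llm_config and summary_llm_config accept a rate_limit entry keyed by model name:

```python
from paperqa import Settings, ask

settings = Settings(
    llm_config={"rate_limit": {"gpt-4o-2024-11-20": "30000 per 1 minute"}},
    summary_llm_config={"rate_limit": {"gpt-4o-2024-11-20": "30000 per 1 minute"}},
)
answer_response = ask(
    "Has anyone designed neural networks that compute with proteins or DNA?",
    settings=settings,
)
```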

Library Usage

PaperQA2's full workflow can be accessed via Python directly:
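
```python
from paperqa import Settings, ask

# "my_papers" is a hypothetical folder of PDFs; the question is a placeholder
settings = Settings(temperature=0.5)
settings.agent.index.paper_directory = "my_papers"
answer_response = ask(
    "Has anyone designed neural networks that compute with proteins or DNA?",
    settings=settings,
)
```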

Please see our installation docs for how to install the package from PyPI.

Agentic Adding/Querying Documents

The answer object has the following attributes: formatted_answer, answer (the answer alone), question, and context (the summaries of passages found for the answer). ask will use the SearchPapers tool, which queries a local index of files; you can specify this location via the Settings object:
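
```python
from paperqa import Settings

settings = Settings()
# Point the SearchPapers tool's local index at a specific folder of files
settings.agent.index.paper_directory = "my_papers"  # hypothetical folder
```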

ask is just a convenience wrapper around the real entrypoint, which can be accessed if you'd like to run concurrent asynchronous workloads:
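
A minimal sketch, assuming agent_query is the async entrypoint exported by paperqa:

```python
import asyncio

from paperqa import Settings, agent_query


async def main() -> None:
    settings = Settings(temperature=0.5)
    settings.agent.index.paper_directory = "my_papers"  # hypothetical folder of PDFs
    answer_response = await agent_query(
        "Has anyone designed neural networks that compute with proteins or DNA?",
        settings=settings,
    )
    print(answer_response)


asyncio.run(main())
```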

The default agent type is an LLM-based agent, but you can also specify a "fake" agent to use a hard-coded call path of search -> gather evidence -> answer to reduce token usage.
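
For example:

```python
from paperqa import Settings

settings = Settings()
settings.agent.agent_type = "fake"  # hard-coded search -> gather evidence -> answer
```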

Manual (No Agent) Adding/Querying Documents

Normally via agent execution, the agent invokes the search tool, which adds documents to the Docs object for you behind the scenes. However, if you prefer fine-grained control, you can directly interact with the Docs object.
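
A minimal sketch (file names are placeholders):

```python
from paperqa import Docs, Settings

settings = Settings()
docs = Docs()
for doc_path in ("myfile.pdf", "myotherfile.pdf"):
    docs.add(doc_path, settings=settings)

session = docs.query(
    "Has anyone designed neural networks that compute with proteins or DNA?",
    settings=settings,
)
print(session.formatted_answer)
```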

Note that manually adding and querying Docs does not impact performance. It just removes the automation associated with an agent picking the documents to add.

Async

PaperQA2 is written to be used asynchronously. The synchronous API is just a wrapper around the async. Here are the methods and their async equivalents:

| Sync | Async |
| --- | --- |
| Docs.add | Docs.aadd |
| Docs.add_file | Docs.aadd_file |
| Docs.add_url | Docs.aadd_url |
| Docs.get_evidence | Docs.aget_evidence |
| Docs.query | Docs.aquery |

The synchronous version just calls the async version in a loop. Most modern Python environments support async natively (including Jupyter notebooks!), so you can do this in a Jupyter Notebook:
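
```python
# Top-level await works in notebooks; the file name and question are placeholders
from paperqa import Docs

docs = Docs()
await docs.aadd("myfile.pdf")
session = await docs.aquery("Has anyone designed neural networks that compute with proteins or DNA?")
print(session.formatted_answer)
```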

Choosing Model

By default, PaperQA2 uses OpenAI's gpt-4o-2024-11-20 model for the summary_llm, llm, and agent_llm. Please see the Settings Cheatsheet for more information on these settings. PaperQA2 also defaults to using OpenAI's text-embedding-3-small model for the embedding setting. If you don't have an OpenAI API key, you can use a different embedding model. More information about embedding models can be found in the "Embedding Model" section.

We use the lmi package for our LLM interface, which in turn uses litellm to support many LLM providers. You can adjust this easily to use any model supported by litellm:
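
A sketch using Anthropic's Claude plus a local sentence-transformers embedding (model names are examples, not requirements):

```python
from paperqa import Settings, ask

settings = Settings(
    llm="claude-3-5-sonnet-20241022",
    summary_llm="claude-3-5-sonnet-20241022",
    embedding="st-multi-qa-MiniLM-L6-cos-v1",
)
answer_response = ask(
    "Has anyone designed neural networks that compute with proteins or DNA?",
    settings=settings,
)
```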

To use Claude, make sure you set the ANTHROPIC_API_KEY environment variable. In this example, we also use a different embedding model. Please make sure to pip install paper-qa[local] to use a local embedding model.

Or Gemini, by setting the GEMINI_API_KEY from Google AI Studio

Locally Hosted

You can use llama.cpp as the LLM. Note that you should be using relatively large models, because PaperQA2 requires following a lot of instructions. You won't get good performance with 7B models.

The easiest way to get set up is to download a llamafile and execute it with -cb -np 4 -a my-llm-model --embedding, which will enable continuous batching and embeddings.

Models hosted with ollama are also supported. To run the example below make sure you have downloaded llama3.2 and mxbai-embed-large via ollama.
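
A sketch, assuming llm_config follows LiteLLM's Router model_list format and a local Ollama server on the default port:

```python
from paperqa import Settings, ask

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3.2",
            "litellm_params": {
                "model": "ollama/llama3.2",
                "api_base": "http://localhost:11434",
            },
        }
    ]
}
settings = Settings(
    llm="ollama/llama3.2",
    llm_config=local_llm_config,
    summary_llm="ollama/llama3.2",
    summary_llm_config=local_llm_config,
    embedding="ollama/mxbai-embed-large",
)
answer_response = ask(
    "Has anyone designed neural networks that compute with proteins or DNA?",
    settings=settings,
)
```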

Embedding Model

Embeddings are used to retrieve k texts (where k is specified via Settings.answer.evidence_k) for re-ranking and contextual summarization. If you don't want to use embeddings, but instead just fetch all chunks, disable "evidence retrieval" via the Settings.answer.evidence_retrieval setting.

PaperQA2 defaults to using OpenAI (text-embedding-3-small) embeddings, but has flexible options for both vector stores and embedding choices.

Specifying the Embedding Model

The simplest way to specify the embedding model is via Settings.embedding:
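
```python
from paperqa import Settings, ask

# The model name is an example; any litellm-supported embedding model name works
answer_response = ask(
    "Has anyone designed neural networks that compute with proteins or DNA?",
    settings=Settings(embedding="text-embedding-3-large"),
)
```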

embedding accepts any embedding model name supported by litellm. PaperQA2 also supports an embedding input of "hybrid-<model_name>", e.g. "hybrid-text-embedding-3-small", to use a hybrid of a sparse keyword embedding (based on a token modulo embedding) and a dense vector embedding, where any litellm model can be used as the dense model. "sparse" can be used for a sparse keyword embedding only.

Embedding models are used to create PaperQA2's index of the full-text embedding vectors (texts_index argument). The embedding model can be specified as a setting when you are adding new papers to the Docs object:
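
```python
from paperqa import Docs, Settings

# "myfile.pdf" is a placeholder path
docs = Docs()
docs.add("myfile.pdf", settings=Settings(embedding="text-embedding-3-small"))
```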

Note that PaperQA2 uses Numpy as a dense vector store. Its design of using an initial keyword search reduces the number of chunks needed for each answer to a relatively small number (< 1k). Therefore, NumpyVectorStore is a good place to start; it's a simple in-memory store without an index. However, if a larger-than-memory vector store is needed, you can use an external vector database like Qdrant via the QdrantVectorStore class.

The hybrid embeddings can be customized:
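
A sketch, assuming these embedding model classes are importable from paperqa (they may live in the lmi package in newer releases):

```python
from paperqa import Docs, HybridEmbeddingModel, LiteLLMEmbeddingModel, SparseEmbeddingModel

# Hybrid of a dense LiteLLM embedding and a sparse keyword embedding
model = HybridEmbeddingModel(models=[LiteLLMEmbeddingModel(), SparseEmbeddingModel(ndim=256)])
docs = Docs()
docs.add("myfile.pdf", embedding_model=model)  # placeholder path
```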

The sparse embedding (keyword) models default to having 256 dimensions, but this can be specified via the ndim argument.

Local Embedding Models (Sentence Transformers)

You can use a SentenceTransformerEmbeddingModel if you install sentence-transformers, which is a local embedding library with support for HuggingFace models and more. You can install it by adding the local extras.

and then prefix embedding model names with st-:
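
```python
from paperqa import Settings

# The model name is an example sentence-transformers model
settings = Settings(embedding="st-multi-qa-MiniLM-L6-cos-v1")
```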

or with a hybrid model
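
A sketch, assuming the hybrid- prefix composes with st- model names:

```python
from paperqa import Settings

settings = Settings(embedding="hybrid-st-multi-qa-MiniLM-L6-cos-v1")
```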

Adjusting number of sources

You can adjust the number of sources (passages of text) to reduce token usage or add more context. k refers to the top k most relevant and diverse (they may come from different sources) passages. Each passage is sent to the LLM to summarize, or to determine that it is irrelevant. After this step, a limit of max_sources is applied so that the final answer can fit into the LLM context window. Thus, k > max_sources, and max_sources is the number of sources used in the final answer.
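
For example, using the documented setting names (values are illustrative):

```python
from paperqa import Settings

settings = Settings()
settings.answer.evidence_k = 20  # retrieve more passages for re-ranking and summarization
settings.answer.answer_max_sources = 5  # but only use the top 5 in the final answer
```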

Using Code or HTML

You do not need to use papers -- you can use code or raw HTML. Note that this tool is focused on answering questions, so it won't do well at writing code. One note is that the tool cannot infer citations from code, so you will need to provide them yourself.
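
A sketch, assuming add accepts citation/docname overrides (file name and question are placeholders):

```python
from paperqa import Docs

docs = Docs()
# Citations cannot be inferred from code, so provide one yourself
docs.add("main.py", citation="my-repo/main.py", docname="main.py")
session = docs.query("Where is the retry logic implemented?")
```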

Multimodal Support

Multimodal support centers on:

  • Standalone images

  • Images or tables in PDFs

The Docs object stores media via ParsedMedia objects. When chunking a document, media are not split at chunk boundaries, so it's possible for 2+ chunks to correspond to the same media. This means that within PaperQA a single ParsedMedia object has a one-to-many relationship with chunks.

Depending on the source document, the same image can appear multiple times (e.g. each page of a PDF has a logo in the margins). Thus, clients should consider media databases to have a many-to-many relationship with chunks.

Since PaperQA's evidence gathering process centers on text-based retrieval, it's possible relevant image(s) or table(s) aren't retrieved because their associated text content is irrelevant. For a concrete example, imagine the figure in a paper has a terse caption and is placed one page after relevant main-text discussion. To solve this problem, PaperQA supports media enrichment at document read-time. Basically after reading in the PDF, the parsing.enrichment_llm is given the parsing.enrichment_prompt and co-located text to generate a synthetic caption for every image/table. The synthetic captions are used to shift the embeddings of each text chunk, but are kept separate from the actual source text. This way evidence gathering can fetch relevant images/tables without risk of polluting contextual summaries with LLM-generated captions.

If you want multimodal PDF reading but do not want enrichment (since it adds one LLM prompt per media object at read time), enrichment can be disabled by setting parsing.multimodal to ON_WITHOUT_ENRICHMENT.
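
A sketch; the value name comes from above, but whether it can be assigned as a plain string (versus the corresponding enum member) is an assumption:

```python
from paperqa import Settings

settings = Settings()
# Assumption: the multimodal control accepts its documented value by name
settings.parsing.multimodal = "ON_WITHOUT_ENRICHMENT"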

When creating contextual summaries on a given chunk (a Text), the summary LLM is passed both the chunk's text and the chunk's associated media, but the output contextual summary itself remains text-only.

If you would like, setting prompts.summary_json_system to paperqa.prompts.summary_json_multimodal_system_prompt will include a used_images flag attributing the usage of images in any contextual summaries.
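
For example:

```python
from paperqa import Settings
from paperqa.prompts import summary_json_multimodal_system_prompt

settings = Settings()
settings.prompts.summary_json_system = summary_json_multimodal_system_prompt
```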

Using External DB/Vector DB and Caching

You may want to cache parsed texts and embeddings in an external database or file. You can then build a Docs object from those directly:
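
A rough sketch, assuming Doc, Text, and Docs.add_texts are available for rehydrating previously parsed chunks and embeddings (cached_chunks_and_embeddings is hypothetical):

```python
from paperqa import Doc, Docs, Text

# Hypothetical: chunks and embeddings were previously parsed/embedded and cached externally
doc = Doc(docname="myfile", citation="Example Author, Example Title, 2025", dockey="myfile")
texts = [
    Text(text=chunk, name=f"myfile chunk {i}", doc=doc, embedding=embedding)
    for i, (chunk, embedding) in enumerate(cached_chunks_and_embeddings)
]

docs = Docs()
docs.add_texts(texts=texts, doc=doc)
```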

Creating Index

Indexes will be placed in the home directory by default. This can be controlled via the PQA_HOME environment variable.

Indexes are made by reading files in the IndexSettings.paper_directory. By default, we recursively read from subdirectories of the paper directory, unless disabled using IndexSettings.recurse_subdirectories. The paper directory is not modified in any way, it's just read from.

Manifest Files

The indexing process attempts to infer paper metadata like title and DOI using LLM-powered text processing. You can avoid this point of uncertainty using a "manifest" file, which is a CSV containing DocDetails fields (order doesn't matter). For example:

  • file_location: relative path to the paper's PDF within the index directory

  • doi: DOI of the paper

  • title: title of the paper
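
A minimal manifest might look like this (hypothetical values):

```csv
file_location,doi,title
papers/my_paper.pdf,10.1234/example.doi,An Example Paper Title
papers/another_paper.pdf,10.5678/another.doi,Another Example Title
```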

By providing this information, we ensure queries to metadata providers like Crossref are accurate.

To ease creating a manifest, there is a helper class method Doc.to_csv, which also works when called on DocDetails.

Reusing Index

The local search indexes are built based on a hash of the current Settings object. So make sure you properly specify the paper_directory to your IndexSettings object. In general, it's advisable to:

  1. Pre-build an index given a folder of papers (can take several minutes)

  2. Reuse the index to perform many queries

Using Clients Directly

One of the most powerful features of PaperQA2 is its ability to combine data from multiple metadata sources. For example, Unpaywall can provide open access status/direct links to PDFs, Crossref can provide bibtex, and Semantic Scholar can provide citation licenses. Here's a short demo of how to do this:
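
```python
import asyncio

from paperqa.clients import ALL_CLIENTS, DocMetadataClient


async def main() -> None:
    client = DocMetadataClient(clients=ALL_CLIENTS)
    details = await client.query(title="Augmenting large language models with chemistry tools")
    print(details.formatted_citation)


asyncio.run(main())
```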

client.query is meant to check for exact matches of title. It's somewhat robust (e.g. to casing or a missing word). Titles can be duplicated, though, so you can also add authors to disambiguate, or you can provide a DOI directly: client.query(doi="10.1038/s42256-024-00832-8").

If you're doing this at a large scale, you may not want to use ALL_CLIENTS (just omit the argument), and you can specify which specific fields you want in order to speed up queries. For example:
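
```python
import asyncio

from paperqa.clients import DocMetadataClient


async def main() -> None:
    client = DocMetadataClient()  # default clients, rather than ALL_CLIENTS
    details = await client.query(
        title="Augmenting large language models with chemistry tools",
        authors=["Andres M. Bran", "Sam Cox"],
        fields=["title", "doi"],
    )
    print(details)


asyncio.run(main())
```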

will return much faster than the first query and we'll be certain the authors match.

Settings Cheatsheet

| Setting | Default | Description |
| --- | --- | --- |
| llm | "gpt-4o-2024-11-20" | LLM for general use including metadata inference (see Docs.aadd) and answer generation (see Docs.aquery and gen_answer tool). |
| llm_config | None | Optional configuration for llm. |
| summary_llm | "gpt-4o-2024-11-20" | LLM for creating contextual summaries (see Docs.aget_evidence and gather_evidence tool). |
| summary_llm_config | None | Optional configuration for summary_llm. |
| embedding | "text-embedding-3-small" | Embedding model for embedding text chunks when adding papers. |
| embedding_config | None | Optional configuration for embedding. |
| temperature | 0.0 | Temperature for LLMs. |
| batch_size | 1 | Batch size for calling LLMs. |
| texts_index_mmr_lambda | 1.0 | Lambda for MMR in text index. |
| verbosity | 0 | Integer verbosity level for logging (0-3). 3 = all LLM/Embeddings calls logged. |
| custom_context_serializer | None | Custom async function (see typing for signature) to override the default answer context serialization. |
| answer.evidence_k | 10 | Number of evidence pieces to retrieve. |
| answer.evidence_retrieval | True | Use retrieval vs processing all docs. |
| answer.evidence_summary_length | "about 100 words" | Length of evidence summary. |
| answer.evidence_skip_summary | False | Whether to skip summarization. |
| answer.evidence_text_only_fallback | False | Whether to allow context creation to retry without media present. |
| answer.answer_max_sources | 5 | Max number of sources for an answer. |
| answer.max_answer_attempts | None | Max attempts to generate an answer. |
| answer.answer_length | "about 200 words, but can be longer" | Length of final answer. |
| answer.max_concurrent_requests | 4 | Max concurrent requests to LLMs. |
| answer.answer_filter_extra_background | False | Whether to cite background info from model. |
| answer.get_evidence_if_no_contexts | True | Allow lazy evidence gathering. |
| answer.group_contexts_by_question | False | Groups the final contexts by the underlying gather_evidence question in the final context prompt. |
| answer.evidence_relevance_score_cutoff | 1 | Cutoff evidence relevance score to include in the answer context (inclusive). |
| answer.skip_evidence_citation_strip | False | Skip removal of citations from the gather_evidence contexts. |
| parsing.page_size_limit | 1,280,000 | Character limit per page. |
| parsing.use_doc_details | True | Whether to get metadata details for docs. |
| parsing.reader_config | dict | Optional keyword arguments for the document reader. |
| parsing.multimodal | True | Control to parse both text and media from applicable documents, as well as potentially enriching them with text descriptions. |
| parsing.defer_embedding | False | Whether to defer embedding until summarization. |
| parsing.parse_pdf | paperqa_pypdf.parse_pdf_to_pages | Function to parse PDF files. |
| parsing.configure_pdf_parser | No-op | Callable to configure the PDF parser within parse_pdf, useful for behaviors such as enabling logging. |
| parsing.doc_filters | None | Optional filters for allowed documents. |
| parsing.use_human_readable_clinical_trials | False | Parse clinical trial JSONs into readable text. |
| parsing.enrichment_llm | "gpt-4o-2024-11-20" | LLM for media enrichment. |
| parsing.enrichment_llm_config | None | Optional configuration for enrichment_llm. |
| parsing.enrichment_page_radius | 1 | Page radius for context text in enrichment. |
| parsing.enrichment_prompt | image_enrichment_prompt_template | Prompt template for enriching media. |
| parsing.citation_prompt | citation_prompt | Prompt to create citation from peeking one chunk. |
| parsing.structured_citation_prompt | structured_citation_prompt | Prompt to create a citation (in JSON) from peeking one chunk. |
| parsing.disable_doc_valid_check | False | Flag to disable checking if a document looks like text (was parsed correctly). |
| prompts.summary | summary_prompt | Template for summarizing text, must contain variables matching summary_prompt. |
| prompts.qa | qa_prompt | Template for QA, must contain variables matching qa_prompt. |
| prompts.select | select_paper_prompt | Template for selecting papers, must contain variables matching select_paper_prompt. |
| prompts.pre | None | Optional pre-prompt templated with just the original question to append information before a qa prompt. |
| prompts.post | None | Optional post-processing prompt that can access PQASession fields. |
| prompts.system | default_system_prompt | System prompt for the model. |
| prompts.use_json | True | Whether to use JSON formatting. |
| prompts.summary_json | summary_json_prompt | JSON-specific summary prompt. |
| prompts.summary_json_system | summary_json_system_prompt | System prompt for JSON summaries. |
| prompts.context_outer | CONTEXT_OUTER_PROMPT | Prompt for how to format all contexts in generate answer. |
| prompts.context_inner | CONTEXT_INNER_PROMPT | Prompt for how to format a single context in generate answer. Must contain 'name' and 'text' variables. |
| prompts.answer_iteration_prompt | answer_iteration_prompt_template | Prompt to inject existing prior answers to allow iteration. Default injects no prior answers. |
| agent.agent_llm | "gpt-4o-2024-11-20" | LLM inside the agent making tool selections. |
| agent.agent_llm_config | None | Optional configuration for agent_llm. |
| agent.agent_type | "ToolSelector" | Type of agent to use. |
| agent.agent_config | None | Optional kwarg for agent constructor. |
| agent.agent_system_prompt | env_system_prompt | Optional system prompt message. |
| agent.agent_prompt | env_reset_prompt | Agent prompt. |
| agent.return_paper_metadata | False | Whether to include paper title/year in search tool results. |
| agent.search_count | 8 | Search count. |
| agent.timeout | 500.0 | Timeout on agent execution (seconds). |
| agent.tool_names | None | Optional override on tools to provide the agent. |
| agent.max_timesteps | None | Optional upper limit on environment steps. |
| agent.agent_evidence_n | 1 | Top n ranked evidences shown to the agent after gathering evidence. |
| agent.rebuild_index | True | Flag to rebuild the index at the start of agent runners. |
| agent.callbacks | {} | Named lists of callables to be invoked with environment state. |
| agent.index.name | None | Optional name of the index. |
| agent.index.paper_directory | Current working directory | Directory containing papers to be indexed. |
| agent.index.manifest_file | None | Path to manifest CSV with document attributes. |
| agent.index.index_directory | pqa_directory("indexes") | Directory to store PQA indexes. |
| agent.index.use_absolute_paper_directory | False | Whether to use absolute paper directory path. |
| agent.index.recurse_subdirectories | True | Whether to recurse into subdirectories when indexing. |
| agent.index.concurrency | 5 | Number of concurrent filesystem reads. |
| agent.index.sync_with_paper_directory | True | Whether to sync index with paper directory on load. |
| agent.index.batch_size | 1 | Number of files to process before committing to the index. |
| agent.index.files_filter | lambda f: f.suffix in {...} | Filter function to mark files in the paper directory to index. |

Where do I get papers?

Well that's a really good question! It's probably best to just download PDFs of papers you think will help answer your question and start from there.

See the detailed docs about Zotero, OpenReview, and parsing.

Callbacks

To execute a function on each chunk of LLM completions, provide a callback that will be called on each chunk. For example, to get a typewriter view of the completions, you can do:
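
```python
# A sketch, assuming query accepts a callbacks list (file name and question are placeholders)
from paperqa import Docs


def typewriter(chunk: str) -> None:
    print(chunk, end="")


docs = Docs()
docs.add("myfile.pdf")
session = docs.query(
    "Has anyone designed neural networks that compute with proteins or DNA?",
    callbacks=[typewriter],
)
```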

Caching Embeddings

In general, embeddings are cached when you pickle a Docs regardless of what vector store you use. So as long as you save your underlying Docs object, you should be able to avoid re-embedding your documents.

Customizing Prompts

You can customize any of the prompts using settings.
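
A sketch of overriding the QA prompt (the template must keep the question and context variables; the question asked is a placeholder):

```python
from paperqa import Docs, Settings

my_qa_prompt = (
    "Answer the question '{question}'\n"
    "Use the context below if helpful. "
    "You can cite the context using the key like (Example2012). "
    "If there is insufficient context, write a poem "
    "about how you cannot answer.\n\n"
    "Context: {context}"
)
settings = Settings()
settings.prompts.qa = my_qa_prompt

docs = Docs()
session = docs.query("Are covid-19 vaccines effective?", settings=settings)
```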

Pre and Post Prompts

Following the syntax above, you can also include prompts that are executed before and after the query. For example, you can use this to critique the answer.
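
A sketch of a post-prompt; the assumption is that it can be templated with PQASession fields such as question and answer:

```python
from paperqa import Settings

settings = Settings()
settings.prompts.post = (
    "We are trying to answer the question: {question}\n"
    "Critique the following answer and point out any weaknesses:\n{answer}"
)
```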

FAQ

How come I get different results than your papers?

Internally at FutureHouse, we have a slightly different set of tools. We're trying to get some of them, like citation traversal, into this repo. However, we have APIs and licenses to access research papers that we cannot share openly. Similarly, in our research papers' results we do not start with the known relevant PDFs. Our agent has to identify them using keyword search over all papers, rather than just a subset. We're gradually aligning these two versions of PaperQA, but until there is an open-source way to freely access papers (even just open source papers) you will need to provide PDFs yourself.

How is this different from LlamaIndex or LangChain?

LangChain and LlamaIndex are both frameworks for working with LLM applications, with abstractions made for agentic workflows and retrieval augmented generation.

Over time, the PaperQA team chose to become framework-agnostic, instead outsourcing LLM drivers to LiteLLM and using no framework besides Pydantic for its tools. PaperQA focuses on scientific papers and their metadata.

PaperQA can be reimplemented using either LlamaIndex or LangChain. For example, our GatherEvidence tool can be reimplemented as a retriever with an LLM-based re-ranking and contextual summary. There is similar work with the tree response method in LlamaIndex.

Can I save or load?

The Docs class can be pickled and unpickled. This is useful if you want to save the embeddings of the documents and then load them later.
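
For example (file name is a placeholder):

```python
import pickle

from paperqa import Docs

docs = Docs()
# ... add documents ...

with open("my_docs.pkl", "wb") as f:
    pickle.dump(docs, f)

with open("my_docs.pkl", "rb") as f:
    docs = pickle.load(f)
```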

Reproduction

Contained in docs/2024-10-16_litqa2-splits.json5 are the question IDs used in train, evaluation, and test splits, as well as paper DOIs used to build the splits' indexes.

There are multiple papers slowly building PaperQA, shown below in Citation. To reproduce:

  • skarlinski2024language: train and eval splits are applicable. The test split remains held out.

  • narayanan2024aviarytraininglanguageagents: train, eval, and test splits are applicable.

An example of how to use LitQA for evaluation can be found in aviary.litqa.

Citation

Please read and cite the following papers if you use this software:
