# Papers API Reference
This document describes the HTTP API endpoints available when running Papers in server mode.
## Running the Server

Start the server using:

```shell
papers -serve -port 8080
```

The server will listen on the specified port (default: 8080).
## Important Notes

### CORS
CORS is enabled on the server with the following configuration:

- Allowed Origins: all origins (`*`) in development
- Allowed Methods: `GET`, `POST`, `OPTIONS`
- Allowed Headers: `Accept`, `Authorization`, `Content-Type`
- Credentials: not allowed
- Max Age: 300 seconds

Note: In production, you should restrict allowed origins to your specific domain(s).
### Authentication
No authentication is required beyond the API key for LLM processing. The API key should be included in the request body for processing endpoints.
### Timing Considerations
- Initial paper search: typically < 5 seconds
- Processing time: up to 30 minutes for large batches
- Job status polling: recommended interval is 15 seconds
- LLM rate limiting: 2-second delay between requests
## Endpoints
### Search Papers

`POST /api/papers/search`

Search for papers on arXiv based on date range and query.
Request Body:

```json
{
  "start_date": "20240101",     // Required: start date in YYYYMMDD format
  "end_date": "20240131",       // Required: end date in YYYYMMDD format
  "query": "machine learning",  // Required: search query
  "max_results": 5              // Optional: maximum number of results (1-2000, default: 100)
}
```
Response:

```json
[
  {
    "title": "Paper Title",
    "abstract": "Paper Abstract",
    "arxiv_id": "2401.12345"
  }
]
```
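The request body above can be built and sanity-checked client-side before sending. A minimal Python sketch — the field names and constraints come from this document; the helper itself is illustrative and not part of the API:

```python
import json
import re


def build_search_request(start_date, end_date, query, max_results=100):
    """Build the JSON body for POST /api/papers/search.

    Dates must be in YYYYMMDD format; max_results must be in 1-2000.
    """
    for name, value in (("start_date", start_date), ("end_date", end_date)):
        if not re.fullmatch(r"\d{8}", value):
            raise ValueError(f"{name} must be in YYYYMMDD format, got {value!r}")
    if not query:
        raise ValueError("query is required")
    if not 1 <= max_results <= 2000:
        raise ValueError("max_results must be between 1 and 2000")
    return json.dumps({
        "start_date": start_date,
        "end_date": end_date,
        "query": query,
        "max_results": max_results,
    })


body = build_search_request("20240101", "20240131", "machine learning", 5)
```

The resulting string can be passed directly as the `-d` argument to `curl`, or as the request body of any HTTP client.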
### Process Papers

`POST /api/papers/process`

Process papers using the specified LLM model. Papers can be provided either directly in the request or by referencing a JSON file.
Request Body:

```json
{
  // Option 1: direct paper data
  "papers": [                      // Optional: array of papers
    {
      "title": "Paper Title",
      "abstract": "Paper Abstract",
      "arxiv_id": "2401.12345"
    }
  ],

  // Option 2: file reference
  "input_file": "papers.json",     // Optional: path to input JSON file

  // Criteria (exactly one is required)
  "criteria": "Accepted papers MUST:\n* primarily address LLMs...", // Optional: direct criteria text
  "criteria_file": "criteria.md",  // Optional: path to criteria markdown file

  // Required fields
  "api_key": "your-key",           // Required: API key for LLM service

  // Optional fields
  "model": "phi-4"                 // Optional: model to use (default: phi-4)
}
```
Notes:

- Either `papers` or `input_file` must be provided, but not both
- Either `criteria` or `criteria_file` must be provided, but not both
Response:

```json
{
  "job_id": "job-20240129113500"
}
```
The endpoint returns immediately with a job ID. Use this ID with the job status endpoint to check progress and get results.
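The either/or rules above can be enforced client-side before submitting. A small Python sketch — the field names come from this document; the helper is illustrative, not part of the API:

```python
def validate_process_request(body: dict) -> None:
    """Check the exclusive-or constraints for POST /api/papers/process.

    Raises ValueError if the body would be rejected by the server.
    """
    # Exactly one of "papers" / "input_file" must be present.
    if ("papers" in body) == ("input_file" in body):
        raise ValueError("provide exactly one of 'papers' or 'input_file'")
    # Exactly one of "criteria" / "criteria_file" must be present.
    if ("criteria" in body) == ("criteria_file" in body):
        raise ValueError("provide exactly one of 'criteria' or 'criteria_file'")
    if "api_key" not in body:
        raise ValueError("'api_key' is required")
```

Checking locally gives a clearer error message than a generic 400 response from the server.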
### Search and Process Papers

`POST /api/papers/search-process`

Combined endpoint to search for papers and process them in one request. This endpoint automatically saves the papers to a file and processes them.
Request Body:

```json
{
  "start_date": "20240101",       // Required: start date in YYYYMMDD format
  "end_date": "20240131",         // Required: end date in YYYYMMDD format
  "query": "machine learning",    // Required: search query
  "max_results": 5,               // Optional: maximum number of results (1-2000, default: 100)

  // Criteria (exactly one is required)
  "criteria": "Accepted papers MUST:\n* primarily address LLMs...", // Optional: direct criteria text
  "criteria_file": "criteria.md", // Optional: path to criteria markdown file

  // Required fields
  "api_key": "your-key",          // Required: API key for LLM service

  // Optional fields
  "model": "phi-4"                // Optional: model to use (default: phi-4)
}
```
Response:

```json
{
  "job_id": "job-20240101-20240131-machine_learning"
}
```
### Get Job Status

`GET /api/jobs/{jobID}`

Check the status of a processing job and retrieve results when complete.
Response:

```json
{
  "id": "job-20240129113500",
  "status": "completed",                  // "pending", "processing", "completed", or "failed"
  "start_time": "2024-01-29T11:35:00Z",
  "error": "",                            // Error message if status is "failed"
  "markdown_text": "# Results\n..."       // Full markdown content when completed
}
```
## Processing Flow

1. Submit a processing request (either `/api/papers/process` or `/api/papers/search-process`)
2. Receive a job ID immediately
3. Poll the job status endpoint until the job is completed
4. When completed, the markdown content will be in the `markdown_text` field of the response
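The polling step can be sketched in Python. The fetch function is injected so the loop is easy to test; a real client would replace the stub with an HTTP `GET` against `/api/jobs/{jobID}`. The 15-second default matches the recommended polling interval above; the status values come from the job status response:

```python
import time


def wait_for_job(fetch_status, job_id, interval=15, timeout=1800, sleep=time.sleep):
    """Poll a job until it completes or fails, returning its markdown text.

    fetch_status(job_id) must return a dict shaped like the job status
    response: {"status": ..., "markdown_text": ..., "error": ...}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status(job_id)
        if job["status"] == "completed":
            return job["markdown_text"]
        if job["status"] == "failed":
            raise RuntimeError(job.get("error") or "job failed")
        sleep(interval)  # "pending" or "processing": wait and retry
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")


# Stubbed example: the job completes on the third poll.
responses = iter([
    {"status": "pending"},
    {"status": "processing"},
    {"status": "completed", "markdown_text": "# Results"},
])
text = wait_for_job(lambda _id: next(responses), "job-123", sleep=lambda s: None)
```

The 30-minute default timeout mirrors the maximum processing time noted below.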
Example workflow:

```shell
# 1. Submit processing request
curl -X POST -H "Content-Type: application/json" -d '{
  "start_date": "20240101",
  "end_date": "20240131",
  "query": "machine learning",
  "criteria": "Accepted papers MUST:\n* primarily address LLMs...",
  "api_key": "your-key",
  "model": "phi-4"
}' http://localhost:8080/api/papers/search-process
# Response: {"job_id": "job-20240101-20240131-machine_learning"}

# 2. Check job status and get results
curl http://localhost:8080/api/jobs/job-20240101-20240131-machine_learning
# Response when completed:
# {
#   "id": "job-20240101-20240131-machine_learning",
#   "status": "completed",
#   "start_time": "2024-01-29T11:35:00Z",
#   "markdown_text": "# Results\n\n## Accepted Papers\n\n1. Paper Title..."
# }

# 3. Save markdown to file (example using jq)
# The -r flag is important to get raw output without JSON escaping
curl http://localhost:8080/api/jobs/job-20240101-20240131-machine_learning | jq -r '.markdown_text' > results.md

# Alternative using Python (handles JSON escaping)
curl http://localhost:8080/api/jobs/job-20240101-20240131-machine_learning | python3 -c '
import json, sys
response = json.load(sys.stdin)
if response.get("status") == "completed":
    with open("results.md", "w") as f:
        f.write(response["markdown_text"])
'
```
Note: Processing can take up to 30 minutes depending on the number of papers and LLM response times. Poll the job status endpoint periodically to check progress (every 15 seconds, per the recommended interval above).
## Error Responses
All endpoints return appropriate HTTP status codes:
- 200: Success
- 400: Bad Request (invalid parameters)
- 500: Internal Server Error
Error responses include a message explaining the error:

```json
{
  "error": "Invalid date format"
}
```
## Rate Limiting
The server includes built-in rate limiting for LLM API requests (2-second delay between requests) to prevent overwhelming the LLM service.
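A minimum-interval delay like this can be implemented with a simple time-based limiter. A minimal Python sketch — illustrative only; the server's actual implementation may differ:

```python
import time


class MinIntervalLimiter:
    """Enforce a minimum delay between successive calls."""

    def __init__(self, interval=2.0, clock=time.monotonic, sleep=time.sleep):
        self.interval = interval
        self.clock = clock    # injectable for testing
        self.sleep = sleep    # injectable for testing
        self._last = None

    def wait(self):
        """Block until at least `interval` seconds have passed since the last call."""
        now = self.clock()
        if self._last is not None:
            remaining = self.interval - (now - self._last)
            if remaining > 0:
                self.sleep(remaining)
        self._last = self.clock()


limiter = MinIntervalLimiter(interval=2.0)
# Call limiter.wait() immediately before each LLM API request.
```

Injecting the clock and sleep functions keeps the limiter deterministic under test while defaulting to real time in production.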