API Reference
Base URL: https://scrapen.run
Getting Started
- Sign up at scrapen.com
- Create an API key in the dashboard
- Make your first request
```sh
curl -X POST https://scrapen.run/v1/scrape \
  -H "Authorization: Bearer sc_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://x.com/LLMJunky/status/2036239240300818751"}'
```
Node.js Quickstart
The fetch API is built into Node 18+. No packages needed. Create a file called `scrape.mjs` and run it with `node scrape.mjs`.
```sh
npm init -y
```

```js
// scrape.mjs
const res = await fetch('https://scrapen.run/v1/scrape', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sc_live_your_key_here',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://x.com/LLMJunky/status/2036239240300818751',
  }),
});

const data = await res.json();
console.log(data);
```
Python Quickstart
Install the requests library, then create a file called `scrape.py`. Run it with `python scrape.py`.
```sh
pip install requests
```

```python
# scrape.py
import requests

res = requests.post(
    "https://scrapen.run/v1/scrape",
    headers={"Authorization": "Bearer sc_live_your_key_here"},
    json={"url": "https://x.com/LLMJunky/status/2036239240300818751"},
)
print(res.json())
```
Authentication
All API requests require an API key passed in the Authorization header:
```http
Authorization: Bearer sc_live_your_key_here
```
API keys are created in the dashboard. Keys start with sc_live_.
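A common pattern is to keep the key out of source control by reading it from an environment variable. The variable name `SCRAPEN_API_KEY` below is an illustrative choice, not something the API mandates:

```python
import os

def auth_headers(env_var: str = "SCRAPEN_API_KEY") -> dict:
    """Build the Authorization header from an environment variable."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} to your sc_live_ API key")
    return {"Authorization": f"Bearer {key}"}
```

Pass the result as the `headers` argument to any request against the API.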
POST /v1/scrape

Scrape a URL and get structured data back.
Parameters
| Field | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | The URL to scrape |
Response
| Field | Type | Description |
|---|---|---|
| requestId | string | Unique identifier for this request |
| platform | string | Detected platform (e.g. twitter, reddit) |
| url | string | The URL that was scraped |
| items | object[] | Array of scraped items |
| items[].type | string | Item type (post, comment, etc.) |
| items[].externalId | string | Platform-specific ID |
| items[].parentId | string \| null | Parent item ID for replies |
| items[].author | string | Author username |
| items[].text | string | Item text content |
| items[].createdAt | string | ISO 8601 timestamp |
| items[].metrics | object | Engagement metrics (likes, shares, etc.) |
Request
```sh
curl -X POST https://scrapen.run/v1/scrape \
  -H "Authorization: Bearer sc_live_..." \
  -H "Content-Type: application/json" \
  -d '{"url": "https://x.com/LLMJunky/status/2036239240300818751"}'
```
Response
```json
{
  "requestId": "req_abc123",
  "platform": "twitter",
  "url": "https://x.com/LLMJunky/status/2036239240300818751",
  "items": [
    {
      "type": "post",
      "externalId": "2036239240300818751",
      "parentId": null,
      "author": "LLMJunky",
      "text": "...",
      "createdAt": "2026-03-15T10:30:00Z",
      "metrics": {
        "likes": 639,
        "retweets": 39,
        "replies": 57
      }
    }
  ]
}
```
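Because `items` is a flat array where replies point at their parent via `parentId`, a client that wants a thread view has to group the items itself. A minimal sketch of that client-side grouping (the API returns the flat list only):

```python
def thread_items(response: dict) -> dict:
    """Group scrape items by parentId; key None holds top-level posts."""
    children: dict = {}
    for item in response["items"]:
        children.setdefault(item["parentId"], []).append(item)
    return children

# Sample payload shaped like the /v1/scrape response above.
sample = {
    "requestId": "req_abc123",
    "platform": "twitter",
    "items": [
        {"type": "post", "externalId": "2036239240300818751",
         "parentId": None, "author": "LLMJunky", "text": "...",
         "createdAt": "2026-03-15T10:30:00Z",
         "metrics": {"likes": 639, "retweets": 39, "replies": 57}},
    ],
}
top_level = thread_items(sample)[None]  # items with no parent are roots
```

To render a thread, recurse from each root through `children[item["externalId"]]`.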
GET /v1/usage

Get your API usage for the current billing period.
Response
| Field | Type | Description |
|---|---|---|
| period | object | Current billing period |
| period.start | string | Period start (ISO 8601) |
| period.end | string | Period end (ISO 8601) |
| creditsUsed | number | Credits consumed this period |
| creditsLimit | number | Plan credit limit |
| creditsRemaining | number | Plan credits remaining |
| purchasedCredits | number | Extra purchased credits |
| totalRemaining | number | Plan + purchased credits remaining |
| requests | number | Total requests this period |
Request
```sh
curl https://scrapen.run/v1/usage \
  -H "Authorization: Bearer sc_live_..."
```
Response
```json
{
  "period": {
    "start": "2026-03-01T00:00:00Z",
    "end": "2026-04-01T00:00:00Z"
  },
  "creditsUsed": 1250,
  "creditsLimit": 19000,
  "creditsRemaining": 17750,
  "purchasedCredits": 2000,
  "totalRemaining": 19750,
  "requests": 1180
}
```
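The derived fields appear to follow from the others: `creditsRemaining` is the plan limit minus usage, and `totalRemaining` adds purchased credits on top. That relationship is inferred from the example response, not stated by the docs, so treat this as a sanity-check sketch:

```python
def remaining_credits(usage: dict) -> tuple:
    """Recompute the derived remaining-credit fields from a usage payload."""
    plan_left = usage["creditsLimit"] - usage["creditsUsed"]
    total_left = plan_left + usage["purchasedCredits"]
    return plan_left, total_left

usage = {"creditsUsed": 1250, "creditsLimit": 19000, "purchasedCredits": 2000}
plan_left, total_left = remaining_credits(usage)
# 19000 - 1250 = 17750 plan credits; + 2000 purchased = 19750 total,
# matching creditsRemaining and totalRemaining in the example response.
```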
GET /v1/platforms

List all supported platforms. No authentication required.
Response
| Field | Type | Description |
|---|---|---|
| platforms | object[] | List of supported platforms |
| platforms[].name | string | Platform identifier |
| platforms[].domains | string[] | Accepted domain patterns |
| platforms[].status | string | Platform status (active, beta) |
| platforms[].docsUrl | string | Platform-specific documentation URL |
Request
```sh
curl https://scrapen.run/v1/platforms
```
Response
```json
{
  "platforms": [
    {
      "name": "twitter",
      "domains": ["x.com", "twitter.com"],
      "status": "active",
      "docsUrl": "..."
    },
    {
      "name": "reddit",
      "domains": ["reddit.com"],
      "status": "active",
      "docsUrl": "..."
    }
  ]
}
```
Errors
The API returns errors in Problem Details format:
```json
{
  "type": "https://api.scrapen.dev/errors/rate_limited",
  "status": 429,
  "code": "rate_limited",
  "message": "Rate limit exceeded",
  "requestId": "req_abc123",
  "docUrl": "https://docs.scrapen.dev/errors/rate_limited",
  "retryable": true
}
```
Status Codes
| Status | Code | Description |
|---|---|---|
| 400 | bad_request | Invalid URL or parameters |
| 401 | unauthorized | Invalid or missing API key |
| 402 | no_credits | Credit balance exhausted |
| 429 | rate_limited | Rate limit exceeded |
| 500 | internal_error | Internal server error |
Rate Limits
Rate limits are enforced per API key:
| Plan | Rate | Concurrency |
|---|---|---|
| Free | 10 req/min | 1 |
| Pro | 60 req/min | 10 |
Rate Limit Headers
```http
RateLimit-Policy: 60;w=60
RateLimit: limit=60, remaining=58, reset=42
Retry-After: 2
```
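The `RateLimit` header value is a comma-separated list of `key=value` items, so it parses with a few lines of string handling. A minimal sketch:

```python
def parse_ratelimit(value: str) -> dict:
    """Parse 'limit=60, remaining=58, reset=42' into a dict of ints."""
    parts = (item.strip().split("=") for item in value.split(","))
    return {key: int(val) for key, val in parts}

info = parse_ratelimit("limit=60, remaining=58, reset=42")
# When info["remaining"] reaches 0, sleep info["reset"] seconds (or honor
# Retry-After on a 429) before sending the next request.
```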