Downloading historical trade data is essential for backtesting strategies, building analytics dashboards, and conducting market research. Mobula’s /api/2/trades/filters endpoint makes it easy to batch download trades across any blockchain with efficient cursor-based pagination.
This cookbook walks you through downloading large volumes of trade data efficiently.
## Quick Start

Here's the simplest way to download trades for a 1-hour window:

```bash
curl -X GET "https://api.mobula.io/api/2/trades/filters?blockchain=solana&from=1706745600000&to=1706749200000&limit=5000" \
  -H "Authorization: YOUR_API_KEY"
```
## Understanding the Endpoint

### Key Parameters

| Parameter | Required | Description |
|---|---|---|
| `from` | Yes | Start timestamp (Unix ms or ISO 8601) |
| `to` | Yes | End timestamp (Unix ms or ISO 8601) |
| `blockchain` | No | Filter by chain (e.g., `solana`, `base`) |
| `tokenAddress` | No | Filter by token address |
| `limit` | No | Results per page (max: 5000, default: 1000) |
| `cursor` | No | Pagination cursor from previous response |
| `sortOrder` | No | `asc` (default) or `desc` |

### Constraints

- Maximum timeframe: 1 hour per request
- Maximum results: 5000 per request
- Use cursor pagination to fetch more results within a timeframe
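As a quick sanity check before kicking off a download, you can compute the minimum number of requests a period will need. This is a sketch, not part of the API: hours containing more than 5000 trades will require extra paginated requests on top of this lower bound.

```python
import math

HOUR_MS = 3600000  # one hour in milliseconds

def min_requests(start_ms: int, end_ms: int) -> int:
    """Lower bound on request count: one request per 1-hour window."""
    return math.ceil((end_ms - start_ms) / HOUR_MS)

# A 24-hour period needs at least 24 requests
print(min_requests(0, 24 * HOUR_MS))
```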
## Batch Download Strategy

### Step 1: Break Down Your Timeframe

Since each request is limited to a 1-hour window, split your target period into 1-hour chunks:

```typescript
const HOUR_MS = 60 * 60 * 1000;

function getTimeChunks(startMs: number, endMs: number): { from: number; to: number }[] {
  const chunks: { from: number; to: number }[] = [];
  let current = startMs;
  while (current < endMs) {
    chunks.push({
      from: current,
      to: Math.min(current + HOUR_MS, endMs)
    });
    current += HOUR_MS;
  }
  return chunks;
}

// Example: get chunks for a 24-hour period
const startTime = Date.now() - 24 * HOUR_MS;
const endTime = Date.now();
const chunks = getTimeChunks(startTime, endTime);
console.log(`Need to make ${chunks.length} time-based requests`);
```
### Step 2: Paginate Within Each Chunk

Each 1-hour chunk may contain more than 5000 trades. Use cursor pagination to fetch them all:

```typescript
// Trade is the trade object returned by the API (see "Trade Object Structure" below)
async function fetchAllTradesForChunk(
  from: number,
  to: number,
  blockchain?: string,
  tokenAddress?: string
): Promise<Trade[]> {
  const allTrades: Trade[] = [];
  let cursor: string | null = null;

  do {
    const params = new URLSearchParams({
      from: from.toString(),
      to: to.toString(),
      limit: '5000',
      sortOrder: 'asc'
    });
    if (blockchain) params.set('blockchain', blockchain);
    if (tokenAddress) params.set('tokenAddress', tokenAddress);
    if (cursor) params.set('cursor', cursor);

    const response = await fetch(
      `https://api.mobula.io/api/2/trades/filters?${params}`,
      { headers: { Authorization: 'YOUR_API_KEY' } }
    );
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }
    const data = await response.json();

    allTrades.push(...data.data);
    cursor = data.pagination.nextCursor;
    console.log(`Fetched ${data.data.length} trades, total: ${allTrades.length}`);
  } while (cursor);

  return allTrades;
}
```
### Step 3: Complete Download Pipeline

Combine time chunking and pagination for a complete solution:

```typescript
async function downloadAllTrades(
  startMs: number,
  endMs: number,
  blockchain?: string,
  tokenAddress?: string
): Promise<Trade[]> {
  const chunks = getTimeChunks(startMs, endMs);
  const allTrades: Trade[] = [];

  for (let i = 0; i < chunks.length; i++) {
    const chunk = chunks[i];
    console.log(`Processing chunk ${i + 1}/${chunks.length}: ${new Date(chunk.from).toISOString()}`);

    const trades = await fetchAllTradesForChunk(
      chunk.from,
      chunk.to,
      blockchain,
      tokenAddress
    );
    allTrades.push(...trades);

    // Rate limiting: wait between chunks
    if (i < chunks.length - 1) {
      await new Promise(resolve => setTimeout(resolve, 100));
    }
  }

  console.log(`Download complete: ${allTrades.length} total trades`);
  return allTrades;
}

// Example: download 24 hours of Solana trades
const trades = await downloadAllTrades(
  Date.now() - 24 * 60 * 60 * 1000,
  Date.now(),
  'solana'
);
```
## Python Example

```python
import time
from datetime import datetime

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.mobula.io/api/2/trades/filters"
HOUR_MS = 3600000

def get_time_chunks(start_ms: int, end_ms: int) -> list:
    chunks = []
    current = start_ms
    while current < end_ms:
        chunks.append({
            "from": current,
            "to": min(current + HOUR_MS, end_ms)
        })
        current += HOUR_MS
    return chunks

def fetch_trades_chunk(from_ms: int, to_ms: int, blockchain: str = None) -> list:
    all_trades = []
    cursor = None
    while True:
        params = {
            "from": from_ms,
            "to": to_ms,
            "limit": 5000,
            "sortOrder": "asc"
        }
        if blockchain:
            params["blockchain"] = blockchain
        if cursor:
            params["cursor"] = cursor
        response = requests.get(
            BASE_URL,
            params=params,
            headers={"Authorization": API_KEY}
        )
        response.raise_for_status()
        data = response.json()
        all_trades.extend(data["data"])
        cursor = data["pagination"]["nextCursor"]
        print(f"Fetched {len(data['data'])} trades, total: {len(all_trades)}")
        if not cursor:
            break
    return all_trades

def download_all_trades(start_ms: int, end_ms: int, blockchain: str = None) -> list:
    chunks = get_time_chunks(start_ms, end_ms)
    all_trades = []
    for i, chunk in enumerate(chunks):
        print(f"Processing chunk {i+1}/{len(chunks)}")
        trades = fetch_trades_chunk(chunk["from"], chunk["to"], blockchain)
        all_trades.extend(trades)
        time.sleep(0.1)  # small delay between chunks to respect rate limits
    print(f"Download complete: {len(all_trades)} total trades")
    return all_trades

# Example: download 6 hours of Base chain trades
end_time = int(datetime.now().timestamp() * 1000)
start_time = end_time - (6 * HOUR_MS)
trades = download_all_trades(start_time, end_time, "base")
```
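For large downloads you may not want to hold every trade in memory. A minimal sketch (the file name and JSON Lines format are illustrative choices, not part of the API) that appends each fetched batch to disk as it arrives:

```python
import json

def append_trades_jsonl(trades: list, path: str) -> None:
    """Append a batch of trade dicts to a JSON Lines file, one trade per line."""
    with open(path, "a", encoding="utf-8") as f:
        for trade in trades:
            f.write(json.dumps(trade) + "\n")

# Usage sketch: persist each chunk instead of accumulating in memory
# for chunk in get_time_chunks(start_time, end_time):
#     batch = fetch_trades_chunk(chunk["from"], chunk["to"], "base")
#     append_trades_jsonl(batch, "trades.jsonl")
```

JSON Lines keeps the file append-friendly and lets you stream it back later one record at a time.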
## Working with the Response Data

### Trade Object Structure

Each trade includes comprehensive data:

```json
{
  "id": "2847563921",
  "operation": "regular",
  "type": "buy",
  "baseTokenAmount": 150.25,
  "baseTokenAmountRaw": "150250000",
  "baseTokenAmountUSD": 150.25,
  "quoteTokenAmount": 1.5,
  "quoteTokenAmountRaw": "1500000000",
  "quoteTokenAmountUSD": 150.25,
  "date": 1706745612345,
  "blockchain": "Solana",
  "transactionHash": "5xKp...abc123",
  "marketAddress": "8xKp...pool123",
  "baseToken": {
    "address": "EPjFWdd5...",
    "name": "USD Coin",
    "symbol": "USDC",
    "logo": "https://...",
    "decimals": 6
  },
  "quoteToken": {
    "address": "So111111...",
    "name": "Wrapped SOL",
    "symbol": "SOL",
    "logo": "https://...",
    "decimals": 9
  },
  "platform": { "id": "raydium", "name": "Raydium" },
  "totalFeesUSD": 0.15
}
```
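The `*Raw` fields are integer strings denominated in the token's smallest unit; to recover the human-readable amount, divide by `10 ** decimals` from the corresponding token object. A small sketch using `Decimal` to avoid floating-point precision loss:

```python
from decimal import Decimal

def raw_to_amount(raw: str, decimals: int) -> Decimal:
    """Convert a raw integer-string token amount to its decimal value."""
    return Decimal(raw) / (Decimal(10) ** decimals)

# From the trade object above: "150250000" raw USDC with 6 decimals is 150.25 USDC,
# and "1500000000" raw SOL with 9 decimals is 1.5 SOL.
```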
### Analytics Examples

Calculate total volume:

```typescript
const totalVolumeUSD = trades.reduce((sum, t) => sum + t.baseTokenAmountUSD, 0);
```

Group volume by token:

```typescript
const volumeByToken = trades.reduce((acc, t) => {
  const symbol = t.baseToken?.symbol || 'Unknown';
  acc[symbol] = (acc[symbol] || 0) + t.baseTokenAmountUSD;
  return acc;
}, {} as Record<string, number>);
```

Find the largest trades (copy the array first so `sort` doesn't mutate the original):

```typescript
const largestTrades = [...trades]
  .sort((a, b) => b.baseTokenAmountUSD - a.baseTokenAmountUSD)
  .slice(0, 10);
```
## Best Practices

- **Use ascending sort order** (`sortOrder=asc`) for batch downloads. This ensures consistent cursor-based pagination and chronological data ordering.
- **Respect rate limits:** add small delays between requests (100-200 ms) to avoid hitting rate limits, especially when downloading large datasets.
- **Base/quote token logic:** when you provide a `tokenAddress` filter, that token becomes the "base" token and the `type` field (buy/sell) is relative to it. Without a filter, `token0` from the pool is used as the base.
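The rate-limit advice above can be folded into a small retry helper. This is a sketch: the retried status codes (429 and 5xx) and backoff values are general HTTP conventions, not documented Mobula behavior.

```python
import time

import requests

def get_with_backoff(url: str, params: dict, headers: dict,
                     max_retries: int = 5, base_delay: float = 0.5) -> requests.Response:
    """GET with exponential backoff on HTTP 429 or 5xx responses."""
    response = None
    for attempt in range(max_retries):
        response = requests.get(url, params=params, headers=headers)
        if response.status_code == 429 or response.status_code >= 500:
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
            continue
        return response
    response.raise_for_status()  # give up after max_retries
    return response

# Usage sketch: drop-in replacement for requests.get in fetch_trades_chunk
# response = get_with_backoff(BASE_URL, params, {"Authorization": API_KEY})
```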
## Common Use Cases

1. **Backtesting Trading Strategies**: Download historical trades to simulate how your strategy would have performed.
2. **Market Analysis**: Analyze trading patterns, volume trends, and market maker activity.
3. **Building Leaderboards**: Track top traders by volume or profit across specific tokens or chains.
4. **DEX Analytics Dashboards**: Build real-time dashboards showing trading activity across multiple chains.
5. **Whale Tracking**: Identify large trades and track wallet activity for market intelligence.
## Get Started

Ready to download trade data?

- Get your free API key
- Read the API reference
- Join our Discord for support