Easy Integration
Scraper API is easy to integrate. Simply send a GET request to our API
endpoint with your API key and a URL (please consult our
documentation for more advanced use cases). We will automatically route
your request through our proxies, solve CAPTCHAs, retry on failures, and
return the page's HTML. Web scraping has never been easier!
Bash
Node
Python
PHP
Ruby
curl "http://api.scraperapi.com?key=YOURAPIKEY&url=http://httpbin.org/ip"
var request = require('request');
var url = 'http://httpbin.org/ip';

request(
  {
    method: 'GET',
    url: 'http://api.scraperapi.com/?key=YOURAPIKEY&url=' + url,
    headers: {
      Accept: 'application/json',
    },
  },
  function(error, response, body) {
    if (error) {
      return console.error('Error:', error);
    }
    console.log('Status:', response.statusCode);
    console.log('Response:', body);
  }
);
from urllib.request import Request, urlopen

url = 'http://httpbin.org/ip'
headers = {
    'Accept': 'application/json'
}
request = Request('http://api.scraperapi.com/?key=YOURAPIKEY&url=' + url, headers=headers)
response_body = urlopen(request).read()
print(response_body)
<?php
$ch = curl_init();
$url = "http://httpbin.org/ip";
curl_setopt($ch, CURLOPT_URL,
"http://api.scraperapi.com/?key=YOURAPIKEY&url=".$url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_HEADER, FALSE);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
"Accept: application/json"
));
$response = curl_exec($ch);
curl_close($ch);
var_dump($response);
require 'net/http'

url = 'http://httpbin.org/ip'
uri = URI.parse('http://api.scraperapi.com')
http = Net::HTTP.new(uri.host, uri.port)
response = http.get('/?key=YOURAPIKEY&url=' + url)
puts response.body
Leverage our infrastructure (tens of thousands of proxies, a full
browser cluster, and CAPTCHA-solving technology) to build reliable,
scalable web scrapers!
Never Get Blocked
One of the most frustrating parts of web scraping is constantly dealing
with IP bans and CAPTCHAs. Scraper API rotates IP addresses with each
request, drawing from a pool of thousands of proxies across 8 ISPs, and
automatically retries failed requests, so you will never be blocked.
Scraper API also automates CAPTCHA solving for you, so you can
concentrate on turning websites into actionable data.
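Even with retries handled on the API side, clients often add a thin retry loop of their own for transient failures. A minimal sketch in Python (the backoff policy and the set of statuses treated as retryable are assumptions for illustration, not part of the API):

```python
import time

# Assumed transient statuses worth retrying; adjust to your needs.
RETRYABLE = {429, 500, 502, 503}

def fetch_with_retries(fetch, attempts=3, delay=1.0):
    """Call fetch() until it returns a non-retryable status or attempts run out.

    fetch: a zero-argument callable returning (status_code, body),
           e.g. wrapping a GET to the API endpoint.
    """
    for attempt in range(attempts):
        status, body = fetch()
        if status not in RETRYABLE:
            return status, body
        time.sleep(delay * (2 ** attempt))  # simple exponential backoff
    return status, body
```

With the `requests` library, `fetch` could be as simple as a lambda that issues the GET and returns `(r.status_code, r.text)`.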
Easily Render JavaScript
No more messing around with Docker containers, dynamically spinning up
browser instances, and updating and patching headless browsers. Scraper API
handles all of this for you, so you can scrape JavaScript-rendered
pages simply by adding "&render=true" to your API call. It's really
that simple!
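Assembling the API URL with the render flag can be sketched in Python; the helper name and placeholders are illustrative, and `urlencode` also takes care of escaping the target URL:

```python
from urllib.parse import urlencode

def scraperapi_url(api_key, target_url, render=False):
    """Build a Scraper API request URL, URL-encoding the target page."""
    params = {'key': api_key, 'url': target_url}
    if render:
        # Ask the API to render the page in a headless browser first.
        params['render'] = 'true'
    return 'http://api.scraperapi.com/?' + urlencode(params)
```

Passing the result to any HTTP client then returns the fully rendered HTML instead of the raw page source.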