Python binding for the curl-impersonate fork via cffi: an HTTP client that can impersonate browser TLS/JA3 and HTTP/2 fingerprints. For commercial support, visit impersonate.pro.
Unlike other pure Python HTTP clients like httpx or requests, curl_cffi can impersonate browsers' TLS/JA3 and HTTP/2 fingerprints. If you are blocked by some website for no obvious reason, you can give curl_cffi a try.
Python 3.9 is the minimum supported version since v0.10.
Maintenance of this project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below, click here.
Scrape Google and other search engines from SerpApi's fast, easy, and complete API. 0.66s average response time (≤ 0.5s for Ludicrous Speed Max accounts), 99.95% SLAs, pay for successful responses only.
Yescaptcha is a proxy service that bypasses Cloudflare and uses an API interface to obtain verified cookies (e.g. cf_clearance). Click here to register: https://yescaptcha.com/i/stfnIO
CapSolver is an AI-powered tool that easily bypasses Captchas, allowing uninterrupted access to public data. It supports a variety of Captchas and works seamlessly with curl_cffi, Puppeteer, Playwright, and more. Fast, reliable, and cost-effective. Plus, curl_cffi users can use the code "CURL" to get an extra 6% balance! Register here.
Supports asyncio with proxy rotation on each request.

| | requests | aiohttp | httpx | pycurl | curl_cffi |
|---|---|---|---|---|---|
| http2 | ❌ | ❌ | ✅ | ✅ | ✅ |
| sync | ✅ | ❌ | ✅ | ✅ | ✅ |
| async | ❌ | ✅ | ✅ | ❌ | ✅ |
| websocket | ❌ | ✅ | ❌ | ❌ | ✅ |
| fingerprints | ❌ | ❌ | ❌ | ❌ | ✅ |
| speed | 🐇 | 🐇🐇 | 🐇 | 🐇🐇 | 🐇🐇 |
pip install curl_cffi --upgrade
This should work on Linux, macOS and Windows out of the box.
If it does not work on your platform, you may need to compile and install curl-impersonate first and set some environment variables like LD_LIBRARY_PATH.
To install beta releases:
pip install curl_cffi --upgrade --pre
To install unstable version from GitHub:
git clone https://github.com/lexiforest/curl_cffi/
cd curl_cffi
make preprocess
pip install .
curl_cffi comes with a low-level curl API and a high-level requests-like API.
v0.9:
from curl_cffi import requests
r = requests.get("https://tls.browserleaks.com/json", impersonate="chrome")
v0.10:
import curl_cffi
# Notice the impersonate parameter
r = curl_cffi.get("https://tls.browserleaks.com/json", impersonate="chrome")
print(r.json())
# output: {..., "ja3n_hash": "aa56c057ad164ec4fdcb7a5a283be9fc", ...}
# the ja3n fingerprint should be the same as the target browser
# To keep using the latest browser version as `curl_cffi` updates,
# simply set impersonate="chrome" without specifying a version.
# Other similar values are: "safari" and "safari_ios"
r = curl_cffi.get("https://tls.browserleaks.com/json", impersonate="chrome")
# Randomly choose a browser version based on current market share in real world
# from: https://caniuse.com/usage-table
# NOTE: this is a pro feature.
r = curl_cffi.get("https://example.com", impersonate="realworld")
# To pin a specific version, include the version number.
r = curl_cffi.get("https://tls.browserleaks.com/json", impersonate="chrome124")
# To impersonate other than browsers, bring your own ja3/akamai strings
# See examples directory for details.
r = curl_cffi.get("https://tls.browserleaks.com/json", ja3=..., akamai=...)
# http/socks proxies are supported
proxies = {"https": "http://localhost:3128"}
r = curl_cffi.get("https://tls.browserleaks.com/json", impersonate="chrome", proxies=proxies)
proxies = {"https": "socks://localhost:3128"}
r = curl_cffi.get("https://tls.browserleaks.com/json", impersonate="chrome", proxies=proxies)
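Proxy credentials can be embedded in the proxy URL using the standard user:pass@host:port form; the credentials and port below are made up for illustration:

```python
from urllib.parse import urlsplit

# Hypothetical credentials and port, for illustration only.
proxies = {
    "http": "http://user:secret@localhost:3128",
    "https": "http://user:secret@localhost:3128",
}

# The URL decomposes into the parts curl uses to connect and authenticate:
parts = urlsplit(proxies["https"])
print(parts.username, parts.password, parts.hostname, parts.port)
# user secret localhost 3128
```

Pass such a dict as `proxies=` exactly as in the examples above.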
v0.9:
from curl_cffi import requests
s = requests.Session()
v0.10:
s = curl_cffi.Session()
# httpbin is an HTTP testing service; this endpoint makes the server set a cookie
s.get("https://httpbin.org/cookies/set/foo/bar")
print(s.cookies)
# <Cookies[<Cookie foo=bar for httpbin.org />]>
# retrieve cookies again to verify
r = s.get("https://httpbin.org/cookies")
print(r.json())
# {'cookies': {'foo': 'bar'}}
curl_cffi supports the same browser versions as supported by my fork of curl-impersonate:

The open source version of curl_cffi only includes versions whose fingerprints differ from previous versions. If a version, e.g. chrome135, was skipped, you can simply impersonate it with your own headers and the previous version. If you don't want to look up the headers yourself, consider buying commercial support from impersonate.pro; we maintain a comprehensive browser fingerprint database covering almost all browser versions on various platforms.

If you are trying to impersonate a target other than a browser, use ja3=... and akamai=... to specify your own customized fingerprints. See the docs on impersonation for details.
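As a rough sketch of what a ja3= string carries: a JA3 string is five comma-separated fields (TLS version, cipher suites, extensions, elliptic curves, EC point formats). The value below is illustrative, not a real browser fingerprint:

```python
# Illustrative JA3 string -- NOT a real browser fingerprint.
# Format: TLSVersion,Ciphers,Extensions,EllipticCurves,ECPointFormats
ja3 = "771,4865-4866-4867,0-23-65281,29-23-24,0"

version, ciphers, extensions, curves, point_formats = ja3.split(",")
print(version)                # TLS record version (771 = 0x0303, i.e. TLS 1.2)
print(ciphers.split("-"))     # offered cipher suite IDs
print(extensions.split("-"))  # extension IDs, in order
```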
Browser | Open Source | Pro version |
---|---|---|
Chrome | chrome99, chrome100, chrome101, chrome104, chrome107, chrome110, chrome116[1], chrome119 [1], chrome120 [1], chrome123 [3], chrome124 [3], chrome131 [4], chrome133a [5][6] | chrome132, chrome134, chrome135 |
Chrome Android | chrome99_android, chrome131_android [4] | chrome132_android, chrome133_android, chrome134_android, chrome135_android |
Chrome iOS | N/A | coming soon |
Safari | safari15_3 [2], safari15_5 [2], safari17_0 [1] | coming soon |
Safari iOS | safari17_2_ios [1], safari18_0 [4], safari18_0_ios [4] | coming soon |
Firefox | firefox133 [5], firefox135 [7] | coming soon |
Firefox Android | N/A | firefox135_android |
Edge | edge99, edge101 | edge133, edge135 |
Opera | N/A | coming soon |
Brave | N/A | coming soon |
Notes:

1. Added in 0.6.0.
2. Fixed in 0.6.0; previous http2 fingerprints were not correct.
3. Added in 0.7.0.
4. Added in 0.8.0.
5. Added in 0.9.0.
6. The -a suffix (e.g. chrome133a) means that this is an alternative version, i.e. the fingerprint has not been officially updated by the browser, but has been observed because of A/B testing.
7. Added in 0.10.0.

from curl_cffi import AsyncSession
async with AsyncSession() as s:
r = await s.get("https://example.com")
More concurrency:
import asyncio
from curl_cffi import AsyncSession
urls = [
"https://google.com/",
"https://facebook.com/",
"https://twitter.com/",
]
async with AsyncSession() as s:
tasks = []
for url in urls:
task = s.get(url)
tasks.append(task)
results = await asyncio.gather(*tasks)
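When fetching many URLs you may want to cap how many requests are in flight at once. A minimal sketch using asyncio.Semaphore; here a stub fake_get stands in for s.get, so the pattern runs without a network (in real code, call `await s.get(url)` inside an AsyncSession):

```python
import asyncio

async def fake_get(url: str) -> str:
    # Stub standing in for AsyncSession.get; replace with `await s.get(url)`.
    await asyncio.sleep(0)
    return f"fetched {url}"

async def bounded_gather(urls, limit=3):
    # At most `limit` coroutines pass the semaphore at any one time.
    sem = asyncio.Semaphore(limit)

    async def worker(url):
        async with sem:
            return await fake_get(url)

    return await asyncio.gather(*(worker(u) for u in urls))

urls = [f"https://example.com/{i}" for i in range(10)]
results = asyncio.run(bounded_gather(urls, limit=3))
print(len(results))  # 10
```

Results come back in the same order as the input URLs, since asyncio.gather preserves ordering.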
from curl_cffi import WebSocket
def on_message(ws: WebSocket, message: str | bytes):
print(message)
ws = WebSocket(on_message=on_message)
ws.run_forever("wss://api.gemini.com/v1/marketdata/BTCUSD")
For low-level APIs, Scrapy integration and other advanced topics, see the
docs for more details.
import asyncio
from curl_cffi import AsyncSession
async with AsyncSession() as s:
ws = await s.ws_connect("wss://echo.websocket.org")
await asyncio.gather(*[ws.send_str("Hello, World!") for _ in range(10)])
async for message in ws:
print(message)