User agents are the browser identifiers that ride along with every HTTP request. In scraping, rotating realistic user agents helps reduce soft-blocks and CAPTCHA while improving reliability across diverse targets.
In this guide, I'll walk you through an updated 2026 list of user agents for web scraping, show rotation patterns that actually work, and explain how a Scraping API like ScrapingBee automates the whole job. By the end, you’ll know the best user agents for web scraping, how to manage them manually, and when to let an API handle them for you.
Quick Answer
A user agent identifies the browser or device making a web request. That's why, when scraping, you should rotate user agents to avoid being singled out and blocked. But you don't always have to do this manually.
Services such as ScrapingBee automate this by assigning dynamic, realistic user agents paired with coherent headers on every request. This way, you don’t need to maintain a list of user agents for scraping yourself.
If you want a single takeaway: don’t stick with one signature. Rotate across the best user agent for scraping scenarios and keep them current.
What Is a User Agent and Why Does It Matter for Scraping
Let's start with the basics. A user-agent header is a string like Mozilla/5.0 (… ) that tells the server what client (browser, OS, device) is calling. It’s part of the HTTP request and often used for browser detection, analytics, and feature toggles.
Here’s a minimal user agent example:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36
That browser user agent string advertises a Windows 10, 64-bit Chrome client and includes the ubiquitous AppleWebKit/537.36 (KHTML, like Gecko) segment that many modern Chromium-based browsers present.
In scraping, you use user agents to look like normal traffic and to access data extraction paths that might otherwise be gated to specific browsers. For canonical definitions and behavior, see MDN’s reference on the User-Agent header and ScrapingBee’s extraction features in the documentation.
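For example, here's a minimal sketch of setting that header on an outgoing request with Python's requests library (the target URL is just a placeholder):

import requests

# The Chrome-on-Windows example string from above
UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36"
)

# Override the default python-requests identifier with a browser-like one
response = requests.get("https://example.org", headers={"User-Agent": UA}, timeout=30)
print(response.status_code)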
How Websites Use User Agents to Detect Bots
Modern bot defenses don’t rely on user agents alone, but an improper or outdated string can trigger friction.
Systems correlate the UA with other signals:
Header coherence: Accept-Language, Accept, Sec-CH-UA client hints, and compression must align with the claimed browser.
TLS and HTTP/2 fingerprints: cipher suites, ALPN, flow control.
JavaScript behavior: canvas, WebGL, timezone, fonts, and headless traits.
IP reputation and geo: residential vs. datacenter and velocity patterns.
Here's a real example: requests made with default identifiers like python-requests/2.x or curl/7.x get rate-limited or CAPTCHA-challenged quickly on big surfaces (e.g., search, marketplaces).
If you request Amazon or Google Search with curl/7.XX and a bare User-Agent, you’ll often see 429s, 503s, or interstitial HTML with a CAPTCHA form. Scraping engines address this by pairing browser user agents with browser-like headers and rotating IPs. Keep a user agent list, but also ensure the whole request looks authentic, or use an Amazon API.
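If you want to see what your client actually advertises, here's a quick sketch that echoes the request headers back; it assumes httpbin.org is reachable from your machine:

import requests

ECHO_URL = "https://httpbin.org/headers"  # echoes request headers back as JSON
BROWSER_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36"
)

# Default identifier: something like python-requests/2.x, an obvious automation signal
print(requests.get(ECHO_URL, timeout=30).json()["headers"]["User-Agent"])

# Browser-like identifier: what a real Chrome on Windows would advertise
print(requests.get(ECHO_URL, headers={"User-Agent": BROWSER_UA}, timeout=30).json()["headers"]["User-Agent"])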
Best User Agents for Web Scraping in 2026
If you want to stay on the safe side while scraping, you need to mirror what real users run today. Chrome remains the global leader, followed by Safari (especially on mobile), then Edge and Firefox. I recommend using a mix that reflects reality and your target audience (desktop vs. mobile).
StatCounter’s recent data shows Chrome’s dominance, which justifies biasing toward Chrome user agent list entries.
Below are current, reliable examples you can use as seeds. But don't follow this list blindly; instead, always refresh your user agent string list monthly (or auto-refresh via an API).
Chrome (Desktop)
Windows 10 / 11 (Win64; x64)
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36
macOS (Apple Silicon / Intel)
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36
(Note that Chrome on Apple Silicon still reports Macintosh; Intel Mac OS X, so the same string covers both architectures.)
Firefox (Desktop)
Windows 10 / 11
Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:131.0) Gecko/20100101 Firefox/131.0
macOS
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:131.0) Gecko/20100101 Firefox/131.0
Keep in mind that some pages respond differently to Gecko-based browsers. Still, these UAs are helpful for A/B behavior or when Chrome fingerprints get rate-limited.
Safari (Desktop, macOS)
- Safari on macOS
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.6 Safari/605.1.15
(Safari on Apple Silicon also reports Intel Mac OS X, so keep the platform segment as is and just bump Version/ for new macOS releases.)
Use these for sites that are optimized for Safari, media and publisher stacks, or when your analytics must reflect macOS traffic.
Edge (Desktop)
- Windows 10 / 11
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36 Edg/130.0.0.0
These are best for Microsoft properties and enterprise SaaS tuned to Edge signatures.
Mobile (Useful for layout changes or mobile-only content)
Android / Chrome Mobile
Mozilla/5.0 (Linux; Android 14; Pixel 7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Mobile Safari/537.36
iPhone / iOS Safari
Mozilla/5.0 (iPhone; CPU iPhone OS 17_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.6 Mobile/15E148 Safari/604.1
By the way, for Google SERP scraping API style work or AMPs, mobile UAs can show different HTML structures and module placements.
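If you want to check how big that difference is on one of your targets, here's a small sketch that fetches the same URL with a desktop and a mobile UA and compares payload sizes (the URL is a placeholder):

import requests

DESKTOP_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36"
)
MOBILE_UA = (
    "Mozilla/5.0 (Linux; Android 14; Pixel 7) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/130.0.0.0 Mobile Safari/537.36"
)

url = "https://example.org"  # swap in your target page

for label, ua in (("desktop", DESKTOP_UA), ("mobile", MOBILE_UA)):
    html = requests.get(url, headers={"User-Agent": ua}, timeout=30).text
    # A noticeable size difference usually means a different layout or module set
    print(label, len(html))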
Common Mistakes When Using User Agents
Even with the best user agent list, small inconsistencies can trip detection:
Outdated strings: A Chrome/88 in 2026 is a red flag.
Header mismatch: Modern Chrome uses Client Hints (Sec-CH-UA, etc.). If you claim Chrome but omit these or set impossible values, you stand out (see the sketch after this list).
Ignoring IP reputation: A pristine UA won’t save a noisy datacenter IP.
Single UA overuse: Hammering thousands of requests through one UA and one IP is a fast way to get that user agent banned.
Wrong platform mix: Claiming a macOS UA (Macintosh; Intel Mac OS X) while presenting Windows-only TLS/JA3 traits is suspicious.
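To make the header-mismatch point concrete, here's a minimal sketch of a Chrome-like header set whose client hints agree with the claimed Chrome 130 UA. The exact GREASE brand token ("Not?A_Brand" below) varies across releases, so treat the values as illustrative:

CHROME_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36"
)

# The client hints must tell the same story as the UA string:
# same major version (130), same platform (Windows), not mobile.
COHERENT_HEADERS = {
    "User-Agent": CHROME_UA,
    "Sec-CH-UA": '"Chromium";v="130", "Google Chrome";v="130", "Not?A_Brand";v="99"',
    "Sec-CH-UA-Mobile": "?0",
    "Sec-CH-UA-Platform": '"Windows"',
    "Accept-Language": "en-US,en;q=0.9",
}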
What to do if you get blocked or empty responses? Here are some troubleshooting tips:
Log full request/response: headers, HTTP status, interstitial HTML (look for “captcha”, “robot”, or JS challenges); see the logging sketch after this list.
Switch UA family: rotate from Chrome to Firefox or different user agents (mobile vs. desktop).
Pair headers: ensure Accept, Accept-Language, Accept-Encoding, and Client Hints match the claimed UA.
Rotate IPs and geos: combine UA rotation with proxy pools.
Render JavaScript or take a Screenshot API snapshot to verify what the server returns to real browsers.
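For the logging step, a small helper like this sketch is usually enough to tell a soft block from a real page (the marker list is an assumption you'd tune per target):

import requests

BLOCK_MARKERS = ("captcha", "robot", "unusual traffic", "access denied")

def fetch_and_log(url, headers=None):
    r = requests.get(url, headers=headers, timeout=30)
    body = r.text.lower()
    flagged = [m for m in BLOCK_MARKERS if m in body]
    # Status code alone isn't enough: 200 plus interstitial HTML is a common soft block
    print(f"{r.status_code} {url} bytes={len(r.content)} markers={flagged}")
    return r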
Let me tell you about a recent debugging case I had. I scraped a publisher with a custom user agent and got 200 OK plus blank HTML. To resolve it, I switched to a Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) UA and enabled headless rendering. This shows that a realistic user agent paired with a coherent browser context is key to success.
How to Rotate and Manage User Agents at Scale
You can roll your own rotation using Python (requests, httpx) and a user agent list you refresh regularly. Keep a small, active user agent pool (20–80 strings) with desktop and mobile coverage. Pair each UA with a header template and proxy.
Here's how rotation with requests looks:
import random
import time

import requests

USER_AGENTS = [
    # Chrome Win
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36",
    # Chrome macOS (Intel)
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36",
    # Firefox Win
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:131.0) Gecko/20100101 Firefox/131.0",
    # Mobile Chrome
    "Mozilla/5.0 (Linux; Android 14; Pixel 7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Mobile Safari/537.36",
]

HEADER_TEMPLATES = [
    # Chrome-like
    {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.9",
        "Accept-Encoding": "gzip, deflate, br",
        "Sec-Fetch-Site": "none",
        "Sec-Fetch-Mode": "navigate",
        "Sec-Fetch-User": "?1",
        "Sec-Fetch-Dest": "document",
        "Upgrade-Insecure-Requests": "1",
    },
    # Firefox-like (simplified)
    {
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.9",
        "Accept-Encoding": "gzip, deflate, br",
        "Upgrade-Insecure-Requests": "1",
    },
]

def rolling_pool(items):
    # Cycle through the pool endlessly, one item per request
    while True:
        for item in items:
            yield item

ua_cycle = rolling_pool(USER_AGENTS)
hdr_cycle = rolling_pool(HEADER_TEMPLATES)

def fetch(url, proxy=None, timeout=30):
    ua = next(ua_cycle)
    headers = dict(next(hdr_cycle))  # copy so the template isn't mutated
    headers["User-Agent"] = ua
    proxies = {"http": proxy, "https": proxy} if proxy else None
    return requests.get(url, headers=headers, proxies=proxies, timeout=timeout)

# Example
for _ in range(10):
    resp = fetch("https://example.org")
    print(resp.status_code)
    time.sleep(random.uniform(0.8, 3.2))
Rotation with httpx and fake-useragent:
import random
import time

import httpx
from fake_useragent import UserAgent

ua = UserAgent()  # beware: pin versions; refresh periodically

def get_headers():
    return {
        "User-Agent": ua.random,
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.9",
        "Accept-Encoding": "gzip, deflate, br",
        "Upgrade-Insecure-Requests": "1",
    }

# http2=True needs the optional dependency: pip install "httpx[http2]"
with httpx.Client(http2=True, timeout=30) as client:
    for _ in range(10):
        r = client.get("https://example.org", headers=get_headers())
        print(r.status_code)
        time.sleep(random.uniform(0.5, 2.0))
Maintaining user agent patterns, keeping the latest user agents, and synchronizing headers with Client Hints is challenging. This is where APIs and JavaScript Scraper capabilities are helpful, as they ensure a consistent, browser-accurate profile without the maintenance tax.
Automating User-Agent Rotation with ScrapingBee
ScrapingBee is a scraping API that runs real headless browsers, rotates proxies, and returns clean HTML or structured JSON. In practice, it assigns realistic, rotating user agents and keeps the headers and the fingerprint story coherent.
You get extras like JavaScript rendering, Google SERP helpers, and anti-bot tactics out of the box. Our docs show features such as data extraction rules, JavaScript scenarios for clicks/inputs, and specialty endpoints (Google, Amazon). This tool helps to avoid UA drift and reduces your operational load.
ScrapingBee’s documentation highlights interacting with pages via JavaScript scenario steps and extracting structured data directly, capabilities that require a real browser context, where headers and user agents are set consistently for you.
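As a sketch of what such a scenario call looks like (the selector and wait time are placeholders, and the instruction format is described in the docs, so double-check the current reference before relying on it):

import json
import requests

params = {
    "api_key": "YOUR_API_KEY",
    "url": "https://target.example/page",
    "render_js": "true",
    # A JavaScript scenario: click a button, then wait for content to load
    "js_scenario": json.dumps({
        "instructions": [
            {"click": "#load-more"},
            {"wait": 1000},
        ]
    }),
}
resp = requests.get("https://app.scrapingbee.com/api/v1/", params=params, timeout=60)
print(resp.status_code)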
If you care about AI Web Scraping workflows or marketplace targets (e.g., Amazon Scraping API), the managed route removes the tedium of UA and proxy upkeep.
Building a Real Scraper Without Manual User-Agent Handling
Now, let's build a real scraper that doesn't require manual user agent handling.
Here's the full code:
import requests

API_KEY = "YOUR_API_KEY"

params = {
    "api_key": API_KEY,
    "url": "https://target.example/page",
    "render_js": "true",  # headless browser when needed
    "extract_rules": '{"title":"h1","price":".price"}',  # optional JSON rules
}

resp = requests.get("https://app.scrapingbee.com/api/v1/", params=params, timeout=60)
data = resp.json() if resp.headers.get("content-type", "").startswith("application/json") else resp.text
print(data)
Here, you don’t set a user agent header at all; the service supplies a realistic user agent and coherent headers, rotates IPs, and can render JS.
You can also pair it with No code scraping with Make if you prefer a visual workflow.
When to Use APIs Like ScrapingBee Instead of Manual User-Agent Lists
If you're running a small scraping operation, manual rotation of the most popular user agents is easy to manage. However, I recommend moving to a managed API when these points become relevant:
Scale: You’re above a few thousand requests/day or hitting many domains.
Dynamic pages: Client-side rendering, login flows, infinite scroll.
Quality of service: SLAs, dashboards, retries, and analytics matter.
Compliance: You want a vendor that supports robots.txt respect and rate management.
Team time: Your engineers should focus on Data Extraction, not the best user agent for scraping debates.
Here's a quick tip: real-world workflows often integrate ScrapingBee's No code web scraper – n8n to orchestrate scraping. Give it a try.
Ready to Extract Data More Efficiently?
If your backlog is full of “update the user agent list again” chores, it’s time to switch mental models. With ScrapingBee, user agents for web scraping, proxies, headless browsing, and anti-bot tweaks are the platform's responsibility.
You focus on business logic, whether that’s ChatGPT Scraping API pipelines, Walmart Scraping API product checks, or standard ETL. The result is fewer issues, more predictable throughput, and cleaner data.
Other Mistakes When Using User Agents
Before you run off back to your scraping projects, I'd like to finish up this article with common mistakes. We all make them, so it's best to learn about them before common issues, such as outdated user agents, ruin your progress.
Here's a quick checklist:
Stale versions: keep the latest user agents; pin major versions within the current cycle (a quick staleness check is sketched after this checklist).
Unrealistic combos: don’t claim a macOS UA (Macintosh; Intel Mac OS X) and then send Android headers.
Neglecting Client Hints: modern Chromium advertises Sec-CH-UA families; suppressing them entirely can look odd.
Forgetting mobile: some sites only expose APIs or layouts to mobile UAs.
No backoff: retries without jitter look robotic.
Assuming UA = success: remember IPs, TLS, JS, and cookies.
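As a companion to the stale-versions item, here's a tiny sketch that flags Chrome UAs whose major version has fallen too far behind a threshold you maintain yourself (the numbers are assumptions you'd update with each monthly refresh):

import re

CURRENT_CHROME_MAJOR = 130   # update alongside your UA refresh
MAX_VERSIONS_BEHIND = 2

def is_stale_chrome_ua(ua: str) -> bool:
    match = re.search(r"Chrome/(\d+)\.", ua)
    if not match:
        return False  # not a Chrome UA; check it with a different rule
    return CURRENT_CHROME_MAJOR - int(match.group(1)) > MAX_VERSIONS_BEHIND

print(is_stale_chrome_ua("Mozilla/5.0 ... Chrome/88.0.4324.150 Safari/537.36"))  # True
print(is_stale_chrome_ua("Mozilla/5.0 ... Chrome/130.0.0.0 Safari/537.36"))      # False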
If you see blocked or empty responses with your user agent examples, grab a Screenshot via an API to verify the UI and check response differences. That's it! Now you know everything to kick-start your scraping project with the right user agents.
User Agent FAQs
What is a user agent in web scraping?
It’s the HTTP header that identifies your client software (browser/device). Servers use it for feature targeting and analytics; scrapers use it to look like typical traffic.
Why do websites block scrapers with default user agents?
Defaults like python-requests/2.x or curl/7.x are obvious automation signals. Pairing a realistic user agent with proper headers and IP hygiene helps you pass basic screens.
What are the most effective user agents for scraping in 2026?
Chrome-family strings reflecting current releases are the safest baseline, followed by Safari (mobile/desktop), Edge, and Firefox. Choose based on your target audience; keep the set fresh. See popular user agents from WhatIsMyBrowser and market share from StatCounter.
How often should I update my user-agent list?
Every 30–45 days or whenever major versions roll out. Pull from sources that track the most popular user agents and “latest” directories.
How can I rotate user agents automatically in Python?
Use a pool and cycle through it per request, pairing user agent header values with coherent header templates. Libraries like fake-useragent help, but maintain them carefully and pin versions.
How does ScrapingBee handle user-agent rotation automatically?
You don’t set a UA at all. ScrapingBee runs real headless browsers and returns the content, handling headers, JavaScript rendering, and proxies on its side. Their docs show JavaScript scenarios and data extraction features that require a consistent browser context.
What’s the difference between a user agent and a browser fingerprint?
A user agent is just a header string. A fingerprint is the composite of headers, TLS, canvas, fonts, timezone, and more. Matching UA + fingerprint is crucial for reliability.
Can I scrape sites safely using mobile user agents?
Yes, mobile UAs often produce simpler DOMs or different endpoints. Rotate responsibly and respect site policies. Mobile strings like Android Chrome or iOS Safari should be part of your pool.
Are there legal or ethical considerations when using fake user agents?
Yes. Always review terms of service, robots.txt, and local laws. Use ethical scraping patterns: rate limits, caching, and data minimization. A Scraping API can help enforce rate and compliance constraints programmatically.

Kevin worked in the web scraping industry for 10 years before co-founding ScrapingBee. He is also the author of the Java Web Scraping Handbook.
