Setting up a wget proxy is pretty simple once you know the basics. In this guide, we'll walk through how to make wget use a proxy server, so you can grab files or send requests even when you're behind a corporate firewall or just want extra privacy.
Nothing fancy — just clear steps and examples you can actually use.

Quick answer
You can make wget use a proxy server using command flags (-e use_proxy=yes), config files like .wgetrc, or environment variables. For production-grade rotation and geolocation, the easiest path is ScrapingBee's API or proxy mode — one clean command, zero maintenance.
Here's a ready-to-copy example of wget using a proxy with ScrapingBee:
wget -e use_proxy=yes \
-e http_proxy=http://proxy.scrapingbee.com:8886 \
--proxy-user="YOUR_API_KEY" \
--proxy-password="premium_proxy=True" \
https://example.com
Quick options
- Basic proxy via flags: -e use_proxy=yes -e http_proxy=http://HOST:PORT
- Authenticated proxy: --proxy-user=USER --proxy-password=PASS
- Config file: ~/.wgetrc or /etc/wgetrc
- Environment vars: http_proxy, https_proxy
- ScrapingBee: API or proxy mode for rotation, JS rendering, and geolocation
What is wget?
wget is a small command-line tool for downloading stuff from the internet — files, web pages, whole directories, whatever. It works without a browser, doesn't need user interaction, and runs fine in scripts or cron jobs. When you set up a wget proxy, you're basically telling it how to reach the internet when there's a firewall, corporate gateway, or privacy requirement in the way.
It supports HTTP, HTTPS, and FTP, and it's built to retry, resume, and keep going even on flaky connections. Simple idea, solid tool — that's why it's still around.
What wget supports (and what it doesn't)
wget is pretty chill with standard proxy setups, but it has limits you should know before you start wiring things up. It supports the usual HTTP, HTTPS, and FTP proxy environment variables. So, whether you need an HTTPS proxy setup or just a basic HTTP proxy workflow, it'll work out of the box.
What it doesn't support is SOCKS5. At all. If you need SOCKS, don't torture yourself — just use cURL, it handles SOCKS like a champ.
Learn how to download files with curl in our tutorial.
Also, there's the no_proxy variable: it tells wget which domains should skip the proxy completely. Handy when some internal URLs break if they go through the gateway. Just list them there and wget won't route those through your proxy.
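For example, to send internal hosts around the proxy (the hostnames here are just placeholders):
export no_proxy="intranet.example.com,.corp.local"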
Short version: HTTP/HTTPS/FTP — yes. SOCKS5 — nope.
Prerequisites and installation
Before you mess with any proxy settings, make sure wget is actually installed on your system. Most Linux distros ship with it, but not all — and macOS/Windows usually need a manual install. Nothing complicated.
Linux (Debian/Ubuntu)
sudo apt install wget
Linux (RHEL/CentOS/Fedora)
sudo dnf install wget
(Use yum instead of dnf on older systems.)
macOS (Homebrew)
brew install wget
Windows
Grab a prebuilt binary from the GNU project or install it through something like Chocolatey:
choco install wget
Once it's in place, you're good to go. Check if wget is installed using the following command:
wget -V
Setting up a wget proxy or any proxy-related config only works if the tool is installed and visible in your PATH.
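If you're not sure whether it's visible, a quick check on Linux/macOS:
command -v wget >/dev/null && echo "wget found" || echo "wget is not on your PATH"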
wget commands
Here are the basic wget moves you'll actually use. Just the core commands so you don't have to Google them every time.
Download a single file
The golden classic:
wget https://example.com/file.zip
wget saves the file into the current directory with the same name as on the server (file.zip in the example above).
Download a file to a specific directory
If you want the file somewhere else, just cd there first or use -P to point wget to a folder:
wget -P /path/to/dir https://example.com/file.zip
Learn how to automate file downloads with Python and wget in our tutorial.
Rename a downloaded file
If the remote filename is ugly or you want something cleaner:
wget -O newname.zip https://example.com/file.zip
The -O parameter always overwrites, so pick the name wisely.
Define yourself as User-Agent
Some servers get picky about what client you use. If you need to pretend to be a browser (or anything else), set a custom User-Agent:
wget --user-agent="Mozilla/5.0" https://example.com/page.html
Useful when default wget gets blocked (but don't expect miracles).
Limit speed
If you don't want to hog bandwidth, cap the download rate:
wget --limit-rate=500k https://example.com/big.iso
Use k or m for kilobytes/megabytes per second.
Extract as Googlebot
Sometimes you want to see what Googlebot sees. Easy:
wget --user-agent="Googlebot/2.1 (+http://www.google.com/bot.html)" https://example.com
Just don't abuse it — some servers take this seriously.
Convert links on a page
When you're pulling down a page for offline viewing, you don't want half the links still pointing to the live website. That's where --convert-links comes in.
What it does is pretty simple: after the file is downloaded, wget goes through the HTML and rewrites any links that it also downloaded so they point to the local copies instead of the original URLs. Basically, it patches the page so you can browse it offline without every click shooting you back to the internet.
wget --convert-links https://example.com/page.html
Handy when you're saving docs, manuals, or small sites where everything lives in static files. No broken links, no "why is this still trying to load from the web?", just a neat little local snapshot.
Mirroring single webpages
Sometimes you don't need a whole site — just one page fully intact, with all the stuff it depends on. That's what this combo is for:
wget --page-requisites --html-extension --convert-links https://example.com/page
- --page-requisites grabs all the assets the page calls (images, CSS, JS).
- --html-extension makes sure the saved file ends with .html so browsers open it normally.
- --convert-links rewrites links so the page uses your local copies instead of the live website.
This way you'll get an offline clone of that one page that actually looks and works like the online one.
Extract multiple URLs
If you've got a list of URLs in a file:
wget -i urls.txt
wget will chew through the list line by line. Ideal for batch grabs.
Example urls.txt:
https://example.com/file1.pdf
https://example.com/images/pic.png
https://example.com/archive.zip
How to configure a proxy with wget
By default, wget connects straight to the target server. No detours, no filters. That's fine until you hit something annoying like blocked pages, rate limits, geo restrictions, or when you just want your real IP to stay hidden. Using a proxy fixes all that.
With ScrapingBee, you get two ways to do it: the super-simple API call, or the more traditional proxy mode. And if you're not using ScrapingBee, the same four proxy methods still apply to any normal proxy.
Below is the clean setup.
Step 1 – ScrapingBee setup
ScrapingBee can be used in two flavors:
- API mode — the easiest. You don't configure any proxy at all; you just call their API and they handle everything.
- Proxy mode — looks like a normal HTTP proxy, perfect if you already know your way around wget.
Both ways work fine; use whatever matches your workflow.
Before proceeding, make sure to grab your free trial at app.scrapingbee.com/account/register with 1,000 credits as a gift (plenty for testing!).
Option A: API mode (simplest, no proxy config)
Just call the API directly with your target URL:
wget "https://app.scrapingbee.com/api/v1/?api_key=YOUR_API_KEY&url=https://example.com"
The ScrapingBee API key can be found in your dashboard.
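One gotcha worth noting: if the target URL has its own query string, URL-encode it before passing it to the API, otherwise its & characters get read as extra API parameters. A minimal sketch (the target URL is just an example):
TARGET=$(python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "https://example.com/search?q=wget&page=2")
wget -qO result.html "https://app.scrapingbee.com/api/v1/?api_key=YOUR_API_KEY&url=$TARGET"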
Option B: Proxy mode (recommended for wget power users)
ScrapingBee exposes regular HTTP/HTTPS proxy endpoints:
- HTTP: http://proxy.scrapingbee.com:8886
- HTTPS: https://proxy.scrapingbee.com:8887
- SOCKS5: socks5://socks.scrapingbee.com:8888, if you ever need it in other projects (wget does not support it)
Authentication and parameters: ScrapingBee uses the proxy URL itself to pass both your API key and any extra settings.
- Username — your API key
- Password — your parameters (everything after the colon)
- Parameters are written as a query string: render_js=False&premium_proxy=True&country_code=us
Example full proxy URL:
https://YOUR_API_KEY:render_js=False&premium_proxy=True@proxy.scrapingbee.com:8887
Step 2 – One-off test via command flags
This is the fastest way to confirm the proxy works without config files or any additional setup. Just run the command and see it fetch through the proxy.
wget -e use_proxy=yes \
-e http_proxy=http://proxy.scrapingbee.com:8886 \
--proxy-user="YOUR_API_KEY" \
--proxy-password="premium_proxy=True" \
https://example.com
To verify the proxy is actually active, print your exit IP:
wget -qO- -e use_proxy=yes \
-e http_proxy=http://proxy.scrapingbee.com:8886 \
--proxy-user="YOUR_API_KEY" \
--proxy-password="premium_proxy=True" \
https://api.ipify.org
If the IP is different from your real one — good, you're tunneling through ScrapingBee properly.
Step 3 – Persistent user config (~/.wgetrc)
If you don't want to type flags every time, drop your proxy settings into ~/.wgetrc. After that, every wget command automatically goes through ScrapingBee.
Example:
use_proxy = on
http_proxy = http://proxy.scrapingbee.com:8886
https_proxy = https://proxy.scrapingbee.com:8887
proxy_user = YOUR_API_KEY
proxy_password = premium_proxy=True
no_proxy = localhost,127.0.0.1,.internal.local
Heads-up: ~/.wgetrc is plain text, so anyone with access to your machine can see your API key. Use with care.
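On Linux/macOS, at least lock the file down so only your user can read it:
chmod 600 ~/.wgetrc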
Step 4 – Use and adjust features
Once everything's configured, you run wget like normal. ScrapingBee handles proxy rotation, JS rendering, premium IP pools, all behind the scenes.
Basic download using your saved proxy:
wget https://example.com
To switch features, modify the password string in your config or command:
- render_js=True — turn on JS rendering
- premium_proxy=True — use higher-quality IPs
- country_code=US — geo-target the request (premium required)
Combine them like this:
render_js=True&premium_proxy=True&country_code=US
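For instance, a one-off command with all three enabled through the HTTP proxy endpoint from Step 1:
wget -e use_proxy=yes \
-e http_proxy=http://proxy.scrapingbee.com:8886 \
--proxy-user="YOUR_API_KEY" \
--proxy-password="render_js=True&premium_proxy=True&country_code=US" \
https://example.com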
That's the whole flow: quick setup, full control, and no weird hacks.
Alternative: 4 generic methods (any proxy provider)
These setups work with any normal proxy: residential, datacenter, ISP, corporate gateway, whatever. Just swap in your own HOST, PORT, USERNAME, and PASSWORD. No provider-specific magic here.
Method 1 – Command flags (one-off proxy)
Quick, disposable, perfect for testing:
wget -e use_proxy=yes -e http_proxy=http://HOST:PORT https://example.com
With authentication:
wget -e use_proxy=yes \
-e http_proxy=http://HOST:PORT \
--proxy-user=USERNAME \
--proxy-password=PASSWORD \
https://example.com
Verify the proxy is active:
wget -qO- \
-e use_proxy=yes \
-e http_proxy=http://HOST:PORT \
--proxy-user="USERNAME" \
--proxy-password="PASSWORD" \
https://api.ipify.org
Method 2 – User config (~/.wgetrc)
This is your personal wget config — settings here apply only to your user account.
Perfect when you want persistent proxy settings without touching system-wide files.
Where to put it:
- Linux / macOS — Create a file named .wgetrc in your home directory: ~/.wgetrc.
- Windows — Try either C:\Users\<USERNAME>\.wgetrc or C:\Users\<USERNAME>\_wgetrc (might depend on your wget build).
Here's the example config:
use_proxy = on
http_proxy = http://HOST:PORT
https_proxy = https://HOST:PORT
proxy_user = USERNAME
proxy_password = PASSWORD
no_proxy = example.com,.internal.local
After saving this file, every wget command automatically uses your proxy settings.
Heads-up: This file is plain text, so anyone with access to your account can read your credentials. Handle with care.
Method 3 – System config (/etc/wgetrc)
A machine-wide config, applied to all users:
use_proxy = on
http_proxy = http://HOST:PORT
https_proxy = https://HOST:PORT
Requires root access. Don't put passwords here unless you know what you're doing.
Note that this approach applies only to *nix systems.
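If you'd rather not open an editor, appending the defaults from the shell works too; a minimal sketch (requires root):
printf 'use_proxy = on\nhttp_proxy = http://HOST:PORT\nhttps_proxy = https://HOST:PORT\n' | sudo tee -a /etc/wgetrc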
Method 4 – Environment variables
Useful for scripts, containers, and temporary sessions.
Linux/macOS:
export http_proxy=http://HOST:PORT
export https_proxy=https://HOST:PORT
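You can also prefix a single command instead of exporting, which keeps the variables out of the rest of your shell session:
http_proxy=http://HOST:PORT https_proxy=https://HOST:PORT wget https://example.com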
Windows (CMD):
set http_proxy=http://HOST:PORT
set https_proxy=https://HOST:PORT
Disable on Linux/macOS:
unset http_proxy
unset https_proxy
Disable on Windows:
set http_proxy=
set https_proxy=
Common wget proxy errors
Proxy not used
- Make sure use_proxy = on is set, or that your env vars are in the current shell.
- Check for typos in http_proxy / https_proxy.
407 Proxy Authentication Required
- Wrong username or password.
- Some proxies expect the http://USER:PASS@HOST:PORT format.
400 Bad Request
- HOST or PORT is incorrect.
- You may have leftover env vars — unset http_proxy and try again.
SSL errors
- Use an HTTPS proxy when fetching HTTPS URLs.
- Update your CA bundle (the ca-certificates package).
- Avoid --no-check-certificate unless you're just testing; it's unsafe for real use.
Authenticated proxies
When you're running wget through proxy servers that require a username and password, you've got two clean ways to pass your credentials: short-term command flags or a persistent entry in ~/.wgetrc. Both work fine, and both keep the setup hassle-free — you just need to be careful with special characters in passwords.
Using command flags (quick and temporary)
This is the safest option for local use because the password isn't stored on disk:
wget -e use_proxy=yes \
-e http_proxy=http://HOST:PORT \
--proxy-user="USERNAME" \
--proxy-password="PASSWORD" \
https://example.com
If your password contains characters like @, :, &, $, or spaces, wrap it in quotes. In Bash, single quotes are the safest choice because nothing inside them gets expanded; with double quotes you'd still need to escape $, `, and \. On Windows CMD, the quoting rules differ, and double quotes are usually enough.
Example with single quotes:
--proxy-password='p@ssw0rd&more'
To confirm the proxy is really being used:
wget -qO- \
-e use_proxy=yes \
-e http_proxy=http://HOST:PORT \
--proxy-user="USERNAME" \
--proxy-password="PASSWORD" \
https://api.ipify.org
If you see the proxy's exit IP, you're good.
Using ~/.wgetrc (persistent config)
If you run wget constantly and don't want to repeat flags, drop everything into your user config:
use_proxy = on
http_proxy = http://HOST:PORT
https_proxy = https://HOST:PORT
proxy_user = USERNAME
proxy_password = PASSWORD
This makes every request go through the authenticated proxy unless you override it.
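To bypass it for a single request, wget's --no-proxy flag ignores any configured proxy:
wget --no-proxy https://internal.example.com/status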
Once again, don't forget that on Windows wget config typically lives in C:\Users\<username>\.wgetrc.
Handling credentials safely (CI, automation, shared systems)
If you're scripting or running in CI/CD:
- Inject creds via environment variables. Example:
export PROXY_PASS="PASSWORD"
wget --proxy-password="$PROXY_PASS" …
- Use a password manager to generate a temp token and store it only in memory.
- Avoid committing ~/.wgetrc into containers or images; build it at runtime instead (see the sketch below).
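A minimal runtime-built config, assuming your CI injects PROXY_HOST, PROXY_PORT, PROXY_USER, and PROXY_PASS as secrets:
# Build ~/.wgetrc at job start from injected env vars, then lock it down
cat > ~/.wgetrc <<EOF
use_proxy = on
http_proxy = http://${PROXY_HOST}:${PROXY_PORT}
https_proxy = http://${PROXY_HOST}:${PROXY_PORT}
proxy_user = ${PROXY_USER}
proxy_password = ${PROXY_PASS}
EOF
chmod 600 ~/.wgetrc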
Short version:
- Flags — safest for ad-hoc use
- ~/.wgetrc — convenient for day-to-day work
- CI — environment injection
Authenticated proxies work cleanly with wget if you quote passwords properly and avoid leaving secrets lying around.
Rate limiting, retries, and no-proxy rules
When you're running wget via proxy, it's smart to control how hard you hit the target site and decide which hosts should skip the proxy entirely. wget has a few simple flags for this, and they make a big difference in reliability.
Limit download speed
If you don't want to blast the proxy or the website, cap your speed:
wget --limit-rate=200k https://example.com/file.zip
--limit-rate slows wget down by capping the maximum download speed. It prevents you from hammering the proxy or the target site and keeps your traffic smooth and predictable.
You can use k or m units (e.g., 200k, 2m).
Add retries and backoff
When a proxy or target server flakes out, these help avoid pointless failures:
wget --tries=10 --waitretry=10 https://example.com/data.json
- --tries=10 — maximum number of attempts for this download.
- --waitretry=10 — wait between failed attempts, ramping the delay up by one second per retry until it reaches the 10-second cap.
These options only kick in when something goes wrong: connection timeouts, DNS hiccups, dropped connections, or certain HTTP errors from the server (like 5xx / rate limiting). If the file downloads fine on the first try, wget doesn't wait or retry at all.
If the problem is permanent (wrong URL, hard block, real 404), retries will just run until they hit the limit and then give up.
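If you also want to guard against hangs and refused connections, --timeout and --retry-connrefused pair well with the flags above:
wget --tries=5 --waitretry=5 --timeout=30 --retry-connrefused https://example.com/data.json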
Using the no_proxy rule
Sometimes you want wget to skip the proxy entirely for certain hosts: internal services, local dev machines, VPN-only domains, etc. The no_proxy variable handles that:
export no_proxy="localhost,127.0.0.1,.internal.local"
Wildcard-style domains (like .internal.local) tell wget to skip the proxy for anything ending with that suffix.
Windows version (CMD):
set no_proxy=localhost,127.0.0.1,.internal.local
Example combined with a proxy:
export http_proxy=http://HOST:PORT
export no_proxy="localhost,127.0.0.1,.corp"
Now internal hosts go direct, everything else goes through the proxy.
Short and practical: limit speed, add retries, use no_proxy to avoid breaking internal URLs — that's the whole playbook.
Rotating proxies with wget
If you want to run wget through proxy servers that rotate on every request, the simplest DIY method is a plain text list plus a small shell loop. It's not fancy, but it works, especially for quick experiments. For anything serious, though, maintaining your own rotation becomes a grind fast.
Basic shell rotation (proxies.txt + shuf)
Create a file with one proxy per line:
http://USER:PASS@PROXY1:PORT
http://USER:PASS@PROXY2:PORT
http://USER:PASS@PROXY3:PORT
Then pick a random one before each request:
PROXY=$(shuf -n 1 proxies.txt)
wget -e use_proxy=yes \
-e http_proxy="$PROXY" \
https://example.com
Windows (CMD) version:
for /f "delims=" %%p in ('powershell -NoProfile -Command "(Get-Content proxies.txt) | Get-Random"') do (
    wget -e use_proxy=yes -e http_proxy=%%p https://example.com
)
Note: %%p works inside a .bat file; at the interactive prompt, use %p instead.
Loop it if you have multiple URLs:
while read -r url; do
PROXY=$(shuf -n 1 proxies.txt)
wget -e use_proxy=yes -e http_proxy="$PROXY" "$url"
done < urls.txt
Windows (CMD) version:
for /f "delims=" %%u in (urls.txt) do (
    for /f "delims=" %%p in ('powershell -NoProfile -Command "(Get-Content proxies.txt) | Get-Random"') do (
        wget -e use_proxy=yes -e http_proxy=%%p "%%u"
    )
)
This gives you basic wget proxy rotation without extra tools.
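Before a big run, you can prune dead entries with a quick liveness filter. A minimal sketch, assuming a 5-second timeout is acceptable for your proxies:
# Keep only proxies that can fetch a test URL; survivors land in live.txt
while read -r p; do
wget -q --timeout=5 --tries=1 -e use_proxy=yes -e http_proxy="$p" -O /dev/null https://example.com && echo "$p"
done < proxies.txt > live.txt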
Downsides of DIY rotation
A rotating free or scraped proxy list has the usual issues:
- Most free proxies die constantly — Half your list goes offline within hours.
- High ban rate — Public proxies are usually overused and already blocked by many sites.
- Credential headache — If each proxy has different creds, keeping them updated becomes a mess.
- Maintenance burden — You end up babysitting the list instead of actually doing your scraping or mirroring.
When to avoid self-managed rotation
If you're doing any real scraping, backups, or site mirroring, rolling your own rotation gets painful. Managed services handle:
- automatic IP rotation
- residential or premium pools
- geolocation targeting
- ban handling and retries
- consistent uptime
DIY works for testing. For production, use proper managed rotation so you can focus on the actual task, not fixing dead proxies every hour.
Choosing the right approach
There's no single "best" wget proxy setup; it depends on how you work. Here's the quick decision guide so you don't overthink it and don't end up duct-taping random configs together.
One-off test
Use command flags. Fast, clean, nothing saved on disk.
wget -e use_proxy=yes -e http_proxy=http://HOST:PORT https://example.com
Perfect when you just want to see wget use a proxy without committing to anything.
Regular local use or CI pipelines
Use ~/.wgetrc. Your user or job gets consistent proxy behavior, no repeated flags, and scripts stay clean.
Great for: dev machines, cron jobs, CI runners where you want predictable behavior per user.
Fleet, containers, or shared servers
Use /etc/wgetrc (unless you're on Windows, duh). Machine-wide defaults keep everything aligned. Every container, user, or automated job uses the same proxy rules.
Useful when you have multiple services calling wget and want the same routing everywhere. Containers can also use ENV-based proxy settings (often easier than baking configs into images).
Need rotation, geolocation, or JS rendering
Use ScrapingBee API or proxy mode. That's the simplest path — no juggling dead proxies, no maintaining lists, no weird rotation logic.
- API mode — easiest (no proxy config at all).
- Proxy mode — behaves like a normal proxy but gives rotation, geo, and JS behind the scenes.
Short version
- Flags for quick checks
- ~/.wgetrc for personal or CI use
- /etc/wgetrc for shared systems
- ScrapingBee when you want real proxy features without messing with DIY infrastructure.
Ready to download smarter with wget?
If you want your wget proxy setup to just work (no dead IPs, no constant list updates, no weird failures) ScrapingBee gives you rotation, premium IP pools, and optional geolocation without any of the maintenance pain. You can run wget through proxy mode like a normal HTTP proxy, or skip proxy config entirely and hit a single API URL.
Either way, you get stable, managed traffic instead of juggling free lists that break every hour. If you're ready to make downloads cleaner and scraping more reliable, you can get started today.
Conclusion
wget stays useful because it's simple, predictable, and script-friendly. Adding a proxy just makes it even more flexible. Whether you're testing with quick flags, wiring permanent settings into your config files, or leaning on a managed service for rotation and geo-targeting, you've got multiple clean ways to shape how your requests hit the web.
Use the lightweight methods when you only need a proxy occasionally, and switch to a managed solution when you want stability without babysitting IP lists. Once you pick the approach that fits your workflow, wget becomes a much more capable tool for scraping, backups, automation, and anything else you throw at it.
wget proxy FAQs
How do I use a proxy with wget for a single command?
Use the -e use_proxy=yes flag and point http_proxy to your server. This is the fastest way to run wget with a proxy once without changing your system config.
Example:
wget -e use_proxy=yes -e http_proxy=http://HOST:PORT https://example.com
How can I set an authenticated proxy for wget without exposing credentials in my shell history?
Use environment variables or a temporary script instead of typing the password directly. For example, export PROXY_PASS and pass it with --proxy-password="$PROXY_PASS". This keeps your wget proxy credentials out of history and logs.
Does wget support SOCKS5 proxies?
No. wget has no native SOCKS5 support. It only handles HTTP, HTTPS, and FTP proxies. For SOCKS5, you'll need tools like cURL or a local SOCKS-to-HTTP bridge. For wget https proxy setups, stick to standard HTTP/HTTPS endpoints.
What's the difference between setting http_proxy env vars and .wgetrc?
Environment variables apply only to the current shell or script, while .wgetrc gives you a persistent configuration for every call. On Windows the user config lives in C:\Users\<user>\.wgetrc instead of ~/.wgetrc.
Use env vars for temporary changes and .wgetrc for long-term setups where you always want your wget proxy rules applied.
How do I bypass the proxy for specific domains with wget (no_proxy)?
Set the no_proxy variable with a comma-separated list like localhost,.internal.local. wget will connect directly to those domains even when a global proxy is configured. This is the clean way to do wget no proxy routing for internal services.
What's the simplest way to get rotating/geolocated IPs with wget?
Use a managed provider like ScrapingBee in API or proxy mode. This gives you rotation, geo-targeting, and premium IP pools without maintaining lists. It's the easiest reliable method to run wget through proxy setups at scale.


