How do I display all URLs in a redirect chain?

I’m looking for a way to show all of the URLs in a redirect chain, preferably from the shell. I’ve found a way to almost do it with curl, but it only shows the first and last URL. I’d like to see all of them.

There must be a way to do this simply, but I can’t for the life of me find what it is.

Edit: Since submitting this I’ve found out how to do it with Chrome (CTRL+SHIFT+I->Network tab). But, I’d still like to know how it can be done from the Linux command line.

Asked By: felwithe


How about simply using wget?

$ wget http://picasaweb.google.com 2>&1 | grep Location:
Location: /home [following]
Location: https://www.google.com/accounts/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%253A%252F%252Fpicasaweb.google.com%252Fhome&service=lh2&ltmpl=gp&passive=true [following]
Location: https://accounts.google.com/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%3A%2F%2Fpicasaweb.google.com%2Fhome&service=lh2&ltmpl=gp&passive=true [following]

curl -v also shows some information, but its output looks less useful than wget's.

$ curl -v -L http://picasaweb.google.com 2>&1 | egrep "^> (Host:|GET)"
> GET / HTTP/1.1
> Host: picasaweb.google.com
> GET /home HTTP/1.1
> Host: picasaweb.google.com
> GET /accounts/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%253A%252F%252Fpicasaweb.google.com%252Fhome&service=lh2&ltmpl=gp&passive=true HTTP/1.1
> Host: www.google.com
> GET /ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%253A%252F%252Fpicasaweb.google.com%252Fhome&service=lh2&ltmpl=gp&passive=true HTTP/1.1
> Host: accounts.google.com
Answered By: yaegashi

curl -v can show all the URLs in an HTTP redirect chain:

$ curl -v -L https://go.usa.gov/W3H 2>&1 | grep -i "^< location:"
< location: http://hurricanes.gov/nhc_storms.shtml
< Location: https://www.hurricanes.gov/nhc_storms.shtml
< location: https://www.nhc.noaa.gov:443/nhc_storms.shtml
< location: http://www.nhc.noaa.gov/cyclones
< Location: https://www.nhc.noaa.gov/cyclones
< location: http://www.nhc.noaa.gov/cyclones/
< Location: https://www.nhc.noaa.gov/cyclones/
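
Note that the Location header's capitalization varies across hops above. If you want bare URLs, a small awk stage can normalize the grep output. A local sketch with sample input (the example.com/example.org lines stand in for real curl output):

```shell
# Lines look like '< Location: https://example.com/'.
# $1 is '<', $2 the header name, $3 the URL; tolower() handles the
# mixed 'Location:' / 'location:' capitalization.
printf '< Location: https://example.com/\n< location: https://example.org/\n' \
  | awk 'tolower($2) == "location:" { print $3 }'
```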
Answered By: Kyle Rogers

To show all of the URLs in a redirect chain, including the first:

wget -S --spider https://rb.gy/x7cg8r 2>&1 \
 | grep -oP '^--[[:digit:]: -]{19}--  \K.*'

Result (tested on Fedora Linux):

https://rb.gy/x7cg8r
https://t.co/BAvVoPyqNr
https://unix.stackexchange.com/

wget options used:

-S
--server-response

    Print the headers sent by HTTP servers and responses sent by FTP servers.

--spider

    When invoked with this option, Wget will behave as a Web spider, which
    means that it will not download the pages, just check that they are there
    ...

Source: https://www.mankier.com/1/wget

The combination of -S and --spider causes wget to issue HEAD requests instead of GET requests.

GNU grep options used:

-o
--only-matching

    Print only the matched (non-empty) parts of a matching line, with each such
    part on a separate output line.

-P
--perl-regexp

    Interpret PATTERNS as Perl-compatible regular expressions (PCREs).

Source: https://www.mankier.com/1/grep

The lines we are interested in look like this:

--2021-12-07 12:29:25--  https://rb.gy/x7cg8r

You can see that the timestamp consists of 19 characters: digits, hyphens, colons, and spaces. It is therefore matched by [[:digit:]: -]{19}, using a fixed quantifier of 19.

The \K resets the start of the matched portion, so the timestamp prefix is excluded from what grep prints.
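
The pattern can be checked offline against a sample wget status line (no network needed):

```shell
# \K drops everything matched so far from the reported match,
# leaving only the URL that follows the 19-character timestamp.
printf -- '--2021-12-07 12:29:25--  https://example.com/\n' \
  | grep -oP '^--[[:digit:]: -]{19}--  \K.*'
```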

Swap grep with sed

The grep pipeline stage may be replaced with sed, if you prefer:

wget -S --spider https://rb.gy/x7cg8r 2>&1 \
 | sed -En 's/^--[[:digit:]: -]{19}--  (.*)/\1/p'
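
The sed variant can likewise be checked offline against a sample status line: the capture group keeps only the URL, and the p flag prints it.

```shell
# -E enables extended regexes, -n suppresses default output;
# only lines matching the timestamp pattern are printed, reduced to \1 (the URL).
printf -- '--2021-12-07 12:29:25--  https://example.com/\n' \
  | sed -En 's/^--[[:digit:]: -]{19}--  (.*)/\1/p'
```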

Compared to the curl-based solution…

The curl-based solution omits the first URL in the redirect chain:

$ curl -v -L https://rb.gy/x7cg8r 2>&1 | grep -i "^< location:"
< Location: https://t.co/BAvVoPyqNr
< location: https://unix.stackexchange.com/

Furthermore, it sends 4354.99% more bytes to the second pipeline stage:

$ wget -S --spider https://rb.gy/x7cg8r 2>&1 | wc -c
2728

$ curl -v -L https://rb.gy/x7cg8r 2>&1 | wc -c
121532

$ awk 'BEGIN {printf "%.2f\n", (121532-2728)/2728*100}'
4354.99

In my benchmarking, the wget solution was slightly (4%) faster than the curl-based solution.

Update: See my curl-based answer for the fastest solution.

Answered By: Robin A. Meade

A proper curl-based solution

url=https://rb.gy/x7cg8r
while redirect_url=$(
  curl -I -s -S -f -w "%{redirect_url}\n" -o /dev/null "$url"
); do
  echo "$url"
  url=$redirect_url
  [[ -z "$url" ]] && break
done

Result:

https://rb.gy/x7cg8r
https://t.co/BAvVoPyqNr
https://unix.stackexchange.com/

This is 12% faster than my wget-based solution.

Benchmark details

cd "$(mktemp -d)"

cat <<'EOF' >curl-based-solution
#!/bin/bash
url=https://rb.gy/x7cg8r
while redirect_url=$(
  curl -I -s -S -f -w "%{redirect_url}\n" -o /dev/null "$url"
); do
  echo "$url"
  url=$redirect_url
  [[ -z "$url" ]] && break
done
EOF
chmod +x curl-based-solution

cat <<'EOF' >wget-based-solution
#!/bin/bash
url=https://rb.gy/x7cg8r
wget -S --spider "$url" 2>&1 \
 | grep -oP '^--[[:digit:]: -]{19}--  \K.*'
EOF
chmod +x wget-based-solution

$ hyperfine --warmup 5 ./wget-based-solution ./curl-based-solution
Benchmark #1: ./wget-based-solution
  Time (mean ± σ):      1.397 s ±  0.025 s    [User: 90.3 ms, System: 19.7 ms]
  Range (min … max):    1.365 s …  1.456 s    10 runs
 
Benchmark #2: ./curl-based-solution
  Time (mean ± σ):      1.250 s ±  0.015 s    [User: 72.4 ms, System: 23.4 ms]
  Range (min … max):    1.229 s …  1.277 s    10 runs
 
Summary
  './curl-based-solution' ran
    1.12 ± 0.02 times faster than './wget-based-solution'
Answered By: Robin A. Meade