What is the difference between "sort -u" and "sort | uniq"?
Everywhere I see someone needing a sorted, unique list, they always pipe to sort | uniq. I’ve never seen any examples where someone uses sort -u instead. Why not? What’s the difference, and why is it better to use uniq than the -u flag of sort?
sort | uniq existed before sort -u, and is compatible with a wider range of systems, although almost all modern systems do support -u; it’s POSIX. It’s mostly a throwback to the days when sort -u didn’t exist (and people don’t tend to change their methods if the way that they know continues to work).
The two were likely merged because removing duplicates within a file requires sorting (at least in the standard case), and deduplication is an extremely common use case of sort. sort -u is also faster internally because it can do both operations at the same time, and because it doesn’t require IPC (inter-process communication) between uniq and sort. Especially if the file is big, sort -u will likely use fewer intermediate files to sort the data.
On my system I consistently get results like this:
$ dd if=/dev/urandom of=/dev/shm/file bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 8.95208 s, 11.7 MB/s
$ time sort -u /dev/shm/file >/dev/null
real    0m0.500s
user    0m0.767s
sys     0m0.167s
$ time sort /dev/shm/file | uniq >/dev/null
real    0m0.772s
user    0m1.137s
sys     0m0.273s
It also doesn’t mask the return code of sort, which may be important (in modern shells there are ways to get this, for example bash’s $PIPESTATUS array, but this wasn’t always true).
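To illustrate the masking (a minimal sketch, assuming bash): the exit status of a pipeline is that of its last command, so a failure in sort would be hidden by uniq unless you consult the PIPESTATUS array:

```shell
# "$?" only reflects the last command of a pipeline, so the
# failing first stage is masked:
false | true
echo "$?"                  # prints 0

# bash's PIPESTATUS array records the status of every stage:
false | true
echo "${PIPESTATUS[@]}"    # prints "1 0"
```

With sort | uniq, a sort failure (for instance, running out of space for temporary files) would similarly be hidden behind uniq’s exit status.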
One difference is that uniq has a number of useful additional options, such as skipping fields for comparison and counting the number of repetitions of a value. sort’s -u flag only implements the functionality of the unadorned uniq command.
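For example (an illustration not in the original answer; the exact spacing of the count column may vary between implementations), counting repetitions with -c or printing only the duplicated lines with -d has no sort -u equivalent:

```shell
$ printf '%s\n' apple pear apple apple | sort | uniq -c
      3 apple
      1 pear
$ printf '%s\n' apple pear apple apple | sort | uniq -d
apple
```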
With POSIX compliant sort and uniq implementations (GNU uniq is currently not compliant in that regard), there’s a difference in that sort uses the locale’s collating algorithm to compare strings (it will typically use strcoll() to compare strings) while uniq checks for byte-value identity (it will typically use strcmp()¹).
That matters for at least two reasons.
In some locales, especially on GNU systems, there are different characters that sort the same. For instance, in the en_US.UTF-8 locale on a GNU system, all the ①②③④⑤⑥⑦⑧⑨⑩… characters² and many others sort the same because their sort order is not defined. The 0123456789 Arabic digits sort the same as their Eastern Arabic-Indic counterparts (٠١٢٣٤٥٦٧٨٩).
So for sort -u, ① sorts the same as ② and 0123 the same as ٠١٢٣, so sort -u would retain only one of each, while for uniq (without options such as -i or -f), ① is different from ② and 0123 different from ٠١٢٣, so uniq would consider all 4 unique.
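Whether such locales behave this way varies between systems, but the same effect can be reproduced portably with case folding (my own illustration, using sort -f instead of locale-dependent characters): sort considers the folded lines equal, while uniq, comparing bytes, does not:

```shell
# sort -f folds case for comparison, so A and a form one equal
# run and -u keeps only the first of them:
$ printf '%s\n' A a | sort -f -u
A
# uniq compares raw bytes, so A and a stay distinct:
$ printf '%s\n' A a | sort -f | uniq
A
a
```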
Also, strcoll() can only compare strings of valid characters (the behaviour is undefined as per POSIX when the input has sequences of bytes that don’t form valid characters), while strcmp() doesn’t care about characters since it only does byte-to-byte comparison. That’s another reason why sort -u may not give you all the unique lines if some of them don’t form valid text. sort | uniq, while still unspecified on non-text input, in practice is more likely to give you the unique lines for that reason.
Besides those subtleties, one thing that hasn’t been noted so far is that uniq compares the whole line lexically, while sort -u compares based on the sort specification given on the command line.
$ printf '%s\n' 'a b' 'a c' | sort -uk 1,1
a b
$ printf '%s\n' 'a b' 'a c' | sort -k 1,1 | uniq
a b
a c
$ printf '%s\n' 0 -0 +0 00 '' | sort -n | uniq
0
-0
+0
00
$ printf '%s\n' 0 -0 +0 00 '' | sort -nu
0
¹ Prior versions of the POSIX spec caused confusion, however, by listing the LC_COLLATE variable as one affecting uniq; that was removed in the 2018 edition and the behaviour clarified following the discussion mentioned above. See the corresponding Austin Group bug.
² 2019 edit: those have since been fixed, but over 95% of Unicode code points still have an undefined order as of version 2.30 of the GNU libc.
I prefer to use sort | uniq because when I try to use the -u (eliminate duplicates) option to remove duplicates involving mixed-case strings, it is not that easy to understand the result.
Note: before you can run the examples below, you need to simulate the standard C collating sequence by doing the following:
LC_ALL=C
export LC_ALL
For example, suppose I want to sort a file and remove duplicates, while at the same time keeping the different cases of strings distinct.
$ cat short          # file to sort
Pear
Pear
apple
pear
Apple
$ sort short         # normal sort (in the C collating sequence)
Apple                # the lower-case words are at the end
Pear
Pear
apple
pear
$ sort -f short      # correctly sorts, ignoring the C collating order
Apple                # but duplicates are still there
apple
Pear
Pear
pear
$ sort -fu short     # adding -u to remove duplicates makes it
apple                # difficult to ascertain the logic sort uses to
Pear                 # remove them (why did it remove pear instead of Pear?)
This confusion is avoided by not using the -u option to remove duplicates. Using uniq is more predictable: the following first sorts, ignoring case, and then passes the result to uniq to remove the duplicates.
$ sort -f short | uniq
Apple
apple
Pear
pear
Another difference I found out today arises when sorting based on a delimiter: sort -u applies the unique flag only to the column that you sort on.
$ cat input.csv
3,World,1
1,Hello,1
2,Hello,1
$ cat input.csv | sort -t',' -k2 -u
1,Hello,1
3,World,1
$ cat input.csv | sort -t',' -k2 | uniq
1,Hello,1
2,Hello,1
3,World,1
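A related subtlety (my addition, not part of the answer above): -k2 defines a key running from field 2 to the *end of the line*, so later fields still take part in the comparison; restricting the key with -k2,2 changes what -u treats as a duplicate. A sketch with hypothetical data:

```shell
# Keys are "Hello,1" and "Hello,2" (field 2 to end of line), which
# differ, so both lines survive -u:
$ printf '%s\n' '1,Hello,1' '2,Hello,2' | sort -t',' -k2 -u
1,Hello,1
2,Hello,2
# Key restricted to field 2 only ("Hello" vs "Hello"), so the lines
# are duplicates and -u keeps the first:
$ printf '%s\n' '1,Hello,1' '2,Hello,2' | sort -t',' -k2,2 -u
1,Hello,1
```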