How to recursively find the amount of storage used by a directory?

I know you are able to see the byte size of a file when you do a long listing with ll or ls -l. But I want to know how much storage is in a directory, including the files within that directory and the subdirectories within those, etc. I don’t want the number of files, but instead the amount of storage those files take up.

So: how do I find out how much storage a certain directory uses, recursively? I’m guessing, if there is a command for this, that it reports the result in bytes.

Asked By: Rob Avery IV


Try doing this: (replace dir with the name of your directory)

du -s  dir

That gives the cumulative disk usage (not size) of unique files (hard links to the same file are counted only once). Files of any type are included, directories too, though in practice only regular files and directories take up disk space.

That’s expressed in 512-byte units with POSIX-compliant du implementations (including GNU du when POSIXLY_CORRECT is in the environment), but some du implementations give you kibibytes instead. Use -k to guarantee that you get kibibytes.
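As a quick check of the units, here is a minimal sketch (it assumes GNU du, a writable /tmp, and uses a made-up throwaway directory name):

```shell
# Create a throwaway directory holding a single 4 KiB file.
mkdir -p /tmp/du-units-demo
head -c 4096 /dev/zero > /tmp/du-units-demo/file

du -sk /tmp/du-units-demo                   # -k: always kibibytes
POSIXLY_CORRECT=1 du -s /tmp/du-units-demo  # 512-byte units with GNU du
```

The second number should be exactly twice the first, since a 512-byte unit is half a kibibyte.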

For the size (not disk usage) in bytes, with the GNU implementation of du or compatible:

du -sb dir

or (also non-standard), for human-readable sizes (disk usage, not apparent size):

du -sh dir
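To see the variants side by side, a small sketch (GNU du assumed; the demo path is made up):

```shell
# A throwaway directory containing one tiny file.
mkdir -p /tmp/du-bytes-demo
printf 'hello\n' > /tmp/du-bytes-demo/f   # a 6-byte file

du -sb /tmp/du-bytes-demo   # apparent size in bytes (includes the directory itself)
du -sk /tmp/du-bytes-demo   # disk usage in KiB
du -sh /tmp/du-bytes-demo   # disk usage, human readable
```

Note that -sb reports far more than 6 bytes here, because the directory's own apparent size is included in the total.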

See man du (the GNU implementation’s manual page documents the non-standard extensions above).

Answered By: Gilles Quénot

In Unix, a directory just contains names and references to filesystem objects (inodes, which can refer to directories, files, or some other exotic things). A file can appear under several names in the same directory, or be listed in several directories. So “space used by the directory and the files inside” really makes no sense, as the files aren’t “inside”.

That said, the command du(1) lists the space used by a directory and everything reachable through it; du -s gives a summary, and with -h some implementations such as GNU du give “human readable” output (e.g., kilobytes, megabytes).
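The hard-link point is easy to verify with a sketch (made-up paths; any du should behave this way):

```shell
# One 8 KiB file reachable under two names in the same directory.
mkdir -p /tmp/du-link-demo
head -c 8192 /dev/zero > /tmp/du-link-demo/a
ln -f /tmp/du-link-demo/a /tmp/du-link-demo/b   # second name, same inode

du -sk /tmp/du-link-demo   # ~8 KiB of file data counted once, not twice
```

The total stays near 8 KiB (plus the directory itself) rather than doubling, because du deduplicates by inode.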

Answered By: vonbrand

You just do:

du -sh /path/to/directory

where -s is for summary and -h for human-readable output (a non-standard option). Use the standard -k instead to get KiB.

Be careful, however: unlike ls, this will not show you file size but disk usage (i.e., a multiple of the filesystem block size). The file itself may actually be smaller, or even bigger.

So to get the file sizes, you can use the --apparent-size option:

du -sh --apparent-size /path/to/directory

This is the amount of data that would be transferred over the network if you had to copy the files.

Indeed, a file may have "holes" in it (a sparse file), may be smaller than the filesystem block size, may be compressed at the filesystem level, etc. The man page explains this.
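A sparse file makes the difference dramatic; here is a sketch (GNU coreutils truncate assumed; the path is made up):

```shell
# A 100 MiB file that is entirely a hole: no data blocks allocated.
mkdir -p /tmp/du-sparse-demo
truncate -s 100M /tmp/du-sparse-demo/sparse

du -sh /tmp/du-sparse-demo                   # disk usage: a few KiB at most
du -sh --apparent-size /tmp/du-sparse-demo   # apparent size: ~100M
```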

As Nicklas points out, you may also use the ncdu disk usage analyser. Launched from within a directory it will show you what folders and files use disk space by ordering them biggest to smallest.


Answered By: Totor

An alternative to the already mentioned du command would be ncdu which is a nice disk usage analyzer for use in terminal. You may need to install it first, but it is available in most of the package repositories.


Answered By: Niklas

Note that if you want to know the size of every subfolder inside a directory, you can also use the -d or --max-depth option of du (which takes an argument: the recursion depth limit).

For instance:

du -h /path/to/directory -d 1

will show you something like:

4.0K /path/to/directory/folder1
16M  /path/to/directory/folder2
2.4G /path/to/directory/folder3
68M  /path/to/directory/folder4
8G   /path/to/directory/folder5

PS: Passing 0 as the depth limit is equivalent to the -s option.
These two commands will give you the same result (the recursive, human-readable size of the given directory):

du -h /path/to/directory -d 0
du -sh /path/to/directory
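The equivalence is easy to check with a throwaway directory (a sketch; GNU du assumed):

```shell
# Both commands report a single line with the same total.
mkdir -p /tmp/du-depth-demo/sub
echo data > /tmp/du-depth-demo/sub/file

du -sh /tmp/du-depth-demo
du -h -d 0 /tmp/du-depth-demo
```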
Answered By: Flo Schild

For me, on OS X El Capitan, the argument order was reversed: the depth option had to come before the path.

du -h -d 1 /path/to/directory
Answered By: GZepeda

You can use “” from the awk Velour library:

ls -ARgo "$@" | awk '{q += $3} END {print q}'
Answered By: Zombo

This will give you a list of sizes from the current directory, including folders (recursive) and files.

$ du -hs *
7.5M    Applications
9.7M    Desktop
 85M    Documents
 12G    Google Drive
 52G    Library
342M    Movies
8.3M    Music
780M    Pictures
8.5G    Projects
8.0K    Public
 16K    client1.txt
Answered By: Simon Liu

I like the following approach:

du -schx .[!.]* * | sort -h


  • s: display only a total for each argument
  • c: produce a grand total
  • h: print sizes in a human-readable format
  • x: skip directories on different file systems
  • .[!.]* *: Summarize disk usage of each file, recursively for directories (including "hidden" ones)
  • | sort -h: Sort based on human-readable numbers (e.g., 2K 1G)
Answered By: Eduardo Baitello

This works:

To get the size of each directory under the current directory:

du -h --max-depth=1 .

In general:

du -h --max-depth=1 <dirpath>
Answered By: user3303020

This is the best for me:

find . -type d -exec du -sk {} \;

You will get all the directories recursively, with the root directory's size at the top:

588591456   ./photo
2171676 ./photo/2004
163916  ./photo/2004/AAA
114252  ./photo/2004/BBB
49660   ./photo/2004/CCC
7238148 ./photo/2005
184 ./photo/2005/.thumbcache
33592   ./photo/2005/AAA
228 ./photo/2005/BBB

Answered By: Zioalex

ncdu (ncurses du)

ncdu was previously mentioned in another answer, but I think that incredible tool deserves a longer description.

This awesome CLI utility allows you to easily find the large files and directories (recursive total size) interactively.

For example, from inside the root of a well known open source project, first install and launch it:

sudo apt install ncdu
ncdu

The outcome is an interactive, size-sorted listing of the tree (screenshot omitted).

Then I press down and right on my keyboard to go into the drivers folder, and I see the same view for that subtree (screenshot omitted).

ncdu calculates file sizes recursively only once, at startup, for the entire tree, so it is efficient. This way you don’t have to recalculate sizes as you move between subdirectories while you try to determine what the disk hog is.

"Total disk usage" vs "Apparent size" is analogous to du, and I have explained it at:

Project homepage:

Related questions:

Tested in Ubuntu 16.04.

Ubuntu list root

You likely want:

ncdu --exclude-kernfs -x /


  • -x stops crossing of filesystem barriers
  • --exclude-kernfs skips special filesystems like /sys

MacOS 10.15.5 list root

To properly list root / on that system, I also needed --exclude-firmlinks, e.g.:

brew install ncdu
cd /
ncdu --exclude-firmlinks

otherwise it seemed to go into some infinite link loop, likely due to the firmlinks that macOS introduced in that release.

The things we learn for love.

ncdu non-interactive usage

Another cool feature of ncdu is that you can first dump the sizes in a JSON format, and later reuse them.

For example, to generate the file run:

ncdu -o ncdu.json

and then examine it interactively with:

ncdu -f ncdu.json

This is very useful if you are dealing with a very large and slow filesystem like NFS.

This way, you can first export only once, which can take hours, and then explore the files, quit, explore again, etc.

The output format is just JSON, so it is easy to reuse it with other programs as well, e.g.:

ncdu -o -  | python -m json.tool | less

reveals a simple directory tree data structure:

        "progname": "ncdu",
        "progver": "1.12",
        "timestamp": 1562151680
            "asize": 4096,
            "dev": 2065,
            "dsize": 4096,
            "ino": 9838037,
            "name": "/work/linux-kernel-module-cheat/submodules/linux"
            "asize": 1513,
            "dsize": 4096,
            "ino": 9856660,
            "name": "Kbuild"
                "asize": 4096,
                "dsize": 4096,
                "ino": 10101519,
                "name": "net"
                    "asize": 4096,
                    "dsize": 4096,
                    "ino": 11417591,
                    "name": "l2tp"
                    "asize": 48173,
                    "dsize": 49152,
                    "ino": 11418744,
                    "name": "l2tp_core.c"

Tested in Ubuntu 18.04.
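Since the export is plain JSON (an array [majorver, minorver, metadata, tree], where a directory is a nested array headed by its own info object; this layout is an assumption based on ncdu 1.x exports), totals can be recomputed from it directly. A sketch, with ncdu.json as a hypothetical file name:

```shell
# Sum every "asize" field in an ncdu export.
python3 -c '
import json, sys

def walk(node):
    if isinstance(node, list):          # a directory: [info, child, child, ...]
        return sum(walk(c) for c in node)
    if isinstance(node, dict):          # a file, or a directory info object
        return node.get("asize", 0)
    return 0

data = json.load(open(sys.argv[1]))
print(walk(data[3]))                    # data[3] is the root of the tree
' ncdu.json
```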

To find the total size of the files contained in a folder recursively (omitting symlinks, directory sizes, and the implied . and ..), I customized Zombo's answer above:

ls -ARgo "$@" | awk '{if ($1 ~ /^-/) {q += $3}} END {print q}'

I needed this to check the upload of a web application's local storage to a blob storage (Azure), comparing the size in bytes of the files in the remote and local directories (the Azure blob storage in use doesn't store files in directories, so I needed to sum just the file sizes).

To do this, it sums just the ls size column of the rows starting with the - character, so:

ls -ARgo lists recursively (R) the contents of a directory with byte sizes, omitting the implied . and .. (A), without listing the owner (g) and group (o) columns.

~/Scrivania/my_folder$ ls -ARgo
total 1020
-rw-rw-r-- 1 894543 gen  9 09:53 photo.png
-rw-rw-r-- 1 141318 feb  1 09:28 ryxbb3kkit1nfnxwzu7i.webp
drwxrwxr-x 2   4096 feb  1 11:52 sub_folder

./sub_folder:
total 864
-rw-rw-r-- 1 137859 gen 13 10:26  186_20230106_corsi_Arogis.pdf
-rw-rw-r-- 1 257591 ott 20 12:49 '2010-03-27 - Piano Formativo SNaTSS-1-1.pdf'
-rw-rw-r-- 1 484746 ott 19 16:02  CNCOyXtu.html

The awk program sums the third column ($3) of each resulting ls row whose first column ($1) starts with a dash - (i.e., regular files).

It seems to be enough for my purpose, but be careful:

  • this is not proper disk usage; no folders or filesystem metadata are counted
  • it will not follow symlinks (you would need to change the regex inside awk)
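Parsing ls output is fragile (filenames containing newlines, or locales that shift columns, can break the sum). If GNU find is available, the same per-file byte total can be sketched without ls:

```shell
# Total apparent size in bytes of regular files under a directory,
# not following symlinks; hard links are counted once per name.
find /path/to/directory -type f -printf '%s\n' | awk '{q += $1} END {print q + 0}'
```

The `+ 0` forces awk to print 0 rather than an empty line when the directory contains no regular files.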
Answered By: Gianpaolo Scrigna

Including various parts of the other provided answers, here is my suggested command:

cd /path/to/directory/of/interest
sudo du -hsc *

This will list every directory with its recursive size, plus a grand total (the -c option).

Answered By: cdahms