Solving "mv: Argument list too long"?

I have a folder with more than a million files that needs sorting, but I can’t really do anything because mv fails every time with this message:

-bash: /bin/mv: Argument list too long

I’m using this command to move extension-less files:

mv -- !(*.jpg|*.png|*.bmp) targetdir/
Asked By: Dominique


xargs is the tool for the job. That, or find with -exec … {} +. These tools run a command several times, with as many arguments as can be passed in one go.
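As a toy illustration of how xargs batches arguments (here forcing at most two per invocation with -n 2; in real use xargs sizes each batch to fit the system limit automatically):

```shell
# xargs splits its input across several echo invocations.
printf '%s\n' a b c d e | xargs -n 2 echo
# a b
# c d
# e
```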

Both methods are easier to carry out when the variable argument list is at the end, which isn’t the case here: the final argument to mv is the destination. With GNU utilities (i.e. on non-embedded Linux or Cygwin), the -t option to mv is useful, to pass the destination first.
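For example, with GNU mv the two forms below are equivalent; the -t form keeps the variable part of the argument list at the end, which is exactly what xargs and find -exec … {} + need (targetdir and the file names are just illustrative):

```shell
# GNU mv: -t names the destination up front, so the file list comes
# last and can be split into as many batches as necessary.
mkdir -p targetdir
touch a.txt b.txt
mv -t targetdir -- a.txt b.txt   # same as: mv -- a.txt b.txt targetdir/
```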

If the file names contain no whitespace nor any of " ' \ and don’t start with -¹, then you can simply provide the file names as input to xargs. The echo command is a bash builtin, so it isn’t subject to the kernel’s command-line length limit. (If you get an error like !: event not found or a syntax error on !, enable extended globbing with shopt -s extglob first.)

echo !(*.jpg|*.png|*.bmp) | xargs mv -t targetdir --

If the file names may contain whitespace or quoting characters, generate a null-delimited list with printf '%s\0' instead, and pass the -0 option to xargs so it uses null-delimited input rather than its default quoted format:

printf '%s\0' !(*.jpg|*.png|*.bmp) | xargs -0 mv -t targetdir --

Alternatively, you can generate the list of file names with find. To avoid recursing into subdirectories, use -type d -prune (preceded by -name . so that the starting directory itself isn’t pruned). Since no action is attached to the image-name patterns, only the other files are moved.

find . -name . -o -type d -prune -o \
       -name '*.jpg' -o -name '*.png' -o -name '*.bmp' -o \
       -exec mv -t targetdir/ {} +

(This includes dot files, unlike the shell wildcard methods.)

If you don’t have GNU utilities, you can use an intermediate shell to get the arguments in the right order. This method works on all POSIX systems.

find . -name . -o -type d -prune -o \
       -name '*.jpg' -o -name '*.png' -o -name '*.bmp' -o \
       -exec sh -c 'mv "$@" "$0"' targetdir/ {} +

In zsh, you can load the mv builtin:

setopt extended_glob
zmodload zsh/files
mv -- ^*.(jpg|png|bmp) targetdir/

or if you prefer to let mv and other names keep referring to the external commands:

setopt extended_glob
zmodload -Fm zsh/files b:zf_*
zf_mv -- ^*.(jpg|png|bmp) targetdir/

or with ksh-style globs:

setopt ksh_glob
zmodload -Fm zsh/files b:zf_*
zf_mv -- !(*.jpg|*.png|*.bmp) targetdir/

Alternatively, using GNU mv and zargs:

autoload -U zargs
setopt extended_glob
zargs -- ./^*.(jpg|png|bmp) -- mv -t targetdir/ --

¹ With some xargs implementations, file names must also be valid text in the current locale. Some implementations also treat a file named _ as indicating the end of input (this can be avoided with -E '').

The operating system’s argument passing limit does not apply to expansions which happen within the shell interpreter. So in addition to using xargs or find, we can simply use a shell loop to break up the processing into individual mv commands:

for x in *; do case "$x" in *.jpg|*.png|*.bmp) ;; *) mv -- "$x" target ;; esac ; done

This uses only POSIX Shell Command Language features and utilities. This one-liner is clearer with indentation, with unnecessary semicolons removed:

for x in *; do
  case "$x" in
    *.jpg|*.png|*.bmp) 
       ;; # nothing
    *) # catch-all case
       mv -- "$x" target
       ;;
  esac
done
Answered By: Kaz

For a more aggressive solution than those previously offered, pull up your kernel source and edit include/linux/binfmts.h

Increase the size of MAX_ARG_PAGES to something larger than 32. This increases the amount of memory the kernel will allow for program arguments, thereby allowing you to specify your mv or rm command for a million files or whatever you’re doing. Recompile, install, reboot.
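For kernels old enough to still have this constant (it was removed in 2.6.23, when the limit became stack-based), the change is a single line; 64 here is just an illustrative value:

```c
/* include/linux/binfmts.h (kernels before 2.6.23) -- sketch only */
/* MAX_ARG_PAGES caps the pages reserved for argv + envp during exec. */
#define MAX_ARG_PAGES 64   /* default was 32, i.e. 128 KiB with 4 KiB pages */
```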

BEWARE! If you set this too large for your system memory, and then run a command with a lot of arguments BAD THINGS WILL HAPPEN! Be extremely cautious doing this to multi-user systems, it makes it easier for malicious users to use up all your memory!

If you don’t know how to recompile and reinstall your kernel manually, it’s probably best that you just pretend this answer doesn’t exist for now.

Answered By: Perkins

A simpler solution, using the extended glob "$origin"/!(*.jpg|*.png|*.bmp) instead of a catch-all case (in bash this requires shopt -s extglob):

for file in "$origin"/!(*.jpg|*.png|*.bmp); do mv -- "$file" "$destination" ; done

Thanks to @Score_Under

For a multi-line script you can do the following (notice the ; before the done is dropped):

for file in "$origin"/!(*.jpg|*.png|*.bmp); do        # skip *.jpg, *.png and *.bmp files
    mv -- "$file" "$destination"
done

To do a more generalized solution that moves all files, you can do the one-liner:

for file in "$origin"/*; do mv -- "$file" "$destination" ; done

Which looks like this if you do indentation:

for file in "$origin"/*; do
    mv -- "$file" "$destination"
done 

This takes every file in the origin and moves them one by one to the destination. The quotes around $file are necessary in case there are spaces or other special characters in the filenames.

Here is an example of this method that worked perfectly:

for file in "/Users/william/Pictures/export_folder_111210/"*.jpg; do
    mv -- "$file" "/Users/william/Desktop/southland/landingphotos/";
done
Answered By: Whitecat

You can get around that restriction while still using mv if you don’t mind running it a couple of times.

You can move portions at a time. Let’s say for example you had a long list of alphanumeric file names.

mv ./subdir/a* ./

That works. Then knock out another big chunk. After a few such moves, you can go back to using mv ./subdir/* ./
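If the names are spread across the alphabet, the chunking can itself be scripted with a brace expansion (a sketch assuming the same ./subdir layout as above; the || true covers leading characters with no matching files):

```shell
# One mv per leading character: each invocation sees only a
# fraction of the files, keeping each call under the kernel limit.
for c in {a..z} {0..9}; do
    mv ./subdir/"$c"* ./ 2>/dev/null || true
done
```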

Answered By: Kristian

If running only on the Linux kernel is enough for you, you can simply do

ulimit -S -s unlimited

That will work because the Linux kernel merged a patch in 2007 (kernel 2.6.23) that changed the argument limit to be based on the stack size: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b6a2fea39318e43fee84fa7b0b90d68bed92d2ba

If you don’t want unlimited stack space, you can say e.g.

ulimit -S -s 100000

to limit the stack to 100MB. Note that you need to set stack space to normal stack usage (usually 8 MB) plus the size of the command line you would want to use.

You can query the actual limit as follows:

getconf ARG_MAX

that will output the maximum command line length in bytes. For example, Ubuntu’s defaults set this to 2097152, which is 2 MiB. If I run with an unlimited stack I get 4611686018427387903, which is 2^62 − 1, or roughly 4.6 exabytes. If your command line exceeds that, I expect you to be able to work around the issue by yourself.
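To estimate whether a given expansion would fit, you can compare its byte count against ARG_MAX (a rough check only: the real limit also counts environment variables and per-argument pointer overhead):

```shell
# Bytes the expanded glob would occupy, versus the current limit.
printf '%s\0' * | wc -c
getconf ARG_MAX
```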

Note that if you use sudo, as in sudo mv *.dat somewhere/, running ulimit cannot fix the issue, because sudo resets the stack size before actually executing mv. To work around that, start a root shell with sudo -s, run ulimit -S -s unlimited there, and finally run the command without sudo in that root shell.

Answered By: Mikko Rantalainen

Sometimes it’s easiest to just write a little script, e.g. in Python:

import glob, shutil

for i in glob.glob('*.jpg'):
  shutil.move(i, 'new_dir/' + i)
Answered By: duhaime

Here are my two cents: append this to your .bash_profile (note that it shadows the real mv, which the function calls as /bin/mv)

mv() {
  if [[ -d $1 ]]; then   # a single directory: hand straight to /bin/mv
    /bin/mv "$1" "$2"
  elif [[ -f $1 ]]; then # a single file: hand straight to /bin/mv
    /bin/mv "$1" "$2"
  else
    # $1 is a glob pattern passed in quotes; leaving it unquoted
    # here expands it, so we can move one file at a time
    for f in $1
    do
      source_file=${f##*/}
      destination_path=${2%/} # get rid of trailing forward slash

      echo "Moving $f to $destination_path/$source_file"

      /bin/mv "$f" "$destination_path/$source_file"
    done
  fi
}
export -f mv

Usage

mv '*.jpg' ./destination/
mv '/path/*' ./destination/
Answered By: Ako

Try this:

find currentdir -name '*.*' -exec mv {} targetdir \;
  • find: search a folder
  • -name: match a desired criteria (here, any name containing a dot)
  • -exec: run the command that follows
  • {}: insert the filename found
  • \;: mark the end of the -exec command (the semicolon must be escaped from the shell)
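Note that the \; form runs one mv per file, which is slow for a million files. With GNU utilities you can batch the calls by combining -t with the {} + form of -exec (a sketch assuming GNU mv and findutils; -maxdepth 1 avoids descending into subdirectories):

```shell
# One mv per batch instead of one per file; find splits the list
# so each invocation stays under the argument-length limit.
find currentdir -maxdepth 1 -type f -exec mv -t targetdir -- {} +
```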
Answered By: tsveti_iko