How can we run a command stored in a variable?

$ ls -l /tmp/test/my dir/
total 0

I was wondering why the following ways to run the above command fail or succeed?

$ abc='ls -l "/tmp/test/my dir"'

$ $abc
ls: cannot access '"/tmp/test/my': No such file or directory
ls: cannot access 'dir"': No such file or directory

$ "$abc"
bash: ls -l "/tmp/test/my dir": No such file or directory

$ bash -c $abc
'my dir'

$ bash -c "$abc"
total 0

$ eval $abc
total 0

$ eval "$abc"
total 0
Asked By: Tim


This has been discussed in a number of questions on unix.SE; I’ll try to collect all the issues I can come up with here. Below is

  • a description of why and how the various attempts fail,
  • a way to do it properly with a function (for a fixed command), or
  • with shell arrays (Bash/ksh/zsh) or the $@ pseudo-array (POSIX sh), both of which also allow building the command line in pieces, e.g. if you only need to vary some options,
  • and notes about using eval to do this.

Some references at the end.

For the purposes here, it doesn’t matter much if it’s only the command arguments or also the command name that is to be stored in a variable. They’re processed similarly up to the point where the command is launched, at which point the shell just takes the first word as the name of the command to run.

Why it fails

The reason you face those problems is the fact that word splitting is quite simple and doesn’t lend itself to complex cases, and the fact that quotes expanded from variables don’t act as quotes, but are just ordinary characters.

(Note that the part about quotes is similar to every other programming language: e.g. char *s = "foo()"; printf("%s\n", s) does not call the function foo() in C, but just prints the string foo(). That’s different in macro processors, like m4, the C preprocessor, or Make (to some extent). The shell is a programming language, not a macro processor.)

On Unix-like systems, it’s the shell that processes quotes and variable expansions on the command line, turning it from a single string into the list of strings that the underlying system call passes to the launched command. The program itself doesn’t see the quotes the shell processed. E.g. if given the command ls -l "foo bar", the shell turns that into the three strings ls, -l and foo bar (removing the quotes), and passes those to ls. (Even the command name is passed, though not all programs use it.)
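One quick way to see exactly which argument strings a program receives is printf, which applies its format to each argument in turn (a diagnostic trick, not part of the question's setup):

```shell
# The shell passes two arguments here; printf brackets each one on its own line.
printf '<%s>\n' -l "foo bar"
# <-l>
# <foo bar>
```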

The cases presented in the question:

The assignment here assigns the single string ls -l "/tmp/test/my dir" to abc:

$ abc='ls -l "/tmp/test/my dir"'

Below, $abc is split on whitespace, and ls gets the three arguments -l, "/tmp/test/my and dir". The quotes here are just data, so there’s one at the front of the second argument and another at the back of the third. The option works, but the path gets incorrectly processed as ls sees the quotes as part of the filenames:

$ $abc
ls: cannot access '"/tmp/test/my': No such file or directory
ls: cannot access 'dir"': No such file or directory
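The splitting can be made visible with printf, which prints each argument it receives separately (just a demonstration, reusing the same abc):

```shell
abc='ls -l "/tmp/test/my dir"'
# Unquoted, $abc is split on whitespace; the double quotes are plain data.
printf '<%s>\n' $abc
# <ls>
# <-l>
# <"/tmp/test/my>
# <dir">
```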

Here, the expansion is quoted, so it’s kept as a single word. The shell tries to find a program literally called ls -l "/tmp/test/my dir", spaces and quotes included.

$ "$abc"
bash: ls -l "/tmp/test/my dir": No such file or directory

And here, $abc is split, and only the first resulting word is taken as the argument to -c, so Bash just runs ls in the current directory. The other words are arguments to bash, and are used to fill $0, $1, etc.

$ bash -c $abc
'my dir'
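How the leftover words land in $0, $1, … can be seen with a small -c script of our own (sh -c behaves the same way as bash -c here):

```shell
# Everything after the -c script fills $0, $1, ... of that script.
sh -c 'echo "0=$0 1=$1"' first second
# 0=first 1=second
```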

With bash -c "$abc", and eval "$abc", there’s an additional shell processing step, which does make the quotes work, but also causes all shell expansions to be processed again, so there’s a risk of accidentally running e.g. a command substitution from user-provided data, unless you’re very careful about quoting.

Better ways to do it

The two better ways to store a command are a) use a function instead, b) use an array variable (or the positional parameters).

Using functions:

Simply declare a function with the command inside, and run the function as if it were a command. Expansions in commands within the function are only processed when the command runs, not when it’s defined, and you don’t need to quote the individual commands. Though this really only helps if you have a fixed command you need to store (or more than one fixed command).

# define it
myls() {
    ls -l "/tmp/test/my dir"
}

# run it
myls

It’s also possible to define multiple functions and use a variable to store the name of the function you want to run in the end.
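A minimal sketch of that pattern, with hypothetical function names (echo stands in for the real commands):

```shell
greet_long() { echo "hello, long form"; }
greet_short() { echo "hi"; }

# A function name is a single word, so a plain variable holds it safely.
cmd=greet_short
"$cmd"     # prints: hi
```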

Using an array:

Arrays allow creating multi-word variables where the individual words contain white space. Here, the individual words are stored as distinct array elements, and the "${array[@]}" expansion expands each element as separate shell words:

# define the array
mycmd=(ls -l "/tmp/test/my dir")

# expand the array, run the command
"${mycmd[@]}"

The command is written inside the parentheses exactly as it would be written when running the command. The processing the shell does is the same in both cases; the only difference is that when defining the array, the shell only saves the resulting list of strings instead of using it to run a program.

The syntax for expanding the array later is slightly horrible, though, and the quotes around it are important.
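You can check what "${mycmd[@]}" expands to by printing the words instead of running them (Bash; printf brackets each element):

```shell
mycmd=(ls -l "/tmp/test/my dir")
# One line per element: the space in the path stays inside a single word.
printf '<%s>\n' "${mycmd[@]}"
# <ls>
# <-l>
# </tmp/test/my dir>
```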

Arrays also allow you to build the command line piece-by-piece. For example:

mycmd=(ls)               # initial command
if [ "$want_detail" = 1 ]; then
    mycmd+=(-l)          # optional flag, append to array
fi
mycmd+=("$targetdir")    # the filename

"${mycmd[@]}"            # run the command


or keep parts of the command line constant and use the array to fill just a part of it, like options or filenames:

options=(-x -v)
files=(file1 "file name with whitespace")

somecommand "${options[@]}" "${files[@]}" "$target"

(somecommand being a generic placeholder name here, not any real command.)

The downside of arrays is that they’re not a standard feature, so plain POSIX shells (like dash, the default /bin/sh in Debian/Ubuntu) don’t support them (but see below). Bash, ksh and zsh do, however, so it’s likely your system has some shell that supports arrays.

Using "$@"

In shells with no support for named arrays, one can still use the positional parameters (the pseudo-array "$@") to hold the arguments of a command.

The following should be portable script bits that do the equivalent of the code bits in the previous section. The array is replaced with "$@", the list of positional parameters. Setting "$@" is done with set, and the double quotes around "$@" are important (these cause the elements of the list to be individually quoted).

First, simply storing a command with arguments in "$@" and running it:

set -- ls -l "/tmp/test/my dir"
"$@"

Conditionally setting parts of the command line options for a command:

set -- ls
if [ "$want_detail" = 1 ]; then
    set -- "$@" -l
fi
set -- "$@" "$targetdir"

"$@"


Only using "$@" for options and operands:

set -- -x -v
set -- "$@" file1 "file name with whitespace"
set -- "$@" /somedir

somecommand "$@"

Of course, "$@" is usually filled with the arguments to the script itself, so you’ll have to save them somewhere before re-purposing "$@".
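One way to avoid clobbering them, assuming a POSIX shell: positional parameters are restored when a function returns, so repurposing "$@" inside a function leaves the script's own arguments untouched (echo stands in for a real command):

```shell
run_stored() {
    # This "set --" only affects the function's own "$@".
    set -- echo stored command with "two words"
    "$@"
}
run_stored
# afterwards, the script-level "$@" is unchanged
```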

To conditionally pass a single argument, you can also use the alternate value expansion ${var:+word} with some careful quoting. Here, we include -f and the filename only if the filename is nonempty:

file="foo bar"
somecommand ${file:+-f "$file"}
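To see both cases, a stand-in command can simply count its arguments (count_args is a hypothetical helper, not a real utility):

```shell
count_args() { echo "got $# argument(s)"; }   # stand-in for somecommand

file="foo bar"
count_args ${file:+-f "$file"}    # got 2 argument(s)

file=""
count_args ${file:+-f "$file"}    # got 0 argument(s)
```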

Using eval (be careful here!)

eval takes a string and runs it as a command, just like if it was entered on the shell command line. This includes all quote and expansion processing, which is both useful and dangerous.

In the simple case, it allows doing just what we want:

cmd='ls -l "/tmp/test/my dir"'
eval "$cmd"

With eval, the quotes are processed, so ls eventually sees just the two arguments -l and /tmp/test/my dir, as we want. eval also concatenates any arguments it gets, so eval $cmd can work in some cases too, but all runs of whitespace would be collapsed to single spaces. It’s still better to quote the variable there, as that ensures it reaches eval unmodified.
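The difference is easy to demonstrate with a string containing a run of spaces:

```shell
cmd='echo "a     b"'

eval "$cmd"    # a     b   (quoted: the string reaches eval intact)
eval $cmd      # a b       (unquoted: split first, so the run of spaces collapses)
```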

However, it’s dangerous to include user input in the command string to eval. For example, this seems to work:

read -r filename
cmd="ls -ld '$filename'"
eval "$cmd"

But if the user gives input that contains single quotes, they can break out of the quoting and run arbitrary commands! E.g. with the input '$(whatever)'.txt, your script happily runs the command substitution. And it could have been rm -rf (or worse) instead.

The issue there is that the value of $filename was embedded in the command line that eval runs. It was expanded before eval, which saw e.g. the command ls -ld ''$(whatever)'.txt'. You would need to pre-process the input to be safe.
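A harmless demonstration of the injection (echo INJECTED stands in for the hostile payload):

```shell
filename="'\$(echo INJECTED >&2)'.txt"   # simulated hostile input
cmd="ls -ld '$filename'"
eval "$cmd" 2>&1 | grep INJECTED
# INJECTED  -- the command substitution ran while eval processed the string
```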

If we do it the other way, keeping the filename in the variable, and letting the eval command expand it, it’s safer again:

read -r filename
cmd='ls -ld "$filename"'
eval "$cmd"

Note the outer quotes are now single quotes, so expansions within do not happen. Hence, eval sees the command ls -ld "$filename" and expands the filename safely itself.

But that’s not much different from just storing the command in a function or an array. With functions or arrays, there is no such problem, since the words are kept separate the whole time, and there’s no quote or other processing applied to the contents of $filename.

read -r filename
cmd=(ls -ld -- "$filename")
"${cmd[@]}"

Pretty much the only reason to use eval is one where the varying part involves shell syntax elements that can’t be brought in via variables (pipelines, redirections, etc.). However, you’ll then need to quote/escape everything else on the command line that needs protection from the additional parsing step (see link below). In any case, it’s best to avoid embedding input from the user in the eval command!
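When eval really is needed, variable data can be escaped before embedding it in the string. In Bash, printf %q produces a shell-quoted version of its argument (a sketch; the temporary file name is arbitrary):

```shell
file='/tmp/my file.txt'          # data containing whitespace
printf 'one\ntwo\n' > "$file"

# %q quotes the value so the extra parsing step reads it as one word.
cmd="wc -l < $(printf '%q' "$file")"
eval "$cmd"                      # counts the lines despite the space in the name
rm -f "$file"
```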


Answered By: ilkkachu

The second quote sign breaks the command.

When I run:

abc="ls -l '/home/wattana/Desktop'"

It gave me an error.

But when I run

abc="ls -l /home/wattana/Desktop"

There is no error at all

There is no way to fix this at the time (for me), but you can avoid the error by not having spaces in the directory name.

This answer said the eval command can be used to fix this, but it doesn’t work for me.

Answered By: Wattana Gaming

The safest way to run a (non-trivial) command is eval. Then you can write the command as you would do on the command line and it is executed exactly as if you had just entered it. But you have to quote everything.

Simple case:

abc='ls -l "/tmp/test/my dir"'
eval "$abc"

not so simple case:

# command: awk '! a[$0]++ { print "foo: " $0; }' inputfile
abc='awk '\''! a[$0]++ { print "foo: " $0; }'\'' inputfile'
eval "$abc"
Answered By: Hauke Laging

Another trick to run any (trivial or non-trivial) command stored in the abc variable is:

$ history -s "$abc"

and press Up-Arrow or Ctrl-P to bring it into the command line. Unlike the other methods, this way you can edit the command before execution if needed.

This command will append the variable’s content as a new entry to the Bash history and you can recall it by UpArrow.

In combination with another command to replay the last history entry, you can replay it without pressing a key:

$ fc -e : -1
Answered By: bloody

If it needs an array to execute, make it into an array!

IFS=' ' read -r -a command_arr <<< "${command}"
"${command_arr[@]}"

The first line converts the string into an array; the second line executes the command.

This does not appear to work with chained commands, e.g. using && or ;.

Answered By: ingyhere

Although @ilkkachu referenced bash’s word splitting, I think it would be good to explicitly point out the importance of the IFS shell variable.
For example in bash:

IFS=$'\x1a'
my_command=$'ls\x1a-l\x1a-a\x1a/tmp/test/my dir'
$my_command

runs the command stored in my_command as expected. \x1a is CTRL-Z (the ASCII SUB character) and a good delimiter choice. This works as long as the command to be executed does not contain any CTRL-Z character, which is arguably more likely than with whitespace. I also said bash, since ANSI-C style quoting $'…' is not POSIX as of now.

This technique works either when you have a hard coded command or when you are constructing a command. Just don’t forget to reset IFS to its previous value.
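A complete sketch of the technique, using printf as a stand-in command so the effect is visible, and using a subshell so the caller's IFS is left alone (one way to handle the reset):

```shell
(
    IFS=$'\x1a'      # split only on the \x1a delimiter, not on spaces
    my_command=$'printf\x1a<%s>\x1a-a\x1a/tmp/test/my dir'
    $my_command      # <-a></tmp/test/my dir>
)                    # subshell exits: IFS outside is untouched
echo
```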

Answered By: timowhatsoever