Appending a timestamp to each log file line

I have a log file and I need to add a timestamp to each line as it gets added. So I’m looking for a script that adds a timestamp to each log entry and that could run as a cron job.

Asked By: Kratos


The General Way

$ cat input.log | sed -e "s/^/$(date -R) /" >> output.log

How it works:

  1. cat reads the file input.log and simply prints it to its standard output stream.

    Normally the standard output is connected to a terminal, but this little script contains |, so the shell connects the standard output of cat to the standard input of sed.

  2. sed reads data (as cat produces it), processes it (according to the script provided with the -e option) and then prints it to its standard output. The script "s/^/$(date -R) /" means replace the start of every line with the text generated by the date -R command (the general form of the replace command is: s/pattern/replacement/).

  3. Then, because of >>, bash redirects the output of sed to a file called output.log (> means replace the file contents and >> means append to the end).

The problem is that $(date -R) is evaluated only once, when you run the script, so it inserts the same current timestamp at the beginning of every line. That timestamp may be far from the moment a message was actually generated. To avoid this you have to process messages as they are written to the file, not with a cron job.
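If you want a per-line timestamp from a plain shell loop rather than sed, here is a minimal sketch (file names are just examples): date is re-evaluated for every line instead of once for the whole command. Note that the timestamps still reflect processing time, not write time, so the cron caveat above stands.

```shell
# Demo input; in practice this would be your real log file.
printf 'first line\nsecond line\n' > input.log

# Read line by line so "$(date -R)" runs once per line,
# unlike the sed version where the shell expands it only once.
while IFS= read -r line; do
  printf '%s %s\n' "$(date -R)" "$line"
done < input.log >> output.log
```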


The standard-stream redirection described above is called a pipe. You can pipe data not only with | between commands in a script, but also through a FIFO file (a.k.a. a named pipe): one program writes to the file and another reads from it, receiving the data as the first program sends it.

Here’s an example:

$ mkfifo foo.log.fifo
$ while true; do cat foo.log.fifo | sed -e "s/^/$(date -R) /" >> foo.log; done;

# have to open a second terminal at this point
$ echo "foo" > foo.log.fifo
$ echo "bar" > foo.log.fifo
$ echo "baz" > foo.log.fifo

$ cat foo.log      

Tue, 20 Nov 2012 15:32:56 +0400 foo
Tue, 20 Nov 2012 15:33:27 +0400 bar
Tue, 20 Nov 2012 15:33:30 +0400 baz

How it works:

  1. mkfifo creates a named pipe

  2. while true; do ... ; done runs an infinite loop; at every iteration it runs cat, which reads foo.log.fifo and pipes it into sed. The loop blocks waiting for input data; sed then processes each received message and prints it to its standard output, which is redirected to foo.log.

    At this point you have to open a new terminal window because the loop occupies the current terminal.

  3. echo ... > foo.log.fifo prints a message to its standard output, which is redirected to the fifo file; sed receives it, processes it, and writes the result to the regular file.

An important note: a fifo, just like any other pipe, is useless unless both of its ends are connected to processes. If you try to write to a pipe, the current process blocks until someone reads the data on the other side; if you try to read from a pipe, the process blocks until someone writes data to it. The sed loop in the example above does nothing (sleeps) until you run echo.
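You can watch this hand-off without a second terminal by putting the reader loop in the background; a sketch with hypothetical file names:

```shell
mkfifo demo.fifo

# Reader in the background: stamps each line and appends it to demo.log.
# Its read blocks on the fifo until a writer shows up.
( while IFS= read -r line; do
    printf '%s %s\n' "$(date -R)" "$line"
  done < demo.fifo >> demo.log ) &
reader_pid=$!

# The writer side: echo blocks only until the reader picks the data up.
echo "hello" > demo.fifo

wait "$reader_pid"   # the reader exits once the writer closes the fifo
```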

For your particular situation, just configure your application to write log messages to the fifo file. If you can’t configure it, simply delete the original log file and create a fifo file in its place. But note again that if the sed loop dies for some reason, your program will block when it tries to write to the file, until someone reads from the fifo again.

The benefit is that the current timestamp is evaluated and attached to each message at the moment the program writes it to the file.

Asynchronous Processing With tailf

To make writing to the log and processing it more independent, you can use two regular files with tailf. The application writes messages to a raw file, and another process follows the writes asynchronously, reading new lines and writing the processed data to a second file.

Let’s take an example:

# will occupy current shell
$ tailf -n0 bar.raw.log | while read line; do echo "$(date -R) $line" >> bar.log; done;

$ echo "foo" >> bar.raw.log
$ echo "bar" >> bar.raw.log
$ echo "baz" >> bar.raw.log

$ cat bar.log

Wed, 21 Nov 2012 16:15:33 +0400 foo
Wed, 21 Nov 2012 16:15:36 +0400 bar
Wed, 21 Nov 2012 16:15:39 +0400 baz

How it works:

  1. Run a tailf process that follows writes to bar.raw.log and prints them to standard output, which is redirected into the infinite while read ... echo loop. This loop performs two actions: it reads data from standard input into a buffer variable called line, then writes a generated timestamp followed by the buffered data to bar.log.

  2. Write some messages to bar.raw.log. You have to do this in a separate terminal window because the first one is occupied by tailf, which follows the writes and does its job. Quite simple.

The pro is that your application will not block if you kill tailf. The cons are less accurate timestamps and a duplicated log file.
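Note that tailf has since been removed from util-linux; on current systems, coreutils tail -F does the same job. A self-contained sketch of the same idea (timeout just keeps the demo finite; file names are examples):

```shell
: > bar.raw.log   # the raw file the application writes to
: > bar.log       # the stamped file we produce

# Follow new lines for a few seconds and prepend a timestamp to each.
timeout 3 tail -n0 -F bar.raw.log | while IFS= read -r line; do
  printf '%s %s\n' "$(date -R)" "$line" >> bar.log
done &

sleep 1                     # give tail a moment to start following
echo "foo" >> bar.raw.log   # simulate the application logging a line
wait                        # the follower exits when timeout stops tail
```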

Answered By: Dmitry

You could use the ts perl script from moreutils:

$ echo test | ts %F-%H:%M:%.S
2012-11-20-13:34:10.731562 test
Answered By: Stéphane Chazelas

I used ts this way to get timestamped entries in the error log of a script I use to fill Cacti with statistics from a remote host.

To test Cacti I use rand to add some random values, which I use in temperature graphs to monitor my system’s temperature. The script collects system temperature statistics from my PC and sends them to a Raspberry Pi on which Cacti runs. Some time ago the network got stuck, and I only got SSH timeouts in my error log, unfortunately with no time entries. I didn’t know how to add a timestamp to a log entry, so after some searching on the Internet I stumbled upon this post, and this is what I made using ts.

To test it, I used an unknown option to rand, which produces an error on stderr. To capture it, I redirect stderr to a temporary file. Then I use cat to show the contents of that file, pipe it to ts with a time format I found in this post, and finally append the result to the error log. Then I clear the temporary file, otherwise I would get double entries for the same error.


* * * * * /home/monusr/bin/ 1>> /home/monusr/pushmonstats.log 2> /home/monusr/.err;/bin/cat /home/monusr/.err|/usr/bin/ts %F-%H:%M:%.S 1>> /home/monusr/pushmonstats.err;> /home/monusr/.err

This gives the following in my error log:

2014-03-22-19:17:53.823720 rand: unknown option -- '-l'

Maybe this is not a very elegant way to do it, but it works. I wonder if there is a more elegant approach to it.
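One way to drop the temporary file is to stamp stderr directly through bash process substitution. The sketch below uses a tiny shell function in place of ts (so it runs without moreutils), and the command in braces is just a stand-in for the real cron job; all names here are illustrative:

```shell
# Minimal stand-in for moreutils' ts: stamp each line it reads.
stamp() {
  while IFS= read -r line; do
    printf '%s %s\n' "$(date +%F-%H:%M:%S)" "$line"
  done
}

# Send only stderr through the stamper, straight into the error log;
# stdout still goes wherever cron would normally send it.
{ echo "normal output"; echo "rand: unknown option" >&2; } \
  2> >(stamp >> pushmonstats.err)

sleep 1   # give the asynchronous process substitution time to flush
```

With the real ts installed, `2> >(ts %F-%H:%M:%.S >> pushmonstats.err)` should achieve the same thing without the helper function.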

Answered By: Frank

Modified from Dmitry Vasilyanov’s answer.

In a bash script, you can redirect and wrap output with timestamps line by line on the fly.

When to use:

  • For bash script jobs, insert the line before the main script
  • For non-script jobs, create a script that calls the program.
  • For services controlled by the system, it’s better to follow the log file with tailf, as Dmitry Vasilyanov said.

Here’s an example named

exec &> >(while read line; do echo "$(date +'%h %d %H:%M:%S') $line" >> foo.log; done;)

echo "foo"
sleep 1
echo "bar" >&2
sleep 1
echo "foobar"

And the result:

$ bash
$ cat foo.log
May 12 20:04:11 foo
May 12 20:04:12 bar
May 12 20:04:13 foobar

How it works

  1. exec &> redirects stdout and stderr to the same place
  2. >( ... ) pipes the output to an asynchronous inner command
  3. The rest works as Dmitry Vasilyanov explained.

For example:

  • pipe timestamp and log to file

    exec &> >(while read line; do echo "$(date +'%h %d %H:%M:%S') $line" >> foo.log; done;)
    echo "some script commands"
  • Or print timestamp and log to stdout

    exec &> >(while read line; do echo "$(date +'%h %d %H:%M:%S') $line"; done;)
    echo "some script commands"

    then schedule it with an /etc/crontab entry:

    * * * * * root /path-to-script/ >> /path-to-log-file/foo.log
Answered By: Jethro Yu