How to detect and clean up junk journal files?

One of our Ubuntu 18.04 hosts was caught with 12 GB of *.journal files, far more than intended. Attempting to find out if they were worth keeping, I ran

journalctl --file $f

on each file older than today, which always resulted in either "Failed to open files" or "-- No entries --".
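A loop like the following reproduces that check across all old files (a sketch; `/var/log/journal` is the default persistent journal location on Ubuntu, so `JOURNAL_DIR` is an assumption to adjust for your system):

```shell
# Peek at every *.journal file older than one day.
# JOURNAL_DIR is an assumption: /var/log/journal is the default
# persistent location, adjust if your files live elsewhere.
JOURNAL_DIR="${JOURNAL_DIR:-/var/log/journal}"

find "$JOURNAL_DIR" -name '*.journal' -mtime +1 2>/dev/null |
while read -r f; do
    printf '== %s ==\n' "$f"
    # Healthy files print entries; junk ones print
    # "Failed to open files" or "-- No entries --".
    journalctl --file "$f" --no-pager 2>&1 | head -n 3
done
```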

Am I correct to conclude that such files are junk and can be discarded?

If they are, why do they exist? What is a supported way to clean them up? Is it worthwhile to regularly check systems for their existence?

Asked By: reinierpost


First of all, the journal is systemd's logging system. Journal files can be crucial when you need to know what happened on a machine.

As mentioned here, journalctl --file isn’t that usable.

As the journal files are rotated periodically, this form is not really usable for viewing complete journals.

Now, whether you consider the files useless is for you to decide. Normally, logs that are too old are not worth keeping, and you can delete them.

To do that, it is best to use journalctl itself and its vacuum options. For instance, you can run

sudo journalctl --vacuum-time=3weeks

to delete all journal files that are more than 3 weeks old.

For more info check the man page with man journalctl.

--vacuum-size=, --vacuum-time=, --vacuum-files=

Removes the oldest archived journal files until the disk space they
use falls below the specified size (specified with the usual "K", "M",
"G" and "T" suffixes), or all archived journal files contain no data
older than the specified timespan (specified with the usual "s", "m",
"h", "days", "months", "weeks" and "years" suffixes), or no more than
the specified number of separate journal files remain. Note that
running --vacuum-size= has only an indirect effect on the output shown
by --disk-usage, as the latter includes active journal files, while
the vacuuming operation only operates on archived journal files.
Similarly, --vacuum-files= might not actually reduce the number of
journal files to below the specified number, as it will not remove
active journal files.
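Pulling those options together, one might wrap them in a small helper (a sketch: the function name is mine, but each option is the documented one; --rotate archives the active files first so the vacuum can reach recently written data):

```shell
# Sketch: combine the documented vacuum criteria. The function name is
# our own; the journalctl options come from the man page excerpt above.
vacuum_old_journals() {
    sudo journalctl --rotate               # archive the active files first
    sudo journalctl --vacuum-time=3weeks   # drop data older than 3 weeks
    sudo journalctl --vacuum-size=1G       # ...or cap the total size at 1 GiB
    sudo journalctl --vacuum-files=10      # ...or keep at most 10 files
}

# Only attempt this on a systemd host:
if command -v journalctl >/dev/null 2>&1; then
    vacuum_old_journals || true
fi
```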

Also, I don’t believe it’s worthwhile to check this periodically. The best thing you can do is set an upper limit by uncommenting and changing the following in /etc/systemd/journald.conf.

For example:

SystemMaxUse=4G

Then restart the service: sudo systemctl restart systemd-journald.

Use man journald.conf for more information.
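If you would rather not edit /etc/systemd/journald.conf directly, systemd also reads drop-ins from /etc/systemd/journald.conf.d/. A sketch that stages the snippet in a temp file first (the drop-in directory and filename are conventional choices, and the install/restart steps need root, so they are left commented):

```shell
# Stage a journald drop-in that caps journal disk usage at 4G.
# The drop-in path and the "90-size.conf" name are conventional
# choices, not requirements.
snippet=$(mktemp)
printf '[Journal]\nSystemMaxUse=4G\n' > "$snippet"
cat "$snippet"

# Install and apply (root required; uncomment to use):
# sudo install -D -m 644 "$snippet" /etc/systemd/journald.conf.d/90-size.conf
# sudo systemctl restart systemd-journald
# journalctl --disk-usage   # confirm usage stays under the cap
```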


Edit:

As explained by @reinierpost

This question is not about regular old logs, it is about old logfiles
that do not appear to contain any logs at all (but they still occupy 8
MB each).

Try running journalctl --verify. If any files fail verification, the journal is corrupted and you should restart the service.

sudo systemctl restart systemd-journald

That should fix the problem for logs going forward.
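Because journalctl --verify exits non-zero when any file fails its checks, the test can also be scripted (a sketch; the check_journal function name is mine):

```shell
# Report whether the journal verifies cleanly. journalctl --verify
# exits non-zero if any journal file fails its checks.
check_journal() {
    if journalctl --verify >/dev/null 2>&1; then
        echo ok
    else
        echo corrupt
    fi
}

case "$(check_journal)" in
    ok)      echo "journal verified" ;;
    corrupt) echo "verification failed (or no journalctl); consider: sudo systemctl restart systemd-journald" ;;
esac
```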

As for why this happened in the first place, I don’t know, and it’s not easy to figure out. And yes, corrupted files are probably junk; you could delete them for a clean slate.

Answered By: Rayleigh

I ran into something similar. On one of my Ubuntu 22.04 machines I had 81G of files in /var/log/journal going back 3 years. Actual log data shown by journalctl --utc --no-pager | head goes back about 6 days. My journald.conf file used all default (sane) values. I started checking other VMs and found similar issues.

I tried:

  • Set SystemMaxUse=10G and restart systemd-journald
  • Set RuntimeMaxUse=10G and restart systemd-journald
  • Set SystemMaxFileSize=1G and SystemMaxFiles=30 and restart systemd-journald
  • Set MaxFileSec=1month and restart systemd-journald

None of these settings removed any of the old log files from /var/log/journal. I also tried:

  • journalctl --flush --rotate
  • journalctl --rotate --vacuum-time=10days

All of the old files were still there.

journalctl --disk-usage claims that journald is only using 496M.

journalctl --verify showed every check as "passed" but only checked a single subdirectory of /var/log/journal. I assume that all of the other directories were orphaned by journald, but they go right up to the current date, so this is an ongoing issue.

I ended up working around the problem by setting MaxFileSec=1month (which is supposed to be the default setting) and adding a daily cron job running find /var/log/journal -mtime +45 -delete to remove any junk that journald is supposed to delete but doesn’t.
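The age-based cleanup can be checked safely on a scratch directory before pointing it at /var/log/journal. In this sketch I've added a -name '*.journal' filter as an extra guard that the original command did not have; since -delete is irreversible, swap it for -print to dry-run:

```shell
# Exercise the age-based cleanup on a scratch tree first.
# The -name '*.journal' filter is a safety refinement over the
# original command; swap -delete for -print to dry-run.
scratch=$(mktemp -d)
touch -d '60 days ago' "$scratch/stale.journal"   # old enough to purge
touch "$scratch/fresh.journal"                    # should survive
find "$scratch" -name '*.journal' -mtime +45 -delete
ls "$scratch"    # only fresh.journal remains
```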

Answered By: Earl Ruby