How to detect if a process runs longer than X seconds, then do something with it?
I’m on Debian.
I am calling an ffmpeg process to generate an mp3. It gets called from a PHP script using shell_exec. This works fine 99% of the time.
Sometimes, though, the ffmpeg process doesn’t exit and I’m left with ffmpeg running for hours. I’m slowly tweaking the params and it’s happening less, but it still crops up on occasion.
When I look at the top processes I sometimes find it sitting there eating CPU and disk; the process hasn’t terminated.
993 www-data 20 0 252012 38904 27384 R 99.7 1.9 390:09.84 ffmpeg
I normally look for the process (and confirm it’s the right one by checking that the params it was executed with match my PHP script):
ps -eo args | grep ffmpeg
then get its process id and kill it, and go hunt down the file it was working on and trash that too
-rw-r--r-- 1 www-data www-data 14G Feb 9 21:20 cfcd208495d565ef66e7dff9f98764da.mp3 – uh oh
I’m not sure what words I should be googling for.
I’m looking for ideas for a supervisory script, run via supervisord or a cron job, that can list all processes that have been running for longer than X seconds, match each one’s arguments against a pattern (some kind of nasty regex, I imagine), kill the matching processes, and then go trash any files named in their arguments.
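A minimal sketch of such a supervisory script, assuming procps ps (whose etimes column, seconds since the process started, is available on Debian) and using a plain substring match rather than a regex. The function name kill_stale and the 300-second limit in the usage comment are made up for illustration:

```shell
#!/bin/sh
# kill_stale PATTERN MAX_SECONDS
# Kill every process whose command line contains PATTERN and that has
# been running for more than MAX_SECONDS (per ps's etimes column).
kill_stale() {
  pattern=$1
  max=$2
  # pid=,etimes=,args= suppresses the header line; args may contain spaces,
  # so it is read last and soaks up the rest of each line.
  ps -eo pid=,etimes=,args= | while read -r pid etimes args; do
    case "$args" in
      *"$pattern"*) ;;   # command line matches -- keep going
      *) continue ;;     # anything else is ignored
    esac
    if [ "$etimes" -gt "$max" ]; then
      echo "killing $pid (running ${etimes}s): $args"
      kill "$pid" 2>/dev/null
      # You could also parse the output file name out of $args here
      # and rm the partial mp3 it was writing.
    fi
  done
}

# e.g. from a cron job running every minute:
# kill_stale ffmpeg 300
```

This uses kill's default SIGTERM; a follow-up kill -9 for processes that ignore it could be bolted on the same way.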
You can use timeout, which runs a command with a time limit.
timeout 2 yes
will print ‘y’ for two seconds (it would run forever otherwise).
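A sketch of how this applies to the asker’s case, using GNU coreutils timeout; the 300-second limit and the file names are placeholders for whatever the PHP script actually passes:

```shell
# timeout exits with the command's own status, or 124 when the limit is hit.
timeout 5 sleep 1                                          # finishes in time
timeout 1 sleep 5 || echo "hit the limit: exit status $?"  # prints 124
# Applied to the ffmpeg call (paths are placeholders); -k sends a SIGKILL
# 10 seconds after the SIGTERM, in case ffmpeg ignores the latter:
#   timeout -k 10 300 ffmpeg -i input.wav output.mp3
```

Since shell_exec hands the command line to a shell, the PHP script can simply prefix its existing ffmpeg invocation with timeout, and check for exit status 124 to know the limit was hit and the partial mp3 needs deleting.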