Here is the situation I just had:
I recently updated the cache used by my wcdl shell program, in order to get the latest wallpapers from an awesome wallpaper website. I had updated my collection of wallpapers, particularly the car category, which just so happens to be the one that interests my dad. I wanted to send him the latest ones. Unfortunately, I'd completely forgotten to do that at the time, so they were now mixed in amongst my other 50,000+ wallpapers. OOPS. Luckily, I was able to programmatically get out of this pickle, as usual, because the shell is the best thing since sliced bread; hell, it's better than that!
I took a quick, semi-pure-shell approach to this task (in retrospect, using more non-shell utilities here would've been preferable, for performance reasons), running this command:
7zr a $HOME/Desktop/NewCars.7z `while read -a X; do [ "${X[0]}" == PAGE: ] || find -name "${X[1]}" -ctime -2; done < $HOME/.wcdl/2017-11-24_23\:36\:54.log`
This uses find simply to check that the file currently processed by read is both present and was created within the last two days. The code within the graves (command substitution) results in each one of those files being written to STDOUT, and the shell then substitutes that output into the 7zr command line, letting it know which files I wish to put into the archive.

There are definitely at least a few ways to improve this, but I was just doing it on the fly. Creating a find process for each file is really, really bad practice, but this being a one-hit thing, it worked fine for me. Were I to optimize it, I'd either ditch the while read approach and introduce grep, drastically speeding things up, or I would use a loop to build up the correct -name FILE arguments and expressions, in order to have a single find do most of the legwork.
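To illustrate that second idea, here's a rough sketch of building the -name expressions first and then running find once. It assumes the same log layout as the command above (lines beginning with PAGE: are skipped, and the wallpaper's filename is the second whitespace-separated field); the sample log contents below are made-up data standing in for the real ~/.wcdl log, since I can't reproduce it here.

```shell
# Sketch: accumulate one -name expression per file, then run a single find.
# The sample log below is hypothetical data in the assumed wcdl format.
log=$(mktemp)
cat > "$log" <<'EOF'
PAGE: http://example.com/cars?page=1
1 red_car.jpg
2 blue_car.jpg
EOF

set --                      # build the find expression list in "$@"
while read -r first name junk; do
    [ "$first" = PAGE: ] && continue
    if [ $# -eq 0 ]; then
        set -- -name "$name"
    else
        set -- "$@" -o -name "$name"
    fi
done < "$log"
rm -f "$log"

# One find process for the whole batch, instead of one per file.
[ $# -gt 0 ] && find . \( "$@" \) -ctime -2
```

Using set -- keeps it POSIX-friendly (no bash arrays needed), and the -o between each -name pair gives find a single OR'd expression, so the per-file process-spawning overhead disappears.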