Should I worry about SSD/HDD wear?

About writing shell scripts and making the most of your shell
Forum rules
Topics in this forum are automatically closed 6 months after creation.
eddie3000
Level 3
Posts: 136
Joined: Mon Jun 24, 2013 2:11 pm

Should I worry about SSD/HDD wear?

Post by eddie3000 »

I have a script I wrote that runs in the background on startup. My scripting abilities are very limited, but I'm slowly learning.

My script checks the contents of a folder every 5 seconds using "ls /folder | wc -l", and if there is anything in it, it uploads the files to a remote server one at a time until the folder is empty. I need this process to be fairly immediate, which is why I'm not using cron but instead a script with an infinite loop and a 5 second sleep between checks. It's probably not the best way of doing things, but so far it has worked without failing for a couple of weeks.
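For reference, a minimal sketch of that polling approach (not the poster's actual script; WATCH_DIR and do_upload are placeholders) might look like:

```shell
#!/bin/bash
# Hypothetical sketch of the polling loop described above.
# WATCH_DIR and do_upload are placeholders, not from the post.
WATCH_DIR="${WATCH_DIR:-/folder}"

have_files() {
    # same idea as "ls /folder | wc -l" in the post
    [ "$(ls "$WATCH_DIR" 2>/dev/null | wc -l)" -gt 0 ]
}

# The real script would loop forever, e.g.:
#   while true; do have_files && do_upload; sleep 5; done
```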

But I'm just worried about checking the folder every 5 seconds. Should I worry? I'm pretty sure the operating system does a lot of reading and writing without me knowing, but I don't know.

Thanks. Cheers!
Last edited by LockBot on Wed Dec 28, 2022 7:16 am, edited 1 time in total.
Reason: Topic automatically closed 6 months after creation. New replies are no longer allowed.
Pierre
Level 21
Posts: 13192
Joined: Fri Sep 05, 2008 5:33 am
Location: Perth, AU.

Re: Should I worry about SSD/HDD wear?

Post by Pierre »

it's the Reading of the drive, that is the concern.
& it applies to Both types of drives.

so, therefore, both types of drives have an "extra space" that is there to allow for that wear & tear factor.
- - so yeah - - both types of drive do have a limited life-span.

managing that 'life-span' is your real issue here . . .
- - some manufacturers do make more reliable drives & thus they do charge more, as well.
- - monitoring your drive is another thing that you can do.
in Linux Mint, that can be checked in the 'Disks' program - - SMART Data & self-tests.

NB: that should be Writing - the Reading of any drive is ok.
:mrgreen:
Please edit your original post title to include [SOLVED] - when your problem is solved!
and DO LOOK at those Unanswered Topics - - you may be able to answer some!.
rene
Level 20
Posts: 12240
Joined: Sun Mar 27, 2016 6:58 pm

Re: Should I worry about SSD/HDD wear?

Post by rene »

Pierre wrote: Tue Feb 19, 2019 8:07 am it's the Reading of the drive, that is the concern.
Err, well, no, it is writing that is or could be of concern: in the case of an SSD, period; in the case of an HDD one could theoretically posit seek wear and tear given the possibility of /folder having dropped from the cache, but even that does not happen in practice; the directory /folder will remain cached and a read of it will never hit the platters.

I.e., no, you're fine, a read never does anything detrimental.
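For anyone curious how much actually does get written, the kernel keeps cumulative per-device counters in /proc/diskstats; as a sketch (field 10 is sectors written per the kernel's iostats documentation, and device names vary per machine):

```shell
#!/bin/bash
# Print cumulative sectors written per block device from /proc/diskstats.
# Field 3 is the device name, field 10 the sectors-written counter.
awk '{ printf "%-12s %15d sectors written\n", $3, $10 }' /proc/diskstats
```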
eddie3000
Level 3
Posts: 136
Joined: Mon Jun 24, 2013 2:11 pm

Re: Should I worry about SSD/HDD wear?

Post by eddie3000 »

Thanks. Yes, I guess so too. After chatting about this subject over lunch with my workmates (none of them computer experts, though), we all came to the conclusion that reading couldn't be that bad. In fact, a bit more reading on this subject wouldn't do us any harm either :lol:

So it's just the writing that should be of concern then.

In an attempt to make a sort of watchdog script that checks whether the first script is running, and restarts it if necessary, the first script writes the current time to a file every now and then, and the watchdog script determines the first script's state from what it reads in that file. I guess this is not a good way to do things then. How could I work around that problem, so as not to write anything to disk?
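One common workaround, as a sketch (the file name and the 30-second staleness threshold here are made up, not from the thread): keep the heartbeat file on a RAM-backed tmpfs such as /dev/shm, so the periodic writes never reach the physical disk:

```shell
#!/bin/bash
# Heartbeat on tmpfs: /dev/shm is RAM-backed on most Linux systems,
# so writes here do not wear the SSD/HDD. Path is a placeholder.
HEARTBEAT="${HEARTBEAT:-/dev/shm/myscript.heartbeat}"

beat() {            # the worker script calls this every now and then
    date +%s > "$HEARTBEAT"
}

is_alive() {        # the watchdog calls this; "stale" after 30 seconds
    [ -f "$HEARTBEAT" ] || return 1
    last=$(cat "$HEARTBEAT")
    now=$(date +%s)
    [ $(( now - last )) -le 30 ]
}
```

Another option, avoiding the file entirely, is to have the watchdog check process liveness directly (e.g. with pgrep).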
rene
Level 20
Posts: 12240
Joined: Sun Mar 27, 2016 6:58 pm

Re: Should I worry about SSD/HDD wear?

Post by rene »

The best answer is deploying a systemd .path service for the task. That is, create as root a file e.g. /etc/systemd/system/folder.path consisting of

Code: Select all

[Unit]
Description=Monitor /folder

[Path]
DirectoryNotEmpty=/folder

[Install]
WantedBy=multi-user.target
and an accompanying .service file /etc/systemd/system/folder.service (i.e., same name as the .path unit) consisting of e.g.

Code: Select all

[Unit]
Description=Upload contents of /folder

[Service]
ExecStart=/usr/local/bin/upload-folder.sh
with an actual /usr/local/bin/upload-folder.sh script consisting of e.g.

Code: Select all

#!/bin/bash

shopt -s nullglob

for F in /folder/*; do
	logger -t "$0" Uploading "$F"
	rm -f "$F"
done
and made executable with chmod +x /usr/local/bin/upload-folder.sh. Then enable and start the service through

Code: Select all

$ sudo systemctl enable folder.path
$ sudo systemctl start folder.path
so as to invoke the upload script every time something appears in /folder.

Everything here is of course just a minimal example, to be replaced by the actual uploading, but be sure to keep the shopt -s nullglob and the for structure so as to gracefully handle the upload script finding, uploading and deleting more files than it was invoked for; that is, so a second invocation does not fail if a first has already uploaded and deleted "its file".

[EDIT] Note, I edited the #!/bin/sh on the script to #!/bin/bash a few minutes after posting; the nullglob is bash specific...
lsemmens
Level 11
Posts: 3936
Joined: Wed Sep 10, 2014 9:07 pm
Location: Rural South Australia

Re: Should I worry about SSD/HDD wear?

Post by lsemmens »

One from completely LEFT Field here.

Is it really that critical that data get uploaded with that frequency? Could you not just map the network/cloud drive and just save to that anyway? Or set a shutdown script to back everything up on shutdown.
Fully mint Household
Out of my mind - please leave a message
eddie3000
Level 3
Posts: 136
Joined: Mon Jun 24, 2013 2:11 pm

Re: Should I worry about SSD/HDD wear?

Post by eddie3000 »

Wow! Rene, what you just said is way too advanced for me. But I think you are right: what I really need is my script to run as a service. I might get it done someday.
Is it really that critical that data get uploaded with that frequency?
Yes, it is. I could get away with every ten or fifteen seconds. Due to the nature of my work, when work turns up it needs getting done ASAP. I need to send work files immediately as soon as they are ready for others to use. I sometimes can't stop working at all for long periods of time. But once it's over, I can be waiting for ages for more work to turn up. It's pretty unpredictable.

Other people use filezilla, or something similar. Or even worse, they use windows. But I like messing with my linux computer and I thought that automating the process would not only make me more efficient, but would also make my job a bit less stressful. I have a bunch of folders in my bookmarks and the script doing its thing. Of course, the script doesn't only upload files: it also checks file types, changes file formats if necessary, and compresses the files for faster upload; it also uses dropbox in case the ftp server fails (it does once in a while), and I'm trying to email the file as an attachment too in case dropbox fails as well (hasn't happened to me yet, but it's fun to try). And finally it keeps a record of everything for later consultation if necessary.

I will try and make it run as a service. It seems very reasonable. Even though I don't know how.

Thanks.
rene
Level 20
Posts: 12240
Joined: Sun Mar 27, 2016 6:58 pm

Re: Should I worry about SSD/HDD wear?

Post by rene »

eddie3000 wrote: Thu Feb 21, 2019 12:23 pm Wow! Rene, what you just said is way too advanced for me.
It really isn't. The above is a straightforward description of simply creating two files in /etc/systemd/system, the contents of which are given (only substitute the actual folder to be uploaded for "/folder"), plus the actual upload script in /usr/local/bin, which you already have in essence since you are doing this already.
eddie3000
Level 3
Posts: 136
Joined: Mon Jun 24, 2013 2:11 pm

Re: Should I worry about SSD/HDD wear?

Post by eddie3000 »

What's shopt -s nullglob for?

And don't I need to define F, or is it already defined? I don't know if I'm talking nonsense... :?

Should I remove the infinite loop in my script?

Does making the script run as a service restart it if it stops or hangs?
eddie3000
Level 3
Posts: 136
Joined: Mon Jun 24, 2013 2:11 pm

Re: Should I worry about SSD/HDD wear?

Post by eddie3000 »

Rene, I did what you said, and got no errors anywhere. But it does not seem to work. I can't see what I did wrong.
rene
Level 20
Posts: 12240
Joined: Sun Mar 27, 2016 6:58 pm

Re: Should I worry about SSD/HDD wear?

Post by rene »

Do the two /etc/systemd/system/ files have the same base name? I.e., are named "somename.path" and "somename.service"? Does the .path one name the proper directory to watch? Does said directory exist? Is the script that's invoked by the .service file marked executable (i.e., the chmod +x thing)? Does that script also use the proper directory? Did you start the service with systemctl start?

[EDIT] Do note that the system depends on the upload script removing the file from the watched directory once it's done with it. I took for granted that's how you have things setup but come to think of it that might not be the case. If you are wanting to act on NEW files appearing in /folder and do not want to delete them from it when they've been uploaded we need to tweak stuff... [/EDIT]

[RE-EDIT] Rereading OP it seems I did not "take for granted"; that you in fact described as much so forget that EDIT. Means I wouldn't know what the issue could be for you though, if first paragraph is all correct. All well here. [/RE-EDIT]

Note, when you now edit the .path or .service file you will need sudo systemctl daemon-reload to have the changes "register".

The shopt -s nullglob means that the "/folder/*" in the loop expands to <nothing> if the directory is empty, rather than to the literal "/folder/*" (i.e., with a literal star) as it would without that option set, which would have the loop body run once on that nonexistent name if the script were called while /folder was empty. That in turn is defending against the "race condition" of the upload script being invoked for one file appearing in /folder but finding more present when it in fact starts running; given that the script uploads and deletes all of them, a possible second invocation would find the directory empty (although that doesn't in fact happen for me when testing, there seems no reason it couldn't, so better safe than sorry).
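The nullglob behaviour is easy to see for yourself; a small demonstration (scratch directory, nothing from the thread):

```shell
#!/bin/bash
# Show what nullglob changes for a glob over an empty directory.
dir=$(mktemp -d)

# Without nullglob: an unmatched glob stays as the literal pattern,
# so the loop body runs once on the literal name "$dir/*".
without=$(for f in "$dir"/*; do echo "$f"; done | wc -l)   # 1

shopt -s nullglob
# With nullglob: an unmatched glob expands to nothing; the body never runs.
with=$(for f in "$dir"/*; do echo "$f"; done | wc -l)      # 0

echo "without=$without with=$with"
rmdir "$dir"
```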

You can forget that last paragraph; just keep the displayed structure. If you additionally keep the "logger" line then you'd be able to monitor things with for example sudo journalctl -b -f from another open terminal.

An infinite loop is not present in my example script; yes, you would remove it from any script you have. In the new situation the script is invoked each time anything appears in the watched directory. And as such, yes, there's nothing to die here (other than systemd itself): as long as the system is up it will fire. I'll also note that said watching by systemd is done with the help of the kernel (specifically, of inotify); it hence does not internally continuously read the directory or anything of that sort; this is a very efficient system, with nothing written or read that need not be...
Last edited by rene on Thu Feb 21, 2019 4:30 pm, edited 1 time in total.
rene
Level 20
Posts: 12240
Joined: Sun Mar 27, 2016 6:58 pm

Re: Should I worry about SSD/HDD wear?

Post by rene »

Will unfortunately have to be away until tomorrow; hope the above helps. Even more explicitly: the upload script is (should be) invoked whenever the watched directory transitions from being empty to being nonempty. That then means that if the directory is not expected to ever be empty we need to tweak the system.

If it doesn't work, others will be able to help as well if you post the two /etc/systemd/system files you have created and the actual upload script, redacted as little as possible.
murray
Level 5
Posts: 785
Joined: Tue Nov 27, 2018 4:22 pm
Location: Auckland, New Zealand

Re: Should I worry about SSD/HDD wear?

Post by murray »

rene wrote: Thu Feb 21, 2019 3:39 pm I'll also note that said watching by systemd is by the way with the help of the kernel (specifically, of inotify); that it hence does not internally continously read the directory or anything of that sort; that this is a very efficient system, with nothing written or read that need not be...
This is a great solution for this type of work process, I'll definitely have to remember it for the future (and investigate systemd in more depth).
Running Mint 19.3 Cinnamon on an Intel NUC8i5BEH with 16GB RAM and 500GB SSD
eddie3000
Level 3
Posts: 136
Joined: Mon Jun 24, 2013 2:11 pm

Re: Should I worry about SSD/HDD wear?

Post by eddie3000 »

First I created the new files with:

Code: Select all

sudo touch /etc/systemd/system/FTP.service
sudo touch /etc/systemd/system/FTP.path
sudo touch /usr/local/bin/FTP.sh
Then I edit and save them with:

Code: Select all

sudo xed /etc/systemd/system/FTP.service
sudo xed /etc/systemd/system/FTP.path
sudo xed /usr/local/bin/FTP.sh
I then do:

Code: Select all

sudo chmod +x /usr/local/bin/FTP.sh
Here are the contents of my /etc/systemd/system/FTP.path file:

Code: Select all

[Unit]
Description=Monitor /FTPFOLDER

[Path]
DirectoryNotEmpty=/FTPFOLDER

[Install]
WantedBy=multi-user.target
And here's /etc/systemd/system/FTP.service

Code: Select all

[Unit]
Description=Upload contents of /FTPFOLDER

[Service]
ExecStart=/usr/local/bin/FTP.sh
Here's /usr/local/bin/FTP.sh

Code: Select all

#!/bin/bash

shopt -s nullglob

for F in /folder/*; do
	

    countA=$(ls /FTPFOLDER| wc -l)
    
    while [ $countA -gt 0 ]
    do
        case "$(curl -s --max-time 3 -I http://google.com | sed 's/^[^ ]*  *\([0-9]\).*/\1/; 1q')" in
        [23]) echo "HTTP connectivity is up"
        mv -v /Dropbox/temp/* /Dropbox/junk/ 
        FILEA=$(ls /Dropbox/FTPFOLDER| sort -n | head -1)
        fileA="${FILEA%%.*}"
        extA="${FILEA##*.}"
        FTPFOLDER="/Dropbox/FTPFOLDER/"
        tempA="/Dropbox/temp/"
        envA="/Dropbox/enviados/FTPFOLDER/"
        dropA="/Dropbox/Dropbox/"
        aA="$FTPFOLDER$FILEA"
        BA="$tempA$FILEA"
        bA="$tempA$fileA"
        cA="$envA$fileA"
        CA="$envA$FILEA"
        dA="$dropA$fileA"
        mv "$aA" "$BA"
        sox "$BA" "$bA .wav"
        mv "$BA" "$CA"
        sox "$bA .wav" -r 48000 "$bA  .wav"
        lame -h -b 192 "$bA  .wav" "$bA.mp3"
        curl -T "$bA.mp3" ftp://user:password@172.132.123.2/FTP/EXT_FTPFOLDER/
        cp "$bA.mp3" "$dA.mp3"
        cp "$bA.mp3" "$cA.mp3"
        mv "$bA .wav" "$cA .wav"
        mv "$bA  .wav" "$cA  .wav"
        countA=$(ls /Dropbox/FTPFOLDER| wc -l);;

        5) echo "The web proxy won't let us through";;

        *) echo "The network is down or very slow";;
        esac

        echo "$countA files awaiting upload to EXT_FTPFOLDER."
        
    done


done
I finally do:

Code: Select all

sudo systemctl enable FTP.path
sudo systemctl start FTP.path
It should work right away, shouldn't it?
I reboot the computer and it still appears to do nothing.
If I manually run the script FTP.sh from the terminal with sudo it works.
What is wrong?


NOTE: In the script, don't let the two dropbox folders puzzle you. By mistake there are two dropbox folders, one inside another. The REAL dropbox folder is /Dropbox/Dropbox. I just happened to dump all the other folders in the parent dropbox folder, and never got around to renaming it to something else because I couldn't be bothered. Lazy me!
rene
Level 20
Posts: 12240
Joined: Sun Mar 27, 2016 6:58 pm

Re: Should I worry about SSD/HDD wear?

Post by rene »

Ah. That "/folder" in the loop of /usr/local/bin/upload-folder.sh was just yet another instance of "/folder" that was to be replaced with the actual watched folder, /FTPFOLDER in your case.

What for F in /folder/*; do ...; done does is loop through the files in the directory /folder/, setting the variable F to each of them in turn and executing the loop body. If you have no /folder directory it is, together with the nullglob, not surprising that the loop was executed a total of 0 times for you.

I minimally edited your FTP.sh script to fit the structure. Also changed the echo's to logger calls; as mentioned, you can monitor the system log with for example sudo journalctl -b -f from another terminal to see them appear; you'll also see systemd logging the script invocations there.

Code: Select all

#!/bin/bash

shopt -s nullglob

for FILEA in /FTPFOLDER/*; do
	case "$(curl -s --max-time 3 -I http://google.com | sed 's/^[^ ]*  *\([0-9]\).*/\1/; 1q')" in
	[23])	logger -t "$0" "HTTP connectivity is up"
		mv -v /Dropbox/temp/* /Dropbox/junk/ 
		fileA="${FILEA%%.*}"
		extA="${FILEA##*.}"
		FTPFOLDER="/Dropbox/FTPFOLDER/"
		tempA="/Dropbox/temp/"
		envA="/Dropbox/enviados/FTPFOLDER/"
		dropA="/Dropbox/Dropbox/"
		aA="$FTPFOLDER$FILEA"
		BA="$tempA$FILEA"
		bA="$tempA$fileA"
		cA="$envA$fileA"
		CA="$envA$FILEA"
		dA="$dropA$fileA"
		mv "$aA" "$BA"
		sox "$BA" "$bA .wav"
		mv "$BA" "$CA"
		sox "$bA .wav" -r 48000 "$bA  .wav"
		lame -h -b 192 "$bA  .wav" "$bA.mp3"
		curl -T "$bA.mp3" ftp://user:password@172.132.123.2/FTP/EXT_FTPFOLDER/
		cp "$bA.mp3" "$dA.mp3"
		cp "$bA.mp3" "$cA.mp3"
		mv "$bA .wav" "$cA .wav"
		mv "$bA  .wav" "$cA  .wav"
		;;
	5)	logger -t "$0" "The web proxy won't let us through"
		;;
	*)	logger -t "$0" "The network is down or very slow"
		;;
	esac
done
rene
Level 20
Posts: 12240
Joined: Sun Mar 27, 2016 6:58 pm

Re: Should I worry about SSD/HDD wear?

Post by rene »

Only rereading this now while paying some attention (the use of variables sort of invites skipping while reading): the script never seems to remove $FILEA, so I must say I'm wondering how this ever worked. Note in any case once more that you are supposed to remove an uploaded file from the watched folder once done with it.
eddie3000
Level 3
Posts: 136
Joined: Mon Jun 24, 2013 2:11 pm

Re: Should I worry about SSD/HDD wear?

Post by eddie3000 »

Sorry, I'm quite busy and haven't had a chance to carry on with the testing. I hope to have some time on Friday.

Do not take my script too seriously, it needs a lot of tidying up. Ideas are welcome. Of course, this thread is already going way off-topic, but that's fine with me.

I do have some questions regarding systemd.

If I were to drag five files into the upload folder, does systemd run my script 5 times?

My script picks the files one at a time. If it were to take a long time with the first file due to slow internet or just because it's a big file, would systemd run the script again because there are still four remaining files in the upload folder? How often does systemd check the upload folder?

Thanks for all your support. Seeya soon!
rene
Level 20
Posts: 12240
Joined: Sun Mar 27, 2016 6:58 pm

Re: Should I worry about SSD/HDD wear?

Post by rene »

As mentioned, the .path service as written triggers when the directory transitions from being empty to being nonempty. Not missing files is the reason for the for loop. As also indicated, systemd does not "check" the upload folder: it sets a kernel-managed watch on the folder and is notified directly by the kernel when something appears in it without needing to explicitly check.

When I looked at your script closer it seemed that your upload folder is in fact /Dropbox/FTPFOLDER rather than the /FTPFOLDER you indicated in the .path service. I rewrote the unit files and script on the assumption that /Dropbox/FTPFOLDER is indeed the directory to watch for incoming files. Your script keeps the temporary .wav files but this one doesn't, and it's also light on error handling; edit as desired.

Works For Me. Use sudo systemctl daemon-reload when you edit/replace the unit files and note once again you can monitor things with journalctl -b -f from a terminal.

/etc/systemd/system/FTP.path

Code: Select all

[Unit]
Description=Monitor /Dropbox/FTPFOLDER

[Path]
DirectoryNotEmpty=/Dropbox/FTPFOLDER

[Install]
WantedBy=multi-user.target
/etc/systemd/system/FTP.service

Code: Select all

[Unit]
Description=Upload contents of /Dropbox/FTPFOLDER

[Service]
ExecStart=/usr/local/bin/FTP.sh
/usr/local/bin/FTP.sh

Code: Select all

#!/bin/bash

shopt -s nullglob

ROOT=/Dropbox

case $(curl -s --max-time 3 -I http://google.com | awk '/^HTTP/ { print $2 }') in
2*)	;&
3*) 	for FILE in "$ROOT/FTPFOLDER"/*; do
		NAME="${FILE##*/}"
		NAME="${NAME%.*}.mp3"
		echo Uploading $NAME
		TEMP=$(mktemp)
		sox "$FILE" -t wav -r 48000 - | lame -h -b 192 - "$TEMP"
		curl -T "$TEMP" "ftp://user:password@172.132.123.2/FTP/EXT_FTPFOLDER/$NAME"
		cp "$TEMP" "$ROOT/Dropbox/$NAME"
		mv "$TEMP" "$ROOT/Enviados/FTPFOLDER/$NAME"
		mv "$FILE" "$ROOT/Enviados/FTPFOLDER/"
	done
	;;
5*)	echo "The web proxy won't let us through"
	;;
*)	echo "The network is down or very slow"
	;;
esac
eddie3000
Level 3
Posts: 136
Joined: Mon Jun 24, 2013 2:11 pm

Re: Should I worry about SSD/HDD wear?

Post by eddie3000 »

Rene, it finally works.

I can't thank you enough for your help.

Your script is much tidier than mine, I have rewritten mine to look more like yours. I had a bit of trouble with the variables (as usual with me), but once that was sorted everything worked fine. Everything is much much tidier now.

Thanks for showing me the mktemp command, very useful indeed.

There's one little problem I've encountered. When I cp the final file to /Dropbox/Dropbox/$final, it is owned by root. Dropbox will not sync that file because it gets "ACCESS DENIED". I fixed it with "chmod user /Dropbox/Dropbox/$final" in the script, user being the actual user name. If I use $USER, I get root. How can I get the name of the logged user from a script running as root? I haven't googled enough about that yet.

Strangely, Dropbox will not sync the files that belong to root, but I can delete the files from nemo in my user account. Isn't that weird?

Thanks again.
rene
Level 20
Posts: 12240
Joined: Sun Mar 27, 2016 6:58 pm

Re: Should I worry about SSD/HDD wear?

Post by rene »

eddie3000 wrote: Fri Mar 01, 2019 7:52 am There's one little problem I've encountered. When I cp the final file to /Dropbox/Dropbox/$final, it is owned by root. Dropbox will not sync that file because it gets "ACCESS DENIED". I fixed it with "chmod user /Dropbox/Dropbox/$final" in the script, user being the actual user name. If I use $USER, I get root. How can I get the name of the logged user from a script running as root?
Ah. I don't in fact use Dropbox but I take it then it's running as your user rather than system-wide / as root. Yes, this is in that case actually a consequence of the mktemp, it creating u+rw files, i.e., with read/write permission for the owning user (root, in its case) and no permissions whatsoever for anyone else. Standard is u+rw,go+r, i.e., read/write for user, read for anyone else (0644 in common octal permission terminology). The best way to solve it is simply by inserting chmod go+r "$TEMP" either directly before or directly after the sox line in my above version of the script.

If you would however like the mp3 user-owned then, yes, chown (as I take it you meant in the above quote, rather than chmod) will do as well. Now that this thing has transitioned to a system-wide service the concept of "logged user" is a bit ambiguous but I take from the description that the directory /Dropbox/Dropbox itself is in fact user-owned. I'd take that as the data source: chown --reference="$ROOT/Dropbox" "$ROOT/Dropbox/$NAME".
eddie3000 wrote: Fri Mar 01, 2019 7:52 am Strangely, Dropbox will not sync the files that belong to root, but I can delete the files from nemo in my user account. Isn't that weird?
Not so much. Dropbox needs to read the mp3 itself and is hence limited by the read permissions as set on the mp3 itself. Deleting it is however not a read or write of the file itself but a write to the directory it resides in, and as such, it's said directory's write permission that governs that specific ability. And that's also to say then that indeed this part tells us that the directory Dropbox/Dropbox has user write-permission, i.e., is very likely user-owned, i.e., can be a reference for the manual chown you may want. But the first solution, the chmod go+r, I would find to be the better variant.
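The permission behaviour described above is easy to verify from a terminal (scratch file, nothing from the thread; the chown --reference variant needs root, so it is only mentioned in a comment):

```shell
#!/bin/bash
# mktemp creates its file with mode 0600 (u+rw): only the owner can read
# it, which is why a user-level Dropbox client can't read a root-made file.
T=$(mktemp)
before=$(stat -c %a "$T")   # 600
chmod go+r "$T"             # the suggested fix: add read for group/other
after=$(stat -c %a "$T")    # 644
echo "$before -> $after"
# Changing the owner instead (needs root):
#   chown --reference=/some/user-owned/dir "$T"
rm -f "$T"
```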