Scripts
Setting a Cron Job to Run at the End of the Month
This is a method to run a cron job on the last day of the month, which cron does not support directly. It works for any month length (28 to 31 days), and it is harmless to let the check evaluate every day in that range. The expression takes tomorrow's day of the month as a number and subtracts today's day of the month; if the result is less than or equal to 0, today is the last day of the month and the command runs.
For example, if today is the 31st, tomorrow is the 1st of the next month, so the expression evaluates 1 minus 31, which is -30. That satisfies the -le 0 test, so the command runs. On any other day the result is 1, and the command is skipped.
0 23 28-31 * * [ $(expr $(date +\%d -d '1 days') - $(date +\%d) ) -le 0 ] && echo true
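If it helps to see the test outside of cron, here is a minimal sketch of the same check as a standalone script. The echo is only a placeholder for whatever command you actually want to run, and the % signs need backslash-escaping only inside a crontab entry, not in a script:

#!/bin/bash
# Last-day-of-month check: tomorrow's day-of-month minus today's day-of-month
# is negative only when tomorrow rolls over to the 1st.
if [ $(expr $(date +%d -d '1 day') - $(date +%d)) -le 0 ]; then
    echo "today is the last day of the month"   # replace with the real command
fi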
BASH redirection reference
#!/bin/bash
# redirect stdout to file
cmd > file
# redirect stderr to file
cmd 2> file
# redirect both stderr and stdout to file
cmd >& file
# pipe cmd1's stdout to cmd2's stdin
cmd1 | cmd2
# pipe cmd1's stdout and stderr to cmd2's stdin
cmd1 2>&1 | cmd2
# print cmd1's stdout to screen and also write to file
cmd1 | tee file
# print stdout and stderr to screen while writing to file
cmd1 2>&1 | tee file
Python – writing to a file on a filesystem to test I/O
This is a very useful Python script that lets you test writing to a file on a filesystem. It can be used to verify that there are no I/O read or write failures during an extend or migration of a filesystem.
#!/usr/bin/python
import time

# file on the filesystem being extended or migrated
testfile = "/dbbackup-01/testfile.txt"

# append a line every 0.2 seconds; any read/write failure will surface immediately
for i in range(100):
    print i
    fo = open(testfile, "a")
    fo.write('test write while the filesystem is being modified\n')
    print fo.name
    fo.close()
    time.sleep(0.2)
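As a usage sketch, you might run the script in one terminal while growing the filesystem in another. The script path, logical volume name, and size below are hypothetical, and an ext4 filesystem on LVM is assumed:

# terminal 1: start the writer against the filesystem being grown
python /root/scripts/write-test.py

# terminal 2: extend the logical volume, then grow the filesystem online
lvextend -L +10G /dev/vg_data/lv_dbbackup
resize2fs /dev/vg_data/lv_dbbackup

If the writer throws an exception or stalls during the resize, you have found an I/O problem before putting real data at risk.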
Backup and Archive Nagios
This is a shell script that can be used to back up a Nagios configuration (or any group of files/directories on Linux or UNIX) locally and to sync the backups to a remote location. It works well when there are two Nagios instances in different locations: the script can run on both servers to back up and archive locally and then rsync the files to the remote side, changing only the three variables at the top of the script. Logging and emailing the results of each job can be added to the script as well.
A best practice I implemented is to use SSH shared keys for the rsync. Use a non-root account and send the traffic over a trusted VLAN. With key-based authentication, SSH does not prompt for a password each time the script runs, which matters because the script should be automated through a cron job.
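As a rough sketch of that setup (the user name, host, and path are examples, not taken from the script below), generate a key pair for the non-root account and copy the public key to the remote server:

# run once as the backups user on the source server
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
ssh-copy-id -i ~/.ssh/id_rsa.pub backups@remote-server

# confirm that rsync over SSH no longer prompts for a password
rsync -avz --dry-run /local/directory/path/ backups@remote-server:/local/directory/path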
#!/bin/bash
####################################
###### Local System Variables ######
####################################
# NAGIOS is intentionally relative (no leading /) because tar is run with -C /
NAGIOS=usr/local/nagios
LOCAL=/local/directory/path
REMOTE=user@server:/local/directory/path

####################################
####### DO NOT CHANGE BELOW ########
####################################
BACKUP=$LOCAL/nagios-backup.tgz
DATE=`date +"%F-%T"`
export LOCAL
export REMOTE

### check to see if current backup file exists ###
if [ -f $BACKUP ]
then
    echo "Backup file exists."
    mv $BACKUP $BACKUP-$DATE
    tar czf $BACKUP -C / $NAGIOS
else
    echo "Backup file does not exist...creating."
    tar czf $BACKUP -C / $NAGIOS
    exit
fi

### remove files older than seven days ###
find $LOCAL -type f -mtime +7 -exec rm {} \;

### change the permissions of the files to the backups user ###
chown -R backups:backups $LOCAL

### change to backups user to run the rsync script ###
su backups -c /home/backups/rsync-files.sh

### rsync the files to the DR backup site ###
rsync -avz --delete $LOCAL/ $REMOTE
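To automate it, a crontab entry along these lines would run the backup nightly; the script path and schedule are only examples:

# run the Nagios backup every night at 02:00
0 2 * * * /usr/local/scripts/nagios-backup.sh > /dev/null 2>&1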
Script for emailing DFS Replication Health Reports
DFS Replication is a great way to synchronize data for DR purposes, but there is no built-in scheduled reporting mechanism. Here is a script I wrote that runs the dfsradmin health reports, attaches each report, and sends links to each report for review. Very helpful, considering I had been logging in each day and running commands to check the backlog; now I can just open these reports every day and all the information is right there. Once the script is in place, simply create a scheduled task to run it at whatever interval you need to receive the reports.
@echo off
set CURRDATE=%TEMP%\CURRDATE.TMP
set CURRTIME=%TEMP%\CURRTIME.TMP
set REPORTS=\\*******\*******
set FROM="DFS Replication <******@*******>"
set TO="******* <*******@*******>"

DATE /T > %CURRDATE%
TIME /T > %CURRTIME%

:: clean up reports older than 30 days to conserve space
FORFILES /p E:\****************** /m *.* /d -30 /c "cmd /c del @FILE"

:: add the date/time to the report name and to the title of the email
for /F "tokens=1,2,3,4 delims=/, " %%i in (%CURRDATE%) Do SET DDMMYYYY=%%j-%%k-%%l
for /F "tokens=1,2,3 delims=:, " %%i in (%CURRTIME%) Do Set HHMM=%%i%%j%%k
set RG_Report=%REPORTS%\folder1-%DDMMYYYY%-%HHMM%.html

:: generate the report with the options specified in the dfsradmin.exe utility
dfsradmin health new /rgname:folder1 /refmemname:server1 /ReportName:%RG_Report% /fscount:true

:: write the report file name to temp
echo folder1 %RG_Report% > %TEMP%\healthMessageBodyRG.txt

:: include the links to the reports in the body of the message
echo folder1 %RG_Report% > %TEMP%\healthMessageBody.txt

:: pull the report path back out so it can be sent as an attachment
for /F "tokens=2 delims= " %%i in (%TEMP%\healthMessageBodyRG.txt) Do SET FILESRG=%%i

:: email the links as well as the attachments using sendEmail.exe
sendEmail.exe -f %FROM% -t %TO% -u "DFS Replication Health Reports %DDMMYYYY%" -o message-file=%TEMP%\healthMessageBody.txt -s smtpserver.domain.com -a %FILESRG%
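If you prefer to create the scheduled task from the command line rather than the Task Scheduler GUI, something like the following would run the script daily; the task name, script path, and start time are examples only:

schtasks /create /tn "DFS Health Reports" /tr "E:\scripts\dfs-health-reports.cmd" /sc daily /st 06:00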
BASH script to Email list of “new” files found
This is a quick-and-dirty script that I made to solve a real-world scenario. I had to find a way to notify a data integration group of any new files that an outside vendor had uploaded to our secure FTP SUSE Linux server. The files arrived in large batches, all starting with the same few characters, which made a wildcard search easy; the rest of the script deals with the fact that only NEW files needed to be identified, not existing files that the data integration group had already processed.
#!/bin/bash
OLD=/root/filecheck/old.log
NEW=/root/filecheck/new.log
DIFF=/root/filecheck/diff.log
RCPT=user@user.com

# write an "old" file listing if this is the first run
if [ ! -f $OLD ]; then
    ls -la /path/to/filetype/filetype* | awk '{print $9}' > $OLD
fi

# take a snapshot of the directory now, possibly capturing new files
ls -la /path/to/filetype/filetype* | awk '{print $9}' > $NEW

# overwrite (not append) so stale results do not retrigger the email
diff "$OLD" "$NEW" > "$DIFF"

# new file listing now becomes "old" for next run
cat $NEW > $OLD

# if new files are found, send out a message listing them
if [ -s $DIFF ]; then
cat <<EOF | /usr/sbin/sendmail -t
To: $RCPT
From: sftp@sftp-server.company.net
Subject: New Files Found

`cat $DIFF`
.
EOF
fi
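To have the check run unattended, a cron entry like this (the script path and interval are examples) would look for new files every 15 minutes:

*/15 * * * * /root/filecheck/check-new-files.sh > /dev/null 2>&1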