Linux

Setting a Cron Job to Run at the End of the Month

This is a method to run a cron job on the last day of the month, since cron does not support this directly. It works for any month length (28 to 31 days), and you could even run the check every day with no consequence. It uses an expression that takes tomorrow's day-of-month as a number and subtracts today's day-of-month as a number. If the result is less than or equal to 0, the month has rolled over, so the command runs.

For example, if today is the 31st of the month, one day in the future is the 1st, so the first number is "1" and the second number is "31". The expression evaluates to 1 - 31 = -30, which satisfies the -le 0 test, so the command runs.

0 23 28-31 * * [ $(expr $(date +\%d -d '1 days') - $(date +\%d) ) -le 0 ] && echo true
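The same check can be exercised outside of cron. Here is a minimal sketch, assuming GNU date (for the -d option), with the expression wrapped in a function so different dates can be tested:

```shell
#!/bin/bash
# is_last_day prints "true" only when the given date is the last
# day of its month (GNU date assumed for the -d option).
is_last_day() {
  local today tomorrow
  today=$(date +%d -d "$1")
  tomorrow=$(date +%d -d "$1 + 1 day")
  # tomorrow's day-of-month is smaller than today's only when
  # the month rolls over, i.e. today is the last day
  [ "$(expr "$tomorrow" - "$today")" -le 0 ] && echo true
}

is_last_day "2024-01-31"   # prints "true"
is_last_day "2024-02-28"   # prints nothing (2024 is a leap year)
is_last_day "2024-02-29"   # prints "true"
```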

comm – utility for easily comparing files (better than diff)

comm is a very useful GNU coreutils program that is better suited than diff for finding unique and common lines between two files. A good example: one file holds a big list of IP addresses, and another file holds a small list. The addresses in the small list are presumed to be present in the big list, and short of writing a more involved script, comm is a quick and clean way to separate the shared addresses from the unique ones.

With no options, produce three-column output. Column one contains lines unique to file1, column two contains lines unique to file2, and column three contains lines common to both files.

-1 suppress lines unique to file1
-2 suppress lines unique to file2
-3 suppress lines that appear in both files

NOTE: The files MUST be sorted first, or the results will not be accurate.

##################################

contents of file1:
192.168.1.0
192.168.1.1
192.168.1.4
192.168.1.5
192.168.1.6
192.168.1.7
201.44.32.4
201.44.32.5
201.44.32.8
201.44.32.9

contents of file2:
192.168.1.1
192.168.1.5
201.44.32.5
201.44.32.8

#################################

Example (to find only the IP addresses unique to file1, suppress column two, lines unique to file2, and column three, lines common to both)

comm -23 file1 file2

# output
192.168.1.0
192.168.1.4
192.168.1.6
192.168.1.7
201.44.32.4
201.44.32.9
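If the files are not already sorted, bash process substitution can sort them on the fly without creating intermediate files. A small self-contained sketch (the /tmp paths and sample data are just for illustration):

```shell
#!/bin/bash
# build two small unsorted sample files
printf '192.168.1.4\n192.168.1.0\n192.168.1.1\n' > /tmp/file1
printf '192.168.1.1\n' > /tmp/file2

# compare sorted views of both files; -2 and -3 suppress the
# columns for file2-only and common lines, leaving file1-only
comm -23 <(sort /tmp/file1) <(sort /tmp/file2)
```

This prints 192.168.1.0 and 192.168.1.4, the two addresses present only in the first file.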

BASH redirection reference

#!/bin/bash

# redirect stdout to file
cmd > file
# redirect stderr to file
cmd 2> file
# redirect both stderr and stdout to file (bash shorthand)
cmd >& file
# same redirection in portable POSIX form
cmd > file 2>&1
# pipe cmd1's stdout to cmd2's stdin
cmd1 | cmd2
# pipe cmd1's stdout and stderr to cmd2's stdin
cmd1 2>&1 | cmd2
# print cmd1's stdout to screen and also write to file
cmd1 | tee file
# print stdout and stderr to screen while writing to file
cmd1 2>&1 | tee file
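One subtlety worth a sketch: redirections are processed left to right, so 2>&1 duplicates whatever stdout points at the moment it is evaluated. A minimal demonstration (the demo function and /tmp paths are just for illustration):

```shell
#!/bin/bash
# demo emits one line on stdout and one on stderr
demo() { echo out; echo err >&2; }

# stdout is pointed at the file first, then stderr follows it:
# the file receives both lines
demo > /tmp/both.log 2>&1

# stderr is duplicated to the terminal first, then stdout moves:
# the file receives only "out", and "err" stays on the terminal
demo 2>&1 > /tmp/stdout-only.log
```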

Python – writing to a file on a filesystem to test I/O

This is a very useful Python script that lets you test writing to a file on a filesystem. It can be used to verify that there are no I/O read or write failures during an extend or migration of a filesystem.

#!/usr/bin/env python3

import time

testfile = "/dbbackup-01/testfile.txt"

# append a line every 0.2 seconds; an I/O failure during the
# filesystem change will surface here as an exception
for i in range(100):
    print(i)
    with open(testfile, "a") as fo:
        fo.write("test write while the filesystem is being modified\n")
        print(fo.name)
    time.sleep(0.2)

Joining RedHat Servers to Active Directory

Joining Redhat servers to AD domain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

yum install samba3x
yum install winbind

vim /etc/nsswitch.conf

=================================
passwd: files winbind
shadow: files winbind
group: files winbind
=================================

vim /etc/samba/smb.conf

===================================
workgroup = DOMAINNAME
password server = x.x.x.x
realm = DOMAINNAME.COM
security = ads
idmap uid = 10000-20000
idmap gid = 10000-20000
winbind separator = +
template homedir = /home/%D/%U
template shell = /bin/bash
winbind use default domain = false
winbind offline logon = false
====================================

net ads join -U user@domainname.com
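Once the join succeeds, it can be verified with the tools that ship with winbind (run on the joined server; the output depends on your domain):

```
wbinfo -t        # verify the machine trust secret with the domain controller
wbinfo -u        # enumerate domain users through winbind
getent passwd    # confirm nsswitch now resolves winbind accounts
```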

Backup and Archive Nagios

This is a shell script that backs up a Nagios configuration (or any group of files/directories on Linux or UNIX) locally, then syncs the backups to a remote location. It works well when there are two Nagios instances in different locations: the same script can run on both servers to back up, archive, and rsync the files to the remote side, just by changing the three variables at the top. Logging and emailing the results of each job can be added to the script as well.

A best practice that I implemented is to use SSH shared keys for the rsync. Use a non-root account and send the traffic over a trusted VLAN. With key-based authentication, SSH does not prompt for a password each time the script runs, which allows it to be automated through a cron job.
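The key setup is a one-time step. As a sketch (the user and host names below are placeholders), run as the non-root backups account on the source server:

```
ssh-keygen -t ed25519             # accept the default path, empty passphrase
ssh-copy-id backups@dr-server     # install the public key on the remote side
ssh backups@dr-server true        # should return with no password prompt
```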

#!/bin/bash

####################################
###### Local System Variables ######
####################################

# NAGIOS has no leading slash on purpose: tar runs with "-C /",
# so the path is archived relative to the filesystem root
NAGIOS=usr/local/nagios
LOCAL=/local/directory/path
REMOTE=user@server:/local/directory/path

####################################
####### DO NOT CHANGE BELOW ########
####################################

BACKUP=$LOCAL/nagios-backup.tgz
DATE=$(date +"%F-%T")

# exported so the rsync script run via su below can read them
export LOCAL
export REMOTE

### check to see if a current backup file exists ###
if [ -f "$BACKUP" ]; then
  echo "Backup file exists."
  mv "$BACKUP" "$BACKUP-$DATE"
  tar czf "$BACKUP" -C / "$NAGIOS"
else
  echo "Backup file does not exist...creating."
  tar czf "$BACKUP" -C / "$NAGIOS"
  # first run: just create the archive, skip pruning and syncing
  exit
fi

### remove backups older than seven days ###
find "$LOCAL" -type f -mtime +7 -exec rm {} \;

### change ownership of the files to the backups user ###
chown -R backups:backups "$LOCAL"

### switch to the backups user to run the rsync script ###
su backups -c /home/backups/rsync-files.sh

### rsync the files to the DR backup site ###
rsync -avz --delete "$LOCAL/" "$REMOTE"
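Since the script is meant to run unattended, a crontab entry completes the setup. The install path and schedule below are assumptions:

```
# root's crontab ("crontab -e") - run the backup nightly at 01:30
30 1 * * * /usr/local/sbin/nagios-backup.sh
```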

BASH script to Email list of “new” files found

This is a quick-and-dirty script that I made to solve a real world scenario.  I had to find a way to notify a data integration group of any new files that an outside vendor had sent up to our secure FTP SUSE Linux server.  These were batched up into many, many files, all starting with the same few characters.  This made it fairly easy to add a wildcard search, but the other parts deal with the fact that only NEW files needed to be identified, not any existing files that were already processed internally by the data integration group.

#!/bin/bash

OLD=/root/filecheck/old.log
NEW=/root/filecheck/new.log
DIFF=/root/filecheck/diff.log
RCPT=user@user.com

# write an "old" file listing on the first run only

if [ ! -f "$OLD" ]; then
  ls -la /path/to/filetype/filetype* | awk '{print $9}' > "$OLD"
fi

# take a snapshot of the directory now, possibly capturing new files

ls -la /path/to/filetype/filetype* | awk '{print $9}' > "$NEW"
diff "$OLD" "$NEW" > "$DIFF"

# new file listing now becomes "old" for the next run

cat "$NEW" > "$OLD"

# if new files were found, send out a message

if [ -s "$DIFF" ]; then
cat <<EOF | /usr/sbin/sendmail -t
To: $RCPT
From: sftp@sftp-server.company.net
Subject: New Files Found

$(cat "$DIFF")
EOF
fi