Script for emailing DFS Replication Health Reports


DFS Replication is a great way to synchronize data for DR purposes, but there is no built-in scheduled reporting mechanism.  Well, here is a script I wrote that runs the dfsradmin reports, attaches each report, and sends links to each report for review.  Very helpful, considering I was logging in each day and running commands to check the backlog.  Now I can just open these reports every day and all the information is right there.  Once the script is in place, simply create a scheduled task to run it at whatever interval is needed to receive these reports.


@echo off

set REPORTS=\\*******\*******
set FROM="DFS Replication <******@*******>"
set TO="******* <*******@*******>"


:: This cleans up reports older than 30 days to conserve space.
FORFILES /p E:\****************** /m *.* /d -30 /c "cmd /c del @file"

:: add the date/time to the report name and to the title of the email
:: (assumes the default US locale format, e.g. "Fri 09/24/2010" and "14:30:05.12")
for /F "tokens=1,2,3,4 delims=/, " %%i in ("%DATE%") Do SET DDMMYYYY=%%j-%%k-%%l
for /F "tokens=1,2 delims=:, " %%i in ("%TIME%") Do Set HHMM=%%i%%j

set RG_Report=%REPORTS%\folder1-%DDMMYYYY%-%HHMM%.html

:: define the report options as specified in the dfsradmin.exe utility
dfsradmin health new /rgname:folder1 /refmemname:server1 /ReportName:"%RG_Report%" /fscount:true

:: overwrite the report file name to a temp file (used below for the attachment)
echo folder1 %RG_Report% > %TEMP%\healthMessageBodyRG.txt

:: include the links to the reports in the body of the message
echo folder1 %RG_Report% > %TEMP%\healthMessageBody.txt

:: pull the report path back out of the temp file so it can be attached
for /F "tokens=2 delims= " %%i in (%TEMP%\healthMessageBodyRG.txt) Do SET FILESRG=%%i

:: email the links as well as the attachments using sendEmail.exe
:: (-s takes the SMTP server name, redacted here like the other values)
sendEmail.exe -f %FROM% -t %TO% -u "DFS Replication Health Reports %DDMMYYYY%" -o message-file=%TEMP%\healthMessageBody.txt -s ******* -a %FILESRG%
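To automate this as mentioned above, one option is a daily scheduled task. Here is a sketch using schtasks; the task name, script path, and start time are all placeholder assumptions, not values from my environment:

```bat
:: register the report script to run every morning (path/time are examples)
schtasks /Create /TN "DFS Health Report" /TR "E:\Scripts\dfs-health-report.bat" /SC DAILY /ST 06:00
```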

mRemote – one stop shop for server mgmt


If you haven’t used mRemote, I strongly recommend you do.  This is a great application that is a “one stop shop” remote server and device management tool.  You can manage anything and everything that uses these protocols (RDP, VNC, ICA, SSH, Telnet, RAW, Rlogin, and HTTP/S) – so basically any UNIX, Linux, or Windows server, or any network switch or device that you are remotely administering, all through one application.  Connections can easily be duplicated and sorted, and organized into folders.

All in all, a great tool for managing a mixed environment of OS and devices!

Here’s a link to the product overview:

Target To Start Selling The iPad On October 3rd, Discounts Available (via TechCrunch)


Target To Start Selling The iPad On October 3rd, Discounts Available Target will soon be able to fulfill all your iPad needs. October 3rd is the date that the Apple iPad should hit Target stores throughout the US. Best of all, Target credit card holders can get the iPad for a bit cheaper. Target's retail plan includes all six models of the iPad along with a full range of accessories and add-ons. The retailer will honor the suggested manufacturer price starting at $499 for the 16GB WiFi version. … Read More

via TechCrunch

BASH script to Email list of “new” files found


This is a quick-and-dirty script that I made to solve a real world scenario.  I had to find a way to notify a data integration group of any new files that an outside vendor had sent up to our secure FTP SUSE Linux server.  These were batched up into many, many files, all starting with the same few characters.  This made it fairly easy to add a wildcard search, but the other parts deal with the fact that only NEW files needed to be identified, not any existing files that were already processed internally by the data integration group.



#!/bin/bash

# state files for the directory snapshots (locations are an assumption)
OLD=/var/tmp/filetype.old
NEW=/var/tmp/filetype.new
DIFF=/var/tmp/filetype.diff

# write an "old" file listing if this is the first run
if [ ! -f "$OLD" ]; then
    ls -la /path/to/filetype/filetype* | awk '{print $9}' > "$OLD"
fi

# take a snapshot of the directory now, possibly capturing new files
ls -la /path/to/filetype/filetype* | awk '{print $9}' > "$NEW"
diff "$OLD" "$NEW" > "$DIFF"

# new file listing now becomes "old" for the next run
cat "$NEW" > "$OLD"

# if new files are found, send out a message (recipient redacted)
if [ -s "$DIFF" ]; then
cat <<EOF | /usr/sbin/sendmail -t
To: *******@*******
Subject: New Files Found

`cat "$DIFF"`
EOF
fi
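The core of the detection above is just a diff of two directory listings. Here is a minimal, self-contained sketch of that technique; the file names and paths are invented for illustration:

```shell
# two snapshot listings; the second has one newly arrived file
printf 'fileA\nfileB\n'        > /tmp/listing.old
printf 'fileA\nfileB\nfileC\n' > /tmp/listing.new

# diff marks added lines with '>'; strip the marker to keep just the names
NEWFILES=$(diff /tmp/listing.old /tmp/listing.new | grep '^>' | sed 's/^> //')
echo "$NEWFILES"   # prints: fileC
```

In the real script the second listing is regenerated on each run, so any file that appeared since the previous run shows up as an added line.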

VMware Fault Tolerance (FT) vs. Microsoft Clustering (MSCS)


Having had much experience with various clustering technologies, I thought I’d pass on some experience with what I have found to be a great solution – that being VMware’s Fault Tolerance solution.  This is a review of how FT stacks up against traditional MSCS, whether with SQL servers, simple file servers, or any other custom application server that demands high availability. Obviously, for these purposes, we are talking about a Windows-based operating system. VMware FT will work with any and every operating system supported by VMware, whether it be RedHat Linux, Solaris, etc. (and usually even the ones that aren’t). However, this is more of a case study on the pros of FT for a Microsoft server that demands high availability (HA).

Now, there are all sorts of combinations of MSCS/VMware clustering solutions that can be mixed and matched together, but this is focusing on TRULY Fault-Tolerant solutions. In other words, downtime is ideally measured in milliseconds, not seconds or minutes.

That being said, it’s key to point out that the virtualization tide is essentially a tidal wave by now. I’ve been working with today’s modernized versions of virtualization (mostly with VMware) since very early in the game (12/04), and I’ve seen firsthand how quickly the benefits show themselves, and how quickly environments become virtualized.

So in a mixed environment, with multiple VMware ESX servers, and some physical Windows SQL/File/App servers that are truly business critical, why use VMware FT? Why not just stick with the old MSCS standby? Well, here are the key reasons:

1.) Ease of configuration. Everyone who has set up a MSCS cluster can attest to the fact that it can be very finicky. Also, failovers are not always, how shall I say, as quick as they should be. If resource groups don’t have the proper dependencies, or if the internal heartbeat isn’t quite working, failover will not happen.
2.) Leverages existing hardware. The assumption here is that in a mixed environment, ESX servers are available – and presumably using shared storage, and have HA enabled. So, you aren’t using hardware SOLELY dedicated to clustering – it’s already purchased and in use.
3.) Chance of differing configurations over time is eliminated. Configuration drift is simply impossible with FT, since the primary and secondary are the same virtual machine kept in lockstep. With multiple people possibly having access to a traditional two-node MSCS cluster, there is always potential for rogue changes.