Secure wiping of tapes on Linux

At one client site, we've recently moved from tape to disk for our offline backup storage medium.  We debated what to do with the old tape loader and tapes, and concluded that we would never go back and so should get rid of them entirely.  I was given the task of working out how to securely wipe the old tapes.

My first choice was to try 'wipe', the usual Linux utility for wiping files and hard disks on a live Linux system.  (I generally use Darik's Boot and Nuke, a.k.a. DBAN, as an offline wiper.)  To my surprise, wipe did not work at all with /dev/st0 as its target device.  After some brief searching of Google, I concluded that I was unlikely to find a pre-existing utility, so I ended up hacking up a quick little script:

#!/bin/sh
# Quick-and-dirty multi-pass wiper for a 7-slot SCSI tape autoloader.
DEVICE=/dev/st0
for i in $(seq 1 7) ; do
        # Load the tape from slot $i into the drive.
        mtx load $i
        # Four passes of pseudo-random data per tape.
        for j in $(seq 1 4); do
                # Grab 1 KB of pseudo-random data...
                dd if=/dev/urandom of=random.block bs=1k count=1
                # ...and repeat it 10240 times into a ~10 MB buffer file,
                # so the shell loop isn't the bottleneck during the wipe.
                for k in $(seq 1 10240); do
                        cat random.block
                done > big.random.block
                # Stream the buffer to tape until dd hits end-of-tape.
                # Once dd exits, cat dies with SIGPIPE (non-zero status),
                # which ends the loop; a plain 'while true' here would
                # keep respawning cat and the pipeline would never finish.
                while cat big.random.block; do :; done | dd of=$DEVICE bs=1024k
        done
        # Finish with a normal erase, then return the tape to its slot.
        mt -f $DEVICE erase
        mtx unload $i
done
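As an aside, the inner write pipeline can be sanity-checked without a tape drive by pointing dd at an ordinary file.  This is just a sketch: fake.tape is a made-up name, and since a regular file never fills up the way a tape does, a count= limit stands in for end-of-tape (so the single dd replaces the while/cat pipeline):

#!/bin/sh
# Dry-run of the wipe pipeline against a regular file instead of /dev/st0.
dd if=/dev/urandom of=random.block bs=1k count=1 2>/dev/null
for k in $(seq 1 10240); do cat random.block; done > big.random.block
# count=5 caps the "wipe" at 5 MB; on a real tape, end-of-tape stops dd.
dd if=big.random.block of=fake.tape bs=1024k count=5 2>/dev/null
# Spot-check: the first 1 KB written should match the original random block.
head -c 1024 fake.tape | cmp - random.block

If cmp prints nothing and exits zero, the repeated block made it through intact.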

A rundown of some salient points:

  • We have a 7-slot SCSI tape autoloader, hence the $(seq 1 7); they're loaded and unloaded via mtx.
  • I wanted multi-pass wiping using random data, but didn't want to keep pulling more and more pseudo-random data from /dev/urandom.  The system I was using had only 30 bytes of entropy available in /dev/random, so my data block was probably not highly random anyway.
  • I started by writing just the same 1 KB file of random data over and over, but found that the shell loop cat'ing the file was a performance bottleneck, so the innermost loop repeats the same random block 10240 times into a bigger file to make the wipe more efficient.
  • At the end of the 4 passes of random data, we do a normal erase.  I assume this writes zeroes to the tape, but I'm not sure.  (The script is still working through our 7 tapes at about 25 GB/hour, so it will be several days before it finishes.)
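On the repeat-the-block point, one variant I didn't try (an untested sketch) would be to build the buffer by repeatedly doubling the file, which needs only ten cats instead of 10240:

#!/bin/sh
# Build a repeating buffer from 1 KB of urandom data by doubling the file:
# ten doublings turn 1 KB into 1 MB; a few more reach any size you like.
dd if=/dev/urandom of=big.random.block bs=1k count=1 2>/dev/null
for n in $(seq 1 10); do
        cat big.random.block big.random.block > tmp.block
        mv tmp.block big.random.block
done
# big.random.block is now 1 KB * 2^10 = 1 MB of repeating data.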

Hope this helps someone.  I was surprised that I couldn't find a better alternative easily.  Suggestions for improvement gratefully accepted.
