...making Linux just a little more fun!

2-Cent Tips

2-cent Tip: Wrapping a script in a timeout shell

Ben Okopnik [ben at linuxgazette.net]


Thu, 4 Jun 2009 08:22:35 -0500

----- Forwarded message from Allan Peda <tl082@yahoo.com> -----

From: Allan Peda <tl082@yahoo.com>
To: tag@lists.linuxgazette.net
Sent: Wednesday, May 20, 2009 11:34:27 AM
Subject: Two Cent tip
I have written previously on other topics for LG, and later for IBM, but it's been a while, and I'd like to share this as a tip rather than a full article (though I'd consider writing one).

This is a bit long for a two-cent tip, but I wanted to share a solution I came up with for long-running processes that sometimes hang for an indefinite period of time. The solution I envisioned was to launch the process with a specified timeout period: instead of running the problematic script directly, I would "wrap" it within a timeout shell function, which is (not coincidentally) called "timeout". This function can signal reluctant processes that their time is up, allowing the calling procedure to catch an error and respond appropriately.

Say the process that sometimes hangs is called "long_data_load"; instead of running it directly from the command line (or a calling script), I would call it using the function defined below.

The unwrapped program might be:

long_data_load arg_one arg_two .... etc

which, for a timeout limit of 10 minutes, would then become:

timeout 10 long_data_load arg_one arg_two .... etc

So, in the example above, if the script failed to complete within ten minutes, it would instead be killed (with a hard SIGKILL), and an error would be returned. I have been using this on a production system for two months, and it has turned out to be very useful in re-attempting network-intensive procedures that sometimes seem never to complete. Source code follows:

#!/bin/bash
#
# Allan Peda
# April 17, 2009
#
# function to call a long running script with a
# user set timeout period
# Script must have the executable bit set
#
 
# Note that "at" rounds down to the nearest minute
# best to use the full path to the executable
function timeout {
   if [[ ${1//[^[:digit:]]} != ${1} ]]; then
      echo "First argument of this function is timeout in minutes." >&2
      return 1
   fi
   declare -i timeout_minutes=${1:-1}
   shift
   # sanity check, can this be run at all?
   if [ ! -x "$1" ]; then
      echo "Error: attempt to locate background executable failed." >&2
      return 2
   fi
   "$@" &
   declare -i bckrnd_pid=$!
   # schedule the kill, capturing the job number from at's output
   declare -i jobspec=$(echo kill -9 $bckrnd_pid |\
                        at now + $timeout_minutes minutes 2>&1 |\
                        perl -ne 's/\D+(\d+)\b.+/$1/ and print')
   # echo kill -9 $bckrnd_pid | at now + $timeout_minutes minutes
   # echo "will kill -9 $bckrnd_pid after $timeout_minutes minutes" >&2
   wait $bckrnd_pid
   declare -i rc=$?
   # clean up the batch job (it may already have fired)
   atrm $jobspec 2>/dev/null
   return $rc
}
 
# test case:
# ask child to sleep for 163 seconds
# putting the process into the background, then reattaching,
# but kill it after 2 minutes, unless it returns
# before then
 
# timeout 2 /bin/sleep 163
# echo "returned $? after $SECONDS seconds."

----- End forwarded message -----

[ ... ]

[ Thread continues here (1 message/3.45kB) ]


2-cent Tip - Poor Man's Computer Books

Ben Okopnik [ben at linuxgazette.net]


Thu, 4 Jun 2009 08:10:13 -0500

----- Forwarded message from Paul Sands <paul.sands123@yahoo.co.uk> -----

Date: Wed, 20 May 2009 14:43:43 +0000 (GMT)
From: Paul Sands <paul.sands123@yahoo.co.uk>
Subject: 2-cent Tip - Poor Man's Computer Books
To: editor@linuxgazette.net
If, like me, you can't really afford expensive computer books, find a book in your bookshop with good examples, download the example code, and work through the examples. Use a reference such as the W3C CSS technical recommendation. My favourite is SitePoint's CSS Anthology.

----- End forwarded message -----

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * http://LinuxGazette.NET *

[ Thread continues here (3 messages/2.59kB) ]


2-cent Tip: Checking the amount of swapped out memory owned by a process

Mulyadi Santosa [mulyadi.santosa at gmail.com]


Sat, 6 Jun 2009 01:15:58 +0700

Hi all

Recent Linux kernel versions allow us to see how much memory owned by a process is swapped out. All you need to do is find the PID of the process and grab the Swap lines of the related /proc entry:

$ cat /proc/<pid of your process>/smaps | grep Swap
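That prints one Swap line per memory mapping, something like this (the values here are only illustrative):

Swap:                  0 kB
Swap:                 84 kB
Swap:                 12 kB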

To sum up all of these per-process Swap entries, pipe the output through a short awk script:

$ cat /proc/<pid of your process>/smaps | grep Swap | awk '{ SUM += $2 } END { print SUM }'

The unit is kilobytes.
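To survey the whole system at once, you can loop over every PID in /proc (a sketch; reading other users' smaps files generally requires root, and the loop simply skips processes it cannot read or that exit mid-scan):

#!/bin/sh
# For each numeric /proc entry, sum its Swap: lines and print
# "PID  swapped-kB  name" for any process with swapped-out pages.
for dir in /proc/[0-9]*; do
   pid=${dir#/proc/}
   swap=`awk '/^Swap:/ { sum += $2 } END { print sum + 0 }' "$dir/smaps" 2>/dev/null`
   if [ -n "$swap" ] && [ "$swap" -gt 0 ]; then
      name=`awk '/^Name:/ { print $2 }' "$dir/status" 2>/dev/null`
      echo "$pid $swap kB $name"
   fi
done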

PS: This is confirmed on Fedora 9, using Linux kernel version 2.6.27.21-78.2.41.fc9.i686.

regards,

Mulyadi.

[ Thread continues here (4 messages/4.29kB) ]


2-cent Tip: ext2 fragmentation

Paul Sephton [paul at inet.co.za]


Thu, 04 Jun 2009 01:52:01 +0200

Hi, all

Just thought I'd share this 2c tip with you (now the mailing list is up - yay!).

I was reading a forum where a bunch of fellows were griping about e2fs lacking a defragmentation tool. Now, we all know that fragmentation is generally quite minimal with ext2/ext3, since the file system does some fancy work in deciding where to write new files. The problem, though, is that a file which grows over time is quite likely to fragment, particularly if the file system is already quite full.

There was a whole lot of griping, and lots of "hey, you don't need defragging, it's ext3 and looks after itself, wait for ext4", etc. Not a lot of happy campers.

Of course, Ted Ts'o opened the can of worms by writing 'filefrag', which now lets people actually see the amount of fragmentation. If not for this, probably no-one would have been complaining in the first place!

I decided to test a little theory, based on the fact that when the file system writes a new file whose size it already knows, it will do its utmost to make the new file contiguous. This gives us a way of defragging the files in a directory, like so:

#!/bin/sh
# Retrieve a list of fragmented files, as #fragments:filename
flist() {
  for i in *; do
    if [ -f "$i" ]; then
      ff=`filefrag "$i"`
      fn=`echo "$ff" | cut -f1 -d':'`
      fs=`echo "$ff" | cut -f2 -d':' | cut -f2 -d' '`
      if [ -f "$fn" -a "$fs" -gt 1 ]; then echo "$fs:$fn"; fi
    fi
  done
}
 
# Sort the list numerically, descending (most fragmented first)
flist | sort -n -r |
(
# for each file
  while read line; do
    fs=`echo "$line" | cut -f 1 -d':'`
    fn=`echo "$line" | cut -f 2 -d':'`
# copy the file up to 10 times, preserving permissions
    j=0
    while [ -f "$fn" -a $j -lt 10 ]; do
      j=$(( j + 1 ))
 
      TMP=$$.tmp.$j
      if ! cp -p "$fn" "$TMP"; then
        echo "copy failed [$fn]"
        j=10
      else
# test the new temp file's fragmentation, and if less than the
# original, move the temp file over the original
        ns=`filefrag "$TMP" | cut -f2 -d':' | cut -f2 -d' '`
        if [ "$ns" -lt "$fs" ]; then
          mv "$TMP" "$fn"
          fs=$ns
          if [ "$ns" -lt 2 ]; then j=10; fi
        fi
      fi
    done
    j=0
# clean up any leftover temporary files; scan all ten slots, since
# an intermediate temp may have been moved over the original
    while [ $j -lt 10 ]; do
      j=$(( j + 1 ))
 
      TMP=$$.tmp.$j
      if [ -f "$TMP" ]; then
        rm "$TMP"
      fi
    done
  done
)
# report fragmentation
for i in *; do if [ -f "$i" ]; then filefrag "$i"; fi; done

Basically, it uses the 'filefrag' utility and 'sort' to determine which files are fragmented the most. Then, starting with the most fragmented file, it copies each file up to 10 times; whenever a copy is less fragmented than the original, the copy is moved over the original. Given ext2's continuous effort to create new files unfragmented, there's a good chance that this process leaves you with a directory of completely defragmented files.
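A run might look something like this (the directory and script name are hypothetical); the report at the end should show mostly single-extent files:

$ cd /home/paul/data
$ sh ~/bin/defrag.sh
bigfile.dat: 1 extent found
backup.tar: 1 extent found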

[ ... ]

[ Thread continues here (1 message/5.63kB) ]



Talkback: Discuss this article with The Answer Gang

Copyright © 2009. Released under the Open Publication License unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 164 of Linux Gazette, July 2009
