About Me

Working as a Technical Lead at CollabNet Software Private Limited.

Thursday 25 September, 2008

PERL script for extracting log file based on time

The code below extracts entries from a huge log file for the time frame passed in.


#!/usr/bin/perl

use strict;

my $file;
my $times = 0;
my $needed_response_time = 0;
my $start;
my $end;

sub Usage
{
    my ($msg) = @_;
    if ($msg)
    {
        print STDERR "$msg\n";
    }
    print STDERR <<"EOHELP";
Usage: split_access_log_by_time

options:

-t - :: time interval for which the log is needed
-g
EOHELP
    exit 1;
}

PERL script to combine two csv files

The script below combines two CSV files based on their first column. It assumes both files have the project name as the first column.



#!/usr/bin/perl -w

$FILE1 = $ARGV[0];
$FILE2 = $ARGV[1];
$PROJECTFILE = $ARGV[2];

my %File1Map = ();
my %File2Map = ();

# One-argument open() opens the file named in the scalar with the same name as the handle.
open(FILE1) or die("Could not open $FILE1 file.");
foreach $line (<FILE1>) {
    # Keep everything after the project name as a single value.
    ($Project, $values) = split(',', $line, 2);
    my $tempval = $values;
    $tempval =~ s/\r|\n//g;
    $File1Map{"$Project"} = "$tempval";
}

close(FILE1);

open(FILE2) or die("Could not open $FILE2 file.");
foreach $subline (<FILE2>) {
    ($proj, $izcount) = split(',', $subline);
    $File2Map{"$proj"} = "$izcount";
}

close(FILE2);

open(PROJECTFILE) or die("Could not open $PROJECTFILE file.");
foreach $projname (<PROJECTFILE>) {
    ($name, $dummy) = split('\n', $projname);
    my $existingValue = $File1Map{"$name"};
    my $newValue = $File2Map{"$name"};

    # Pad with empty fields when a project is missing from either file.
    if (! defined $existingValue)
    {
        $existingValue = ",,,,,,,,";
    }
    if (! defined $newValue)
    {
        $newValue = "\n";
    }

    print "" . $name . "," . $existingValue . "," . $newValue;
}

close(PROJECTFILE);
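A similar first-column merge can also be sketched with coreutils join (a minimal illustration with made-up files, assuming both are already sorted on the project column; `-o auto` requires GNU join):

```shell
# Made-up sample files, already sorted on the first (project) column.
printf 'alpha,10,2\nbeta,5,1\n' > file1.csv
printf 'alpha,7\ngamma,3\n' > file2.csv

# -a1/-a2 keep unpaired lines from either file;
# -e '' with -o auto fills the missing fields with empty strings.
merged=$(join -t, -a1 -a2 -e '' -o auto file1.csv file2.csv)
echo "$merged"
```

Unlike the Perl script, join needs sorted input, but it avoids loading either file into memory.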

Thursday 18 September, 2008

AWK IF ELSE block



nawk -F"," ' BEGIN{OFS=","} {if (NR==1) {
print "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx"
} else {
print $1,$2,$3,$4,"S",1
print $1,$2,$3,$5,"S",2
print $1,$2,$3,$6,"D",1
print $1,$2,$3,$7,"D",2
}}' in.txt
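As a quick illustration (nawk is the Solaris awk; plain awk behaves the same here), running the block over one made-up 7-field CSV record shows how each input line fans out into four output lines:

```shell
# Made-up input: a header line followed by one 7-field record.
printf 'h1,h2,h3,h4,h5,h6,h7\na,b,c,d,e,f,g\n' > in.txt

fanout=$(awk -F"," 'BEGIN{OFS=","} {if (NR==1) {
  print "HEADER"
} else {
  print $1,$2,$3,$4,"S",1
  print $1,$2,$3,$5,"S",2
  print $1,$2,$3,$6,"D",1
  print $1,$2,$3,$7,"D",2
}}' in.txt)
echo "$fanout"
```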

Extract total number of request hits

Below are the commands to count the number of times each project was accessed via its URL. The information is retrieved from the log file where all the requests are logged.



cut -d ' ' -f 7 ssl_request_log |grep '/sf/'|cut -d '/' -f 3|sort|uniq -c

cut -d ' ' -f 7 ssl_request_log |grep '/sf/'|sort|uniq -c



Sample log file

[31/Aug/2008:05:50:08 +0200] 0.0.0.0 SSLv3 RC4-MD5 "GET /asf-help/RoboHelp_CSH.js HTTP/1.1" 7677
[31/Aug/2008:05:50:08 +0200] 0.0.0.0 SSLv3 RC4-MD5 "GET /wsf-images/masthead/logo.gif HTTP/1.1" 2430
[31/Aug/2008:05:50:09 +0200] 0.0.0.0 SSLv3 RC4-MD5 "GET /fsf-images/masthead/dropdown.gif HTTP/1.1" 49
[31/Aug/2008:05:50:09 +0200] 0.0.0.0 SSLv3 RC4-MD5 "GET /nsf-images/masthead/help.gif HTTP/1.1" 402
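For instance, against made-up log lines that do contain /sf/ project URLs (projA and projB are hypothetical names), the first pipeline counts hits per project; field 7 of the log line is the request path:

```shell
# Made-up requests for two hypothetical projects, projA and projB.
printf '%s\n' \
  '[31/Aug/2008:05:50:08 +0200] 0.0.0.0 SSLv3 RC4-MD5 "GET /sf/projA/home HTTP/1.1" 100' \
  '[31/Aug/2008:05:50:09 +0200] 0.0.0.0 SSLv3 RC4-MD5 "GET /sf/projA/docs HTTP/1.1" 200' \
  '[31/Aug/2008:05:50:10 +0200] 0.0.0.0 SSLv3 RC4-MD5 "GET /sf/projB/home HTTP/1.1" 300' \
  > ssl_request_log

# uniq -c pads counts with spaces; strip them for a stable comparison.
hits=$(cut -d ' ' -f 7 ssl_request_log | grep '/sf/' | cut -d '/' -f 3 | sort | uniq -c | awk '{print $1, $2}')
echo "$hits"
```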

Wednesday 10 September, 2008

Script to copy contents of file between two specific line patterns

Here is a script to copy the contents of a file between two line patterns.

For example:
To copy the contents of a file between a particular start date and end date.



if [ $# -ne 3 ]; then
    echo 1>&2 "Usage: $0 file_to_be_parsed starting_pattern ending_pattern"
    exit 127
fi

startline=`grep -ni -m1 "$2" "$1" | awk -F ":" '{print $1}'`
endline=`grep -ni "$3" "$1" | tail -n1 | awk -F ":" '{print $1}'`

totallineno=`expr $endline - $startline + 1`
echo "Extracting contents between $2 and $3"
echo "No of lines found: $totallineno"
echo "Copying contents to temp file: temp_file"
head -n "$endline" "$1" | tail -n "$totallineno" > temp_file
echo "Done copying."
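The same extraction can also be sketched in one line with sed's address ranges (a made-up sample file; note that sed stops at the first match of the ending pattern, while the script above uses the last one):

```shell
# Made-up file with marker lines around the wanted section.
printf 'before\nSTART\nline1\nline2\nEND\nafter\n' > sample.txt

section=$(sed -n '/START/,/END/p' sample.txt)
echo "$section"
```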

Convert your DVD to other suitable formats

Need to convert your DVD?
Below is the command to do it.


for i in 3 4 5   # loop over the DVD chapters
do
mencoder dvd://1 -chapter $i-$i -o chapter$i.avi -oac mp3lame -ovc lavc -vf scale=320:240 -lavcopts vmax_b_frames=0:acodec=mp3:abitrate=128:vbitrate=256 -ofps 30;
done


Looping through the chapters this way makes the job easy.

AWK to calculate sum, min, max of particular field in txt

Recently, I came across a situation where I needed to calculate the sum, min, and max of a particular field using awk.
Below is the snippet I've used to calculate the field values.


awk 'BEGIN{
    FS=OFS=",";
    max = -999999999; min = 9999999999;
    print "----------------------------"
}
{
    sub(" Total requests: ","",$2);
    val = int($2);
    lines+=1; sum+=val;
    if ( val > max ) max = val;
    if ( val < min ) min = val;
}
END{
    print "| Total No of Lines: "lines " Sum: " sum " Average: " sum/lines " Max: " max " Min: " min
}' with_out.txt


Text File Contents for parsing:


09/Sep/2008:03:57:09, Total requests: 36, New requests: 34
09/Sep/2008:03:57:10, Total requests: 61, New requests: 58
09/Sep/2008:03:57:11, Total requests: 40, New requests: 38
09/Sep/2008:03:57:12, Total requests: 101, New requests: 97
09/Sep/2008:03:57:13, Total requests: 33, New requests: 35
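Running the snippet over the sample file above yields 5 lines, sum 271, average 54.2, max 101, min 33. A condensed check of the same logic:

```shell
# The sample data from the post.
cat > with_out.txt <<'EOF'
09/Sep/2008:03:57:09, Total requests: 36, New requests: 34
09/Sep/2008:03:57:10, Total requests: 61, New requests: 58
09/Sep/2008:03:57:11, Total requests: 40, New requests: 38
09/Sep/2008:03:57:12, Total requests: 101, New requests: 97
09/Sep/2008:03:57:13, Total requests: 33, New requests: 35
EOF

# Same accumulation as the post; print only the final numbers (OFS is ",").
stats=$(awk 'BEGIN{FS=OFS=","; max=-999999999; min=9999999999}
{ sub(" Total requests: ","",$2); val=int($2); lines+=1; sum+=val
  if (val > max) max=val; if (val < min) min=val }
END{ print lines, sum, sum/lines, max, min }' with_out.txt)
echo "$stats"
```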