Wonders of ImageMagick

Monday, January 18, 2010
ImageMagick is a wonderful tool for manipulating images. I had not used it much before.
I had a few pictures from a colleague's wedding party. These were the originals taken with a digital camera, and they were huge: 71 photos taking up 150MB of disk space. So, today I wanted to reduce the storage size while keeping the quality of the images the same. Also, the resolution of the photos was high, e.g. 2592x1944 pixels, and I don't need such high resolution for these photos.
So, I started searching on Google and found an article describing the convert program from ImageMagick. I tried convert on a few photos as described:
$ convert -sample 50%x50% [source photo] [output photo]
Note that I used a percentage rather than a fixed size, as different photos had different resolutions; most of them are in landscape but a few are in portrait orientation.
50% worked well: the quality of the images is good and the size has been reduced to one-third.
I tried 40%; the size became nearly one-fifth, but the pictures started showing differences and I was not happy with them.
Then I started to read about "sample", and found out about "resample", "scale", "resize" and "adaptive-resize". I tried resize, adaptive-resize and scale at 40%, and voilà, scale showed the most acceptable quality with the best (smallest) file size, one-fifth of the original.
$ convert -scale 40%x40% DSC06437.JPG scale40.jpg
$ convert -resize 40%x40% DSC06437.JPG resize40.jpg
$ convert -adaptive-resize 40%x40% DSC06437.JPG adapresize40.jpg
And here is the result:
$ ls -l
total 2536
-rw-r--r-- 1 imtiaz users  350066 2010-01-18 16:03 adapresize40.jpg
-rw-r--r-- 1 imtiaz users 1592974 2010-01-18 16:00 DSC06437.JPG
-rw-r--r-- 1 imtiaz users  330641 2010-01-18 16:02 resize40.jpg
-rw-r--r-- 1 imtiaz users  319089 2010-01-18 16:00 scale40.jpg
$ identify DSC06437.JPG 
DSC06437.JPG JPEG 2592x1944 2592x1944+0+0 8-bit DirectClass 1.519MiB 0.000u 0:00.000
$ identify DSC06437.jpg 
DSC06437.jpg JPEG 1037x778 1037x778+0+0 8-bit DirectClass 312KiB 0.000u 0:00.000
Here is the script I used to do the whole job in one go. It took only 42 seconds for 71 pictures.
for img in *.JPG; do
  newname="${img%.JPG}"   # strip the .JPG extension
  convert -scale 40%x40% "$img" "output/$newname.jpg"
done
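As a side note on the landscape/portrait issue mentioned above: instead of a percentage, ImageMagick also accepts a bounding-box geometry such as '1024x1024>', which fits each photo inside the box while preserving its aspect ratio; the trailing > means only images larger than the box are shrunk. A hedged sketch (the 1024x1024 box and filenames are my own choices, not from the original post):

```shell
# Fit every photo inside a 1024x1024 box without distorting it.
# The '>' flag skips photos that are already smaller; quote the
# geometry so the shell does not treat '>' as output redirection.
mkdir -p output
for img in *.JPG; do
  convert -scale '1024x1024>' "$img" "output/${img%.JPG}.jpg"
done
```

This gives every output a predictable maximum dimension regardless of each original's resolution.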
See for yourself a SCALEd one @Flickr, click "All Sizes" for the RAW size.

History of a Logical Volume named ARCHIVE

Sunday, January 17, 2010
This is the history of a storage location named "ARCHIVE", created today. I hope it will stay alive for a few years to come.
This ARCHIVE partition is on a 320GB Samsung disk (HD321KJ) and covers nearly the whole disk. Only the first 2GB is used as a swap partition, just in case a running Linux installation needs it or a Live CD uses it during boot.
I want to keep all backups on this partition and free up the other disks, which currently hold them, for OS installation.
I intended to give the whole disk to archiving, so a single partition in a single PV would have sufficed. But I created 3 partitions of 100GB each and then added them to the volume group (VG). The rationale: if the hard disk starts giving problems, such as inode corruption, physical damage or bad blocks, that will usually affect particular sectors or blocks rather than the whole disk, and I can then move the data of the affected physical volume (PV) to a smaller hard disk (e.g. 120GB) using "pvmove" if needed. I hope this granularity will help me in the future.
Also, I decided to go for ext4 as it is superior to ext3.
I did all of the following using an openSUSE 11.2 Live CD, as it has the latest tools and good support for ext4 (the default FS on openSUSE 11.2 is ext4). The partitions themselves were created on the running Debian Lenny installation I had, but it does not support ext4 creation yet. So, the rest is how I did it.
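The layout described above (three ~100GB PVs in one VG, a single ext4 LV spanning them) can be sketched roughly as follows. This is a reconstruction under assumptions, not the exact commands from the post: the device names /dev/sdb2..4 and the VG name "archive" are placeholders, and all of these commands need root.

```shell
# Turn the three 100GB partitions into physical volumes.
pvcreate /dev/sdb2 /dev/sdb3 /dev/sdb4

# Group them into one volume group.
vgcreate archive /dev/sdb2 /dev/sdb3 /dev/sdb4

# One logical volume spanning all free space, formatted as ext4.
lvcreate -l 100%FREE -n ARCHIVE archive
mkfs.ext4 -L ARCHIVE /dev/archive/ARCHIVE

# Later, if one PV starts failing, its extents can be migrated off:
#   pvmove /dev/sdb3
```

The per-PV granularity is what makes the pvmove escape hatch useful: extents can be evacuated from a single suspect 100GB partition without touching the rest.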

Disk Read Performance Comparison with hdparm

Saturday, January 16, 2010
I recently did a read performance test on all of my hard disks using hdparm. A simple hdparm -t did the trick.
As expected, the latest 500GB SATA2 disk from Samsung outperformed all the others, but it also died very recently after just 1 year of service.
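For reference, the test itself is a one-liner per disk. The device names below are placeholders; hdparm -t needs root, and the timing is most meaningful on an otherwise idle system, averaged over a few runs:

```shell
# Buffered read timing for each disk (reads from the device for a
# few seconds and reports MB/sec).
for disk in /dev/sda /dev/sdb /dev/sdc; do
  hdparm -t "$disk"
done
```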

I found it very surprising that my current laptop's (Dell Vostro 1400) hard disk is slower than the 5-year-old 80GB disk in my desktop. The laptop is just a year old and has a Fujitsu 250GB SATA1 disk; the desktop disk is a 5-year-old Maxtor 80GB SATA1 disk. It seems the higher rotational speed of the Maxtor drive made the difference.

Also, the 120GB SATA2 disk from Samsung is not much faster than the 80GB SATA1 disk!
Though I trust Hitachi or Maxtor (now Seagate) more than Samsung in terms of reliability, in the Bangladeshi computer hardware market you don't always get what you want; you get only what is available at that moment. But that's another story.

Here is the performance graph and the accompanying result data.

Disc Model               Performance (MB/sec)   Speed (rpm)   Buffer Size
FUJITSU MHY2250BH        53.50                  5400          8 MB
Maxtor 6Y080M0           55.92                  7200          8 MB
SAMSUNG HD120IJ          60.44                  7200          8 MB
Hitachi HDT725032VLA360  73.15                  7200          16 MB
SAMSUNG HD321KJ          76.83                  7200          16 MB
SAMSUNG HD501LJ          79.82                  7200          16 MB

All hdparm tests were done on various Linux distributions: the Fujitsu on the laptop's Linux Mint 8, the Maxtor on CentOS 5.4, and the remaining 4 on Debian Lenny.

500GB SATA2 Hard Disk Crashed

Sunday, January 10, 2010
My primary hard disk, a 500GB SATA2 from Samsung, crashed.
Such a disaster. Just yesterday it was showing soft errors in smartctl; today it refused to wake up.
Even now, smartctl can still get device information from the HDD.
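For anyone in the same spot, these are the usual smartctl checks for a suspect disk. The device name /dev/sdb is a placeholder and the commands need root; this is a general sketch, not the exact diagnosis from the post:

```shell
# Overall health self-assessment reported by the drive.
smartctl -H /dev/sdb

# Errors the drive has logged at the hardware level.
smartctl -l error /dev/sdb

# Vendor attributes: watch Reallocated_Sector_Ct, Current_Pending_Sector
# and similar counters for signs of a dying disk.
smartctl -A /dev/sdb
```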

Can we really rely on a hard disk? This one is not even 1 year old, and one of the latest models.
fdisk and sfdisk still show the partition information and layout, but the PV and VG information (LVM) does not show up.

I have no idea about such "hard" errors. What to do?