Updates from July, 2011

  • Urban 23:54 on 11 Jul. 2011 Permalink |  

    Server status via SMS 

    Stormy weather makes me worried about my electronic equipment, especially servers. A summer storm can take out power, corrupt a HDD in the process, or, in the worst case, fry absolutely everything with a strategically placed lightning strike.

    But even on a quiet winter day, a hard disk can quietly fill with logs, one byte at a time, finally causing a bunch of services to die (I’m looking at you, MySQL).

    The point is, anything can happen, and you have to check often to prevent unplanned downtime, which can sometimes extend significantly because no one checked1. And checking gets more tedious the more stuff you have. So this post describes a quick & dirty method, tailored to my needs, on the cheap.

    A quick disclaimer: I do know about Pingdom and similar services, as well as Nagios. However, I wanted a simple lightweight script, which runs on my home server behind a firewall, and queries my public servers over HTTP. Then it sends me a text, once daily.

    This is how it works:

    • my home server acts as an agent and tries to contact all the public servers
    • each server performs a quick and dirty on-demand self-test, responding with a message that reports the available disk space, mysql status, etc.
    • a Python script on the home server aggregates all the responses into a concise form, suitable for texting
    • and sends it once a day.

    To test the public servers, I created a simple PHP script which performs some basic tasks a healthy server should be able to do, and returns a response. This is version 1:

    
    A<?php
        /* A stands for Apache.
           If only "A" gets served, the web server
           is alive, but PHP doesn't work. */

        /* Print P (short for PHP OK),
           followed by GBs of free disk space. */
        echo "P[" . floor(disk_free_space("/") / 1024 / 1024 / 1024) . "G]";

        /* Try connecting to MySQL. */
        mysql_connect("server", "user", "pass")
            or die("db_down");
        mysql_select_db("mysql")
            or die("db_err");

        /* Print a short string ("My") through the
           database to prove it works. */
        $res = mysql_query("select 'My' as my from user");
        $row = mysql_fetch_assoc($res);
        echo $row["my"];

        /* If everything works OK, this should
           return something like AP[10G]My,
           which simply means Apache works,
           the PHP interpreter is fine,
           there's 10G of free disk space
           and MySQL is fine. */
    ?>


    I wanted to get this summary delivered as an SMS notification. Twilio looked promising, but in the end it wouldn’t take my CC, so I used this awesome hack to send a free text in the form of a Google Calendar reminder.

    I extended the provided Python script to also get the free disk space of my home server and wrap everything up as a single SMS. The result looks something like this:

    Reminder: srv1_ok! c:1018M d:709054M srv2:AP[10G]My srv3:AP[2G]My
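
    Stripped of the calendar specifics, the aggregating part boils down to something like this (a rough sketch, not the actual script; the server URLs, the checked path and the send_sms stub are placeholders):

    import os
    import urllib.request

    # Public servers to poll; each one runs the PHP self-test shown above.
    # These URLs are placeholders.
    SERVERS = {
        "srv2": "http://srv2.example.com/status.php",
        "srv3": "http://srv3.example.com/status.php",
    }

    def check(url):
        """Fetch the self-test page; anything other than AP[..G]My means trouble."""
        try:
            return urllib.request.urlopen(url, timeout=10).read().decode().strip()
        except Exception:
            return "DOWN"

    def free_mb(path):
        """Free space on a local volume of the home server, in MB."""
        st = os.statvfs(path)
        return st.f_bavail * st.f_frsize // (1024 * 1024)

    def build_summary():
        parts = ["home:%dM" % free_mb("/")]
        parts += ["%s:%s" % (name, check(url)) for name, url in sorted(SERVERS.items())]
        return " ".join(parts)

    if __name__ == "__main__":
        summary = build_summary()   # e.g. "home:709054M srv2:AP[10G]My srv3:AP[2G]My"
        # send_sms(summary)         # here the Google Calendar reminder hack takes over
        print(summary)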

    This was at first meant to be just a form of “infrastructure unit test”, returning true if everything checks out OK, and false otherwise. But since there was room for additional info, I added more to fill the message.

    I’ve been somewhat reluctant to set up notifications like this in the past, because they basically mean more spam. But since I set one up at work (via e-mail), it’s been quite satisfying. An e-mail with the subject “Everything ok!” makes my day, and that’s what I get most of the time anyway.

    1. It just recently happened to me 🙂
     
  • Urban 14:02 on 14 May. 2011 Permalink |  

    Nuke Effects for iPad 

    Nuke Effects, an iPad version of the once-famous Nuclear Bomb Effects Computer, just hit the App Store.

    From the Nuke Effects description:

    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    In the 1960s, when the Cold War was at its peak and the world’s two superpowers were locked in a nuclear arms race, the U.S. Government Printing Office published a handy slide rule for estimating the effects of nuclear weapon explosions. It was available for $1.00 to anyone interested in “the study of effects data derived from nuclear weapons tests and from experiments designed to duplicate various characteristics of nuclear weapons”.

    Since then, many things have changed and this slide rule remains a relic of the past. Having become a collector’s item, it is only available for a handy sum on auction sites and as a rather impractical web-based version. Until now.

    This app is a faithful (although—for greater ease of reading—somewhat oversized) digital replica of the real thing. By responding to your touch much like the original plastic version, it brings this museum piece back to life.

    With it you can evaluate 28 different effects of nuclear weapons, of which 13 relate to blast, 5 to thermal radiation, 1 to initial nuclear radiation, 2 to early fallout, 6 to crater dimensions, and 1 to fireball dimensions. Most of the parameters are presented as functions of range and yield.

    The original slide rule was based on the book “The Effects of Nuclear Weapons”. This hefty 700+ page publication (which is in the public domain, as is the slide rule itself) is available at the support site of this app, together with links to the resources used to create this app.

    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    This app is free (as in free beer). It was inspired by the Trigonir iPad app by my good colleague Dragan and by the web version of the slide rule at Fourmilab. The app was created using the materials generously provided on the Fourmilab website, which you can also use to build your own paper & plastic version.

     
  • Urban 23:26 on 7 Apr. 2011 Permalink |  

    Efficient server backup 

    I do back up my servers, of course1. I rsync the image files (VM snapshots, since all servers are virtual) to a remote location. The good part about VM snapshots is that a consistent disk state can be copied while the machine is running, regardless of the OS and guest FS (though snapshots alone are also possible on NTFS, Ext+LVM, ZFS and possibly many other file systems, but each case has to be handled individually).

    So I devised the following strategy: since this site lives on a modest Linux machine with a net footprint of only 2GB, I decided to copy it off-site every night (rsync also compresses the data, so only 1GB has to be copied). For other (less actively changing) servers (net sizes of 10GB and 60GB), I decided on weekly and monthly backups, respectively. I have enough storage2, so I decided to keep old copies to have some fun (or rather, to see how long I’d last with Google’s strategy of not throwing anything away, ever).

    2GB is clearly not much, but it adds up over time: the nightly 2GB copies alone come to about 730GB a year, the weekly 10GB ones add another 520GB, and the monthly 60GB ones roughly 720GB more. All in all, adding up all the images, it should come close to 2TB / year.

    Now, 2 TB is also not that much. A 2TB drive costs about 80 EUR today3, and it will cost half as much before the year is over. But it becomes incredibly annoying to have your hard drive filled with nearly identical backup copies, gobbling up space like a starving sumo wrestler.

    So my first idea to bring the number down was to compress the backups on the fly. I played with NTFS compression on the target server and was quite satisfied: the images shrank to about 60% of the original size, which would yield 1.2TB/year. Better.

    It’s well known that NTFS compression is not very efficient, but it’s pretty fast. That’s a trade-off I don’t really need for what is basically one-time archiving.

    So I tried to shrink about 100GB of mostly duplicate data (50 almost identical images of 2GB, taken 24 hrs apart over 50 days) with different compression and deduplication software.

    For starters, since I’d had great experience with WinRAR, I tried that. WinRAR has a special “solid archive” mode which treats all files combined as a single data stream, which proves beneficial in most cases. Countless hours later, I was disappointed. The data only shrank to about 40%4, which is exactly the same as compressing individual files without solid compression. Pretty much the same story with tar+bzip25, which performed a tad worse with 44% compression.

    This is an excellent moment to ponder the difference between compression and deduplication: while they may seem the same (both reduce the size of a data stream by removing redundancy) and even operate in a similar manner (at least in my case, where different files were concatenated together using tar), they differ in the size of the compression window. The bzips of the world will obviously only use small windows to look for similarities and redundancy, while deduplication ignores fine-grained similarities and tries to identify identical file system blocks (which usually range from 4KB to 128KB).
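
    To make the distinction concrete, here is a toy illustration of block-level deduplication (not how Opendedup or ZFS work internally, just the principle): split the data into fixed-size blocks, hash each block, and count how many blocks are actually unique.

    import hashlib

    def dedup_stats(paths, block_size=128 * 1024):
        """Return (total_blocks, unique_blocks) over a set of files,
        using the SHA-1 hash of each fixed-size block as its identity."""
        seen = set()
        total = 0
        for path in paths:
            with open(path, "rb") as f:
                while True:
                    block = f.read(block_size)
                    if not block:
                        break
                    total += 1
                    seen.add(hashlib.sha1(block).hexdigest())
        return total, len(seen)

    # Example: how much of 50 nearly identical images is really unique data?
    # total, unique = dedup_stats(["backup-%02d.img" % i for i in range(50)])
    # print("unique blocks: %.0f%%" % (100.0 * unique / total))

    The seen set here plays the role of the hash tables that real deduplicating filesystems keep in RAM for block lookup.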

    So I figured deduplication might be worth a try. I found an experimental filesystem with great promise, called Opendedup (also SDFS). I got it to work almost without hiccups: it creates a file-based container which can be mounted as a volume6. However, it turned out to be somewhat unreliable and crashed a couple of times. I still got the tests done, though, and it significantly outperformed simple compression by bringing the data down to 20% (with a 128K block size; it might perform even better with a block size of 4K, but that would take 32 times more memory, since all the hash tables for block lookup must be stored in RAM).

    Next stop: ZFS. I repeated the same test with ZFS@8K blocks, and it performed even better, with an over 6-fold reduction in size (likely a combination of the smaller block size and added gzip-9 block compression, which Opendedup doesn’t support) — the resulting images were only around 15% of the original size.

    Impressive, but on the other hand, what the heck? There’s no way 15GB of unique data has mysteriously appeared in 50 days within this 2GB volume. The machine is mostly idle, doing nothing but appending data to a couple of log files.

    I’ve read much about ZFS block alignment, and it still made no sense. Files within the VM image should not just move around the FS, so where could the difference come from? As the last option, I tried xdelta, a pretty efficient binary diff utility. It yielded about 200MB of changes between successive images, which would bring 100GB down to 12% (2GB + 49 x 200MB).

    Could there really be that much difference in the images? I suspected extfs’ atime attribute, which would (presumably) change a lot of data by just reading files. It would make perfect sense. So I turned it off7. Unfortunately, there was no noticeable improvement.

    Then I tried to snapshot the system state every couple of hours. Lo and behold, deltas of only 100K most of the day, and then suddenly, a 200MB spike. It finally occurred to me that I’d been doing some internal dumping (mysqldump + a tar of /var/www) every night as a cron job, yielding more than 200MB of data, which I’d SCP somewhere else and then immediately delete. And while deleting it does virtually free the disk space, the data’s still there — in a kind of digital limbo.

    So I tried to zero out the empty space, like this (this writes a 100MB file full of zeros; change the count to fill most of the disk; when done, delete it):

    dd if=/dev/zero of=zero.file bs=1024 count=102400
    rm zero.file

    The results were immediate: the backup snapshot itself shrank to 1.7GB. And, more importantly, the successive backups differed by only about 700K (using xdelta again). My first impression is that Opendedup and ZFS should also do a much better job now.

    But having gone through all that trouble, I’ve found myself longing for a more elegant solution. And in the long run, there really is just one option, which will likely become prevalent in a couple of years: to make efficient backups, you need an advanced filesystem which allows you to back up only the snapshot deltas. Today, this means ZFS8. And there aren’t that many options if you want ZFS. There’s OpenIndiana (formerly OpenSolaris), which I hate. It’s bloated, eats RAM for breakfast and has an awfully unfamiliar service management. And to top it all off, it doesn’t easily support everything I need (which is: loads of Debian packages).

    Then there’s ZFS-FUSE, and ZFS on BSD. Both lag significantly behind the latest zpool versions, not to mention slow I/O speeds. I just wouldn’t put that into production.

    Luckily, there’s one last option: Nexenta Core — a Solaris kernel with a GNU userland. I’ve seen some pretty good reviews which convinced me to give it a try, and I’ve been running it for a couple of weeks without problems. And if it delivers on its promise, it really is the best of both worlds. The included zpool (version 26) supports deduplication, endless snapshots, data integrity checking, and the ability to send an entire filesystem snapshot or just the deltas with the built-in zfs send. And the best part, compared to OpenIndiana: everything I need installs or compiles just fine. Expect to hear more about this.

    1. last week was World Backup Day 2011
    2. two 2TB WD Elements drives attached to a recycled EEE701 netbook
    3. look here
    4. WinRAR 3.92, solid archive, compression=rar, method=best
    5. a simple tar cjvf, bzip2 v1.05
    6. I even succeeded in mounting it as a drive under Windows, using Dokan, a Windows version of FUSE
    7. by adding the noatime flag in /etc/fstab and remounting the filesystem
    8. I’m eagerly awaiting BTRFS, but it’s currently pretty much in its infancy
     
    • dare 14:36 on 14 Apr. 2011 Permalink

      wow, you’re really into this sysadmin sheit 🙂

  • Urban 02:51 on 1 Feb. 2011 Permalink |  

    Electrons 

    As promised, Electrons is in the App Store, and it’s just amazing.

    It’s a charged particle simulator for iPad. It allows you to create dozens of positively or negatively charged particles, either freely roaming in space, or contained within conducting bodies. You can observe complex particle interactions and resulting electric forces, create capacitors, simulate a lightning rod, a cathode ray tube (CRT) and much, much more. Through play, you can effortlessly gain deeper understanding of many natural phenomena. Following the included guided tour of 10 experiments will give you further insights into the world of electricity.

    A perfect companion for students and teachers of physics and electrical engineering, or anyone interested in understanding one of the four fundamental forces — the one without which there would be no lightbulbs or elevators, no radio and television, no computers, no Internet, and for that matter—no life.

    The Electrons app includes an 11-page guide explaining the basics of electric forces and Coulomb’s law, and provides 10 guided experiments which you can try on your own. By following them, you will systematically unravel many of the seemingly puzzling mysteries of nature.

    By following the guide, you can:
    ⊕ Get acquainted with the basics of attractive and repulsive forces.
    ⊕ Learn why the electric field inside a conductor equals zero.
    ⊕ Learn why the electric field is stronger at corners and pointy edges.
    ⊕ Simulate an electrostatic shock (redistribution of charge).
    ⊕ Create a capacitor and observe its homogeneous electric field.
    ⊕ Learn how to create a do-it-yourself electric field probe.
    ⊕ Learn how to neutralize an electric field.
    ⊕ Demonstrate how a cathode ray tube deflects particles.
    ⊕ Simulate a lightning rod and observe how it “attracts” lightning.

    Disclaimer
    Such a great teaching aid would not have been possible without a true visionary — my professor of Fundamentals of Electrical Engineering (about 10 years ago), the late prof. Vojko Valenčič. At the turn of the millennium, he envisioned and developed a simulator ten times more powerful than Electrons. It was called JaCoB, and it is still freely available at jacob.fe.uni-lj.si. Although the JaCoB source code is available under the GNU GPL, it has not been used in any way in the development of Electrons, which is purely an extension of Gravity Lab. Only the concepts that prof. Valenčič taught, explained and demonstrated during his courses, and the basic idea behind JaCoB — to make learning science fun — were used in the making of this project. I sincerely hope someone will find it even a fraction as useful as I found JaCoB.

     
    • Roman 09:29 on 1 Feb. 2011 Permalink

      Not only is the app excellent, it also has added value. You’ve actually brought Vojko’s vision back to life. Fundamentals to every village.

      Bravo

    • Urban 22:17 on 1 Feb. 2011 Permalink

      Hehe, thanks. 🙂 Did you have it too? JaCoB really was a useful thing, and quite ahead of its time.. I wonder if anyone still uses it in Fundamentals these days..

  • Urban 22:31 on 19 Jan. 2011 Permalink |  

    Beyond gravity 

    With Gravity Lab, I’ve built what I think is a decent particle simulation framework. Which brings me to my hidden agenda: something I’ve wanted to do for a long time, but could only do incrementally, since the complexity of the entire task was just too overwhelming. So I present to you another particle simulator, codenamed Charges.

    It was a no-brainer, really. Ok, sure — there’s no money in educational apps. There’s that. But the thing was practically already done. Electric forces are just like gravitational forces: they obey the inverse square law (read more here, or here). It’s just a matter of inverting the polarity, i.e., introducing “negative masses”. And changing some constants.

    Or so I thought.

    It turns out a charged particle simulator is pretty useless by itself: particles always recombine (i.e., neutralize each other), or fly far away from each other until they get garbage-collected.

    What I needed were solid (metallic) objects, which could trap the particles — so I could observe the forces, make capacitors, simulate cathode ray tubes (CRT), and generally bring the “static” into “electrostatic”. So I decided to implement objects. And this is an epic journey of a developer, struggling with an interesting problem and generating much flow™ in the process. 🙂

    Simple enough, I decided to support only rectangles and circles. Why? Because this way, it can be quite easily checked whether a particle has hit a wall. When you tap the screen, I simply loop through an array of all objects and check if you tapped inside a body. If so, I set the parent of the particle to the ID of that body. Then, when calculating the motion of the particle (which is done by summing the forces of all the other particles), I only need to check if the particle has hit the parent object’s wall. If the parent object is a rectangle, it’s really simple, like this:

    if rectangle
        if parent.left_wall.x < particle.x < parent.right_wall.x
            move freely (left or right)
        else 
            dont move along x axis
        end
    
        if parent.bottom_wall.y < particle.y < parent.top_wall.y
            move freely (up or down)
        else
            dont move along y axis
        end
    end
    

    With circles, I hit the first obstacle. You can’t separate the x and y coordinate checks into separate conditions, because they are dependent. I tried many options without real success, and particles always got stuck in some kind of deadlocked state. What finally proved to be the most efficient solution was checking whether the particle’s future position is too far away from the circle center (more than the circle radius away), and projecting it back onto the circle boundary. Like this:

    if circle
        if (x^2 + y^2 < circle.radius^2)   
            move freely along x and y
        else
            phi = atan2(y,x)
            x = circle.radius * cos(phi)
            y = circle.radius * sin(phi)
        end
    end
    

    It worked like a charm. But I faced a more dire problem, one that could not be solved in my limited particle-has-a-single-parent model. Namely, my particles couldn’t migrate from one parent to the next. Which would, of course, happen in nature: if you bring together a charged metallic object and an uncharged metallic object, some of the charge from the first one will be forced out into the second one.

    I wanted that, and it obviously couldn’t be done. Indeed, the composite objects made of circles and rectangles can become quite complex; how could one force the particles to stay within an object in such simple terms as shown above?

    I slept on it, and slept some more. And for the first time I can remember, the solution suddenly blinded me one morning. Yesterday morning, that is. And it goes like this.

    Timesharing.

    Let me explain. If a particle happens to be inside an overlap of two or more metallic objects (i.e., inside two or more objects at the same time), cycle through all of them every n frames of the animation. Let them all be parents, but not at the same time. And let the electrostatic forces that drive the particles away from each other do all the work.

    The bottom right particle in the picture above desperately wants to break away towards the south-east (down and to the right). If we bring in another body, the particle now has two parents (overlap). We iterate through both of them and let the particle just savor the moment for a while. The moment it is assigned to object 2 above, it is propelled towards the south-east, no longer bound by the first object. When there’s no more overlap, the time-sharing doesn’t happen any more. Voila.
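
    In (entirely hypothetical) code, the time-sharing step might look something like this; the class and field names are made up for the sketch and are not taken from the actual app:

    from dataclasses import dataclass

    SWITCH_EVERY = 5  # animation frames between parent switches

    @dataclass
    class Circle:
        cx: float
        cy: float
        r: float

        def contains(self, x, y):
            return (x - self.cx) ** 2 + (y - self.cy) ** 2 < self.r ** 2

    @dataclass
    class Particle:
        x: float
        y: float
        parent: object = None

    def update_parent(particle, bodies, frame):
        """Time-share the parent among all bodies currently overlapping the particle."""
        inside = [b for b in bodies if b.contains(particle.x, particle.y)]
        if not inside:
            particle.parent = None      # free particle, no walls to enforce
            return
        if particle.parent not in inside or frame % SWITCH_EVERY == 0:
            # Cycle to the next body in the overlap; the repulsive forces
            # then push the particle out through whichever parent lets it go.
            i = inside.index(particle.parent) if particle.parent in inside else -1
            particle.parent = inside[(i + 1) % len(inside)]
        # The wall checks (the rectangle/circle logic above) are then applied
        # against particle.parent only.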

    So hopefully, a new and amazing simulator will soon hit the App store. Stay tuned.

     
    • Bozo 19:44 on 20 Jan. 2011 Permalink

      no no no… you’re not done with gravity yet :))) last night I saw something in a documentary and you have to program it!!!… anyway, I’ll come over one of these days so you can show me all of this!

    • Urban 23:27 on 20 Jan. 2011 Permalink

      Come on, share something on Delicious :p
