Updates from May, 2011

  • Urban 00:07 on 12 May. 2011 Permalink

    A little home server that could 

    My home server is an old EEE 701 netbook with a 2TB WD Elements drive attached. My selection was mostly guided by three factors: I wanted a small, silent system with low power consumption.

    I was considering many alternatives, among them a PC Engines Alix machine (500MHz Geode + 256MB RAM) with only 3-5 watts of power consumption, but eventually I decided on a 7″ EEE, for the following reasons:

    • integrated UPS (laptop battery with >2hrs of life; saved me more than once)
    • integrated monitor and keyboard (priceless when things get dirty)
    • expandable RAM (I installed 1GB)
    • low power consumption (with disabled wifi, screen and camera) at 7-9W
    • slightly faster CPU than Alix (EEE is underclocked from 900 to 600MHz, which should extend battery life. I played with overclocking, but concluded it was not worth the fear of an occasional unexpected freeze)
    • much lower power consumption than modern-day quad cores, but just as much CPU power as high-end PCs had 10 years ago (much more is not needed for a file server anyway)
    • small form factor
    • cheap (can obtain a second-hand replacement for a small price, should one be needed)
    • had one lying around anyway, collecting dust.

    I run an nLited Windows XP on it. I gave this a lot of thought, deciding between XP, Ubuntu and the original Xandros. I still have the latter two installed (on separate media: the internal SSD and an SD card), but XP eventually won for the following reasons:

    • On-the-fly NTFS compression (reduces used storage on the 4GB SSD system disk from 3.9GB to 2.8GB and my data from 1500 to 1200GB; in other words, with more than 1GB of junk installed in Program Files, the 4GB internal drive is still only 65% full). NTFS also supports shadow copies of open files and hard links for rsync link-farm backups.
    • Some Windows-only services and utils I use (EyeFi server, Fitbit uploader, Total Commander, WinDirStat, etc.).
    • An actually *working* remote desktop (as opposed to the various poorly performing VNC-based solutions).
    • An nLited XP (with most unnecessary services completely removed) has a memory footprint of <100 MB and easily clocks 6 months of uptime.

    I do occasionally regret my choice, but for now familiarity and convenience still trump unrealized possibilities. The upcoming Btrfs, though, might be just enough to tip the scale in favor of Linux.


    What I run on it

    • Filezilla server for remote FTP access (I use FTP-SSL only)
    • Rsync server (Deltacopy) to efficiently sync stuff from remote locations (see the sketch after this list)
    • WinSSHD to scp stuff from remote locations
    • Remote desktop server for local access
    • Windows/samba shares for easy local file access
    • Firefly for exposing MP3 library to PC/Mac computers running iTunes (v1586 works best, later versions seem to hang randomly)
    • A cloud backup service to push the most important stuff to the cloud
    • EyeFi server, so photos automatically sync from camera to the server
    • “Time Machine server”, which is nothing but a .sparsebundle on a network share, allowing me to backup my Mac machines (tutorial)
    • Gmailbackup, which executes as a scheduled task.
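
    To give an idea of how remote machines feed this box, here is a minimal rsync invocation; a sketch only, since DeltaCopy exposes a standard rsync daemon and the module name "backup" below is made up:

    # push local data into a DeltaCopy module on the home server
    # (the "backup" module must be configured in DeltaCopy first)
    rsync -avz /data/ homeserver::backup/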


    Some other random considerations and notes

    • Disable swap file when using SSD (especially such a small one).
    • No more partitions for large drives: I just “split” the 2TB drive into dirs. The days of partitioning HDDs are long gone; such partitions brought nothing but pain whenever a small one filled up and needed resizing. Think of dirs as “dynamic pools”.
    • No WiFi means no reliability issues, more bandwidth, more free spectrum for other users and more security.
    • WD Elements runs 10ºC cooler than WD Mybook when laid horizontally and is also cheaper.
    • Use Truecrypt containers to mount sensitive stuff. When the machine is rebooted, they are unmounted and useless until the password is entered again.
    • Use PKI (public-key) auth instead of keyboard-interactive authentication for publicly exposed remote connections.
    • Use a firewall to allow only connections from trusted sources.

    In any case, this is just a file/media/backup server and I’ve been quite satisfied with it so far (I’ve been using it for over a year). However, all my web servers are virtual, hosted elsewhere and regularly rsynced to this one.

    • dare 19:30 on 12 May. 2011 Permalink

      i’m impressed. especially that you have time to measure the disk’s temperature in different positions 🙂
      seriously, you’ve convinced me. you’re saying used ones go for 100 EUR?

      hey, what if you cross-posted this to Obrlizg? seems like really cool geeky stuff to me

    • wujetz 08:58 on 13 May. 2011 Permalink

      you’ve convinced me too…

      just one question – is the thing powerful enough to also host a game server (BF, CS, …)?

    • Urban 21:58 on 13 May. 2011 Permalink

      @dare: I discovered the temperature thing by accident, because every now and then I run the excellent and free CrystalDiskInfo (it tells you when errors start accumulating in SMART, which means the disk is about to die). It was showing 60ºC in big red letters, which is not good; the WD Elements doesn’t go over 45ºC.

      for used ones, check bolha; right now someone’s selling one for 99 EUR, even with 2GB of RAM..

      @wujetz: no idea how much CPU that eats; the stock frequency is underclocked to 630MHz, normal is around 900MHz, and you can even overclock it to 1GHz (you simply pick the speed from the menu of the excellent utility http://www.cpp.in/dev/eeectl/ ).
      A gig is already a solid speed, but for that you need somewhat better cooling (at least the fan at max)

      for more info, check the forum of die-hard users: http://forum.eeeuser.com/viewforum.php?id=3

  • Urban 23:26 on 7 Apr. 2011 Permalink

    Efficient server backup 

    I do back up my servers, of course [1]. I rsync the image files (VM snapshots, since all my servers are virtual) to a remote location. The good part about VM snapshots is that a consistent disk state can be copied while the machine is running, regardless of the OS and guest FS (snapshots alone are also possible on NTFS, ext+LVM, ZFS and many other file systems, but each case has to be handled individually).

    So I devised the following strategy: since this site lives on a modest Linux machine with a net footprint of only 2GB, I decided to copy it off-site every night (rsync also compresses the data in transit, so only about 1GB has to be transferred). For other, less actively changing servers (net sizes of 10GB and 60GB), I decided on weekly and monthly backups, respectively. I have enough storage [2], so I decided to keep the old copies to have some fun (or rather, to see how long I’d last with Google’s strategy of never throwing anything away).
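
    The nightly copy itself can be a simple cron’d rsync. A sketch, with a made-up host and paths (-z compresses in transit; --partial lets an interrupted image transfer resume):

    # run nightly from the backup box
    rsync -avz --partial webhost:/vm/images/blog.img /backups/blog/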

    2GB is clearly not much, but it adds up over time: all the images together should come close to 2TB per year.

    Now, 2TB is also not that much. A 2TB drive costs about 80 EUR today [3], and it will cost half as much before the year is over. But it becomes incredibly annoying to have your hard drive filled with nearly identical backup copies, gobbling up space like a starving sumo wrestler.

    So my first idea to bring the number down was to compress the backups on the fly. I played with NTFS compression on the target server and was quite satisfied: the images shrunk to about 60% of the original size, which would yield 1.2TB/year. Better.

    It’s well known that NTFS compression is not very efficient, but it’s pretty fast. That’s a trade-off I don’t really need for what is basically a one-time archiving.

    So I tried to shrink about 100GB of mostly duplicate data (50 almost identical images of 2GB, taken 24 hrs apart over 50 days) with different compression and deduplication software.

    For starters, since I’d had great experiences with WinRAR, I tried that. WinRAR has a special “solid archive” mode, which treats all files combined as a single data stream and usually proves beneficial. Countless hours later, I was disappointed: the data only shrunk to about 40% [4], exactly the same as when compressing the files individually, without solid mode. Pretty much the same story with tar+bzip2 [5], which performed a tad worse at 44%.
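
    For reference, the two invocations, reconstructed from the footnotes (file names are placeholders):

    # WinRAR: solid archive (-s), best compression (-m5)
    rar a -s -m5 images.rar image-*.img

    # tar + bzip2
    tar cjvf images.tar.bz2 image-*.img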

    This is an excellent moment to ponder the difference between compression and deduplication. While they may seem the same (both reduce the size of a data stream by removing redundancy) and even operate in a similar manner (at least in my case, where the files were concatenated together using tar), they differ in the size of the compression window. The bzips of the world only use small windows to look for similarities and redundancy, while deduplication ignores fine-grained similarities and instead identifies identical file-system blocks (which usually range from 4KB to 128KB).

    So I figured deduplication might be worth a try. I found an experimental filesystem with great promise, called Opendedup (also SDFS). I got it to work almost without hiccups: it creates a file-based container which can be mounted as a volume [6]. However, it turned out to be somewhat unreliable and crashed a couple of times. I still got the tests done, and it significantly outperformed simple compression, bringing the data down to 20% (with a 128K block size; it might perform even better with 4K blocks, but that would take 32 times more memory, since all the hash tables for block lookup must be stored in RAM).

    Next stop: ZFS. I repeated the same test with ZFS at 8K blocks, and it performed even better, with over a 6-fold reduction in size: the resulting images were only around 15% of the original (likely a combination of the smaller block size and added gzip-9 block compression, which Opendedup doesn’t support).
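
    A ZFS dataset tuned like the one in this test would look roughly like this (the pool and dataset names are made up):

    zpool create tank /dev/sdb            # pool on a spare disk
    zfs create tank/backups
    zfs set recordsize=8K tank/backups    # 8K blocks, as in the test
    zfs set dedup=on tank/backups         # block-level deduplication
    zfs set compression=gzip-9 tank/backups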

    Impressive, but on the other hand, what the heck? There’s no way 15GB of unique data mysteriously appeared within this 2GB volume in 50 days. The machine is mostly idle, doing nothing but appending data to a couple of log files.

    I’ve read much about ZFS block alignment, and it still made no sense. Files within the VM image should not just move around the FS, so where could the difference come from? As a last resort, I tried xdelta, a pretty efficient binary diff utility. It yielded about 200MB of changes between successive images, which would bring 100GB down to 12% (2GB + 49 × 200MB).
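
    Computing and applying such a delta looks like this (assuming the xdelta3 variant; the file names are examples):

    # encode: delta between two successive images
    xdelta3 -e -s monday.img tuesday.img tuesday.vcdiff

    # decode: reconstruct the newer image from the older one + the delta
    xdelta3 -d -s monday.img tuesday.vcdiff tuesday.img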

    Could there really be that much difference between the images? I suspected extfs’s atime attribute, which would (presumably) change a lot of data just by reading files. It would make perfect sense, so I turned it off [7]. Unfortunately, there was no noticeable improvement.
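
    For the record, turning atime off is a one-line change (see footnote 7; the device and filesystem below are examples):

    # /etc/fstab: add noatime to the mount options
    # /dev/sda1  /  ext3  defaults,noatime  0  1

    # apply without rebooting
    mount -o remount,noatime /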

    Then I tried snapshotting the system state every couple of hours. Lo and behold: deltas of only 100K for most of the day, and then, suddenly, a 200MB spike. It finally occurred to me that I’d been doing some internal dumping (mysqldump + a tar of /var/www) every night as a cron job, yielding more than 200MB of data, which I’d scp somewhere else and immediately delete. And while deleting it does virtually free the disk space, the data’s still there — in a kind of digital limbo.

    So I tried to zero out the empty space, like this:

    # write a file full of zeros (100MB here; increase count until
    # most of the free space is filled), then delete it
    dd if=/dev/zero of=zero.file bs=1024 count=102400
    rm zero.file

    The results were immediate: the backup snapshot itself shrunk to 1.7GB. More importantly, successive backups now differed by only about 700K (measured with xdelta again). My first impression is that Opendedup and ZFS should also do a much better job now.

    But having gone through all that trouble, I found myself longing for a more elegant solution. And in the long run, there really is just one option, which will likely become prevalent in a couple of years: to make efficient backups, you need an advanced filesystem which lets you back up only the snapshot deltas. Today, this means ZFS [8]. And there aren’t that many options if you want ZFS. There’s OpenIndiana (formerly OpenSolaris), which I hate: it’s bloated, eats RAM for breakfast and has an awfully unfamiliar service management. To top it all off, it doesn’t easily support everything I need (namely, loads of Debian packages).

    Then there’s ZFS-FUSE, and ZFS on BSD. Both lag significantly behind the latest zpool versions, not to mention their slow I/O. I just wouldn’t put that into production.

    Luckily, there’s one last option: Nexenta Core, a Solaris kernel with a GNU userland. I’d seen some pretty good reviews which convinced me to give it a try, and I’ve been running it for a couple of weeks without problems. If it delivers on its promise, it really is the best of both worlds. The included zpool (version 26) supports deduplication, endless snapshots, data integrity checking, and the ability to send an entire filesystem snapshot, or just the deltas, with the built-in zfs send. And the best part, compared to OpenIndiana: everything I need installs or compiles just fine. Expect to hear more about this.
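
    The snapshot-delta workflow that makes this worthwhile, sketched with made-up dataset names:

    # take a snapshot each night
    zfs snapshot tank/vm@2011-05-12

    # ship only the changes since the previous night's snapshot
    zfs send -i tank/vm@2011-05-11 tank/vm@2011-05-12 | ssh backuphost zfs receive backup/vm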

    1. Last week was World Backup Day 2011
    2. Two 2TB WD Elements drives attached to a recycled EEE 701 netbook
    3. Look here
    4. WinRAR 3.92, solid archive, compression=rar, method=best
    5. Simple tar cjvf, bzip2 v1.05
    6. I even succeeded in mounting it as a drive under Windows, using Dokan, a Windows version of FUSE
    7. Adding the noatime flag in /etc/fstab and remounting the filesystem
    8. I’m eagerly awaiting Btrfs, but it’s currently pretty much in its infancy
    • dare 14:36 on 14 Apr. 2011 Permalink

      wow, you’re really into this sysadmin sheit 🙂

  • Urban 02:51 on 1 Feb. 2011 Permalink

    Electrons 

    As promised, Electrons is in the App store, and it’s just amazing.

    It’s a charged-particle simulator for the iPad. It allows you to create dozens of positively or negatively charged particles, either roaming freely in space or contained within conducting bodies. You can observe complex particle interactions and the resulting electric forces, create capacitors, simulate a lightning rod or a cathode ray tube (CRT), and much, much more. Through play, you can effortlessly gain a deeper understanding of many natural phenomena. Following the included guided tour of 10 experiments will give you further insights into the world of electricity.

    A perfect companion for students and teachers of physics and electrical engineering, or anyone interested in understanding one of the four fundamental forces — the one without which there would be no lightbulbs or elevators, no radio and television, no computers, no Internet, and for that matter—no life.

    The Electrons app includes an 11-page guide explaining the basics of electric forces and Coulomb’s law, and provides 10 guided experiments you can try on your own. By following them, you will systematically unravel many of nature’s seemingly puzzling mysteries.

    By following the guide, you can:
    ⊕ Get acquainted with the basics of attractive and repulsive forces.
    ⊕ Learn why the electric field inside conductors equals zero.
    ⊕ Learn why the electric field is stronger at corners and pointy edges.
    ⊕ Simulate an electrostatic shock (redistribution of charge).
    ⊕ Create a capacitor and observe its homogeneous electric field.
    ⊕ Learn how to create a do-it-yourself electric field probe.
    ⊕ Learn how to neutralize an electric field.
    ⊕ Demonstrate how a cathode ray tube deflects particles.
    ⊕ Simulate a lightning rod and observe how it “attracts” lightning.

    Disclaimer
    Such a great teaching aid would not have been possible without a true visionary: the late prof. Vojko Valenčič, my professor of Fundamentals of Electrical Engineering about 10 years ago. At the turn of the millennium, he envisioned and developed a simulator ten times more powerful than Electrons. It was called JaCoB, and it is still freely available at jacob.fe.uni-lj.si. Although JaCoB’s source code is available under the GNU GPL, none of it was used in the development of Electrons, which is purely an extension of Gravity Lab. Only the concepts that prof. Valenčič taught, explained and demonstrated during his courses, and the basic idea behind JaCoB — to make learning science fun — were used in this project. I sincerely hope someone finds it at least a fraction as useful as I found JaCoB.

    • Roman 09:29 on 1 Feb. 2011 Permalink

      Not only is the app excellent, it also has added value. You’ve actually revived Vojko’s vision. Fundamentals to every village.

      Bravo

    • Urban 22:17 on 1 Feb. 2011 Permalink

      Hehe, thanks. 🙂 Did you have him too? JaCoB really was a useful thing, and quite ahead of its time.. I wonder if anyone still uses it in the Fundamentals course..

  • Urban 22:31 on 19 Jan. 2011 Permalink

    Beyond gravity 

    With Gravity Lab, I’ve built what I think is a decent particle simulation framework. Which brings me to my hidden agenda: something I’ve wanted to do for a long time, but could only do incrementally, since the complexity of the entire task was just too overwhelming. So I present to you another particle simulator, codenamed Charges.

    It was a no-brainer, really. OK, sure — there’s no money in educational apps. There’s that. But the thing was practically already done: electric forces behave just like gravitational forces, obeying the inverse-square law (read more here, or here). It’s just a matter of inverting the polarity, i.e., introducing “negative masses”, and changing some constants.
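
    For reference, the two force laws side by side (standard textbook forms, not taken from the app):

    \[ F_g = G\,\frac{m_1 m_2}{r^2} \qquad F_e = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2} \]

    Swap masses for charges (which, unlike masses, can be negative, so the force can repel as well as attract), swap the constant, and the same simulation core carries over.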

    Or so I thought.

    It turns out a charged particle simulator is pretty useless by itself: particles always recombine (i.e., neutralize each other), or fly far away from each other until they get garbage-collected.

    What I needed were solid (metallic) objects, which could trap the particles — so I could observe the forces, make capacitors, simulate cathode ray tubes (CRT), and generally bring the “static” into “electrostatic”. So I decided to implement objects. And this is an epic journey of a developer, struggling with an interesting problem and generating much flow™ in the process. 🙂

    Simple enough: I decided to support only rectangles and circles. Why? Because this way it can quite easily be checked whether a particle has hit a wall. When you tap the screen, I simply loop through an array of all objects and check whether you tapped inside a body. If so, I set the particle’s parent to the ID of that body. Then, when calculating the motion of the particle (done by summing the forces of all the other particles), I only need to check whether the particle has hit the parent object’s wall. If the parent object is a rectangle, it’s really simple, like this:

    if rectangle
        if parent.left_wall.x < particle.x < parent.right_wall.x
            move freely (left or right)
        else 
            dont move along x axis
        end
    
        if parent.bottom_wall.y < particle.y < parent.top_wall.y
            move freely (up or down)
        else
            dont move along y axis
        end
    end
    

    With circles, I hit the first obstacle. You can’t separate the x and y coordinate checks into separate conditions, because they are dependent. I tried many options without real success, and particles always got stuck in some kind of deadlocked state. What finally proved to be the most efficient solution was checking whether the particle’s future position lies too far from the circle’s center (more than the radius away), and projecting it back onto the circle boundary. Like this:

    if circle
        // x and y are measured from the circle's center
        if (x^2 + y^2 < circle.radius^2)
            move freely along x and y
        else
            // project the particle back onto the circle boundary
            phi = atan2(y, x)
            x = circle.radius * cos(phi)
            y = circle.radius * sin(phi)
        end
    end
    

    It worked like a charm. But I faced a more dire problem, one that could not be solved in my limited particle-has-a-single-parent model. Namely, my particles couldn’t migrate from one parent to the next. Which would, of course, happen in nature: if you bring together a charged metallic object and an uncharged metallic object, some of the charge from the first one will be forced out into the second one.

    I wanted that, and it obviously couldn’t be done. Indeed, the composite objects made of circles and rectangles can become quite complex; how could one force the particles to stay within an object in such simple terms as shown above?

    I slept on it, and slept some more. And for the first time I can remember, the solution suddenly blinded me one morning. Yesterday morning, that is. And it goes like this.

    Timesharing.

    Let me explain. If a particle happens to be inside an overlap of two or more metallic objects (i.e., inside two or more objects at the same time), cycle through all of them every n frames of the animation. Let them all be parents, but not at the same time. And let the electrostatic forces that drive the particles away from each other do all the work.

    The bottom-right particle in the picture above desperately wants to break away toward the south-east (down and to the right). If we bring in another body, the particle now has two parents (an overlap). We iterate through both of them and let the particle just savor the moment for a while. And the moment it is assigned to object 2 above, it is propelled toward the south-east, no longer bound by the first object. When there’s no more overlap, the time-sharing stops. Voila.

    So hopefully, a new and amazing simulator will soon hit the App store. Stay tuned.

    • Bozo 19:44 on 20 Jan. 2011 Permalink

      no no no… you’re not done with gravity yet :))) last night I saw something in a documentary and you have to program it!!!… anyway, I’ll come over some time and you can show me all of this!

    • Urban 23:27 on 20 Jan. 2011 Permalink

      Go share something on delicious :p

  • Urban 13:27 on 28 Nov. 2010 Permalink

    Gravity Lab for iPad 

    Gravity Lab is a gravity simulator for the iPad. It allows you to create massive bodies on a 2-D plane and set their initial velocities by dragging your finger. Their mutual attraction is then simulated, causing them to accelerate (and combine) according to Newton’s law of universal gravitation. Newton’s law states that every massive body in the universe attracts every other massive body, with a force proportional to the masses of both bodies and inversely proportional to the square of the distance between them.

    All beings living on the Earth’s surface become, through life’s experience, intimately acquainted with the force of gravity: it keeps us from drifting freely into space. Yet there’s something extremely limiting in our perception: the only significant gravitational force we can perceive is that of the Earth, whose mass is approximately 60,000,000,000,000,000,000,000 times greater than that of an average human standing on its surface. The attraction doesn’t feel very mutual (although the forces on the two bodies are equal in size), due to the much higher mass, and consequently higher inertia, of the Earth.

    Gravitational force is also the weakest force in nature, compared to the electromagnetic, weak and strong forces (take a look at the comparison here). Gravitational attraction is a factor of 10^36 (or 1,000,000,000,000,000,000,000,000,000,000,000,000) weaker than the electromagnetic force (which also works universally — that is, with unlimited range). The gravitational constant G used to calculate the force in Newton’s equation F = G·m1·m2/r^2 is a mere 0.00000000006674 (i.e., 6.674 × 10^-11 in SI units), which means the masses have to be extremely large, and the distances reasonably short, for the force F to become noticeable.
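
    To get a feel for just how weak that is, here is a quick back-of-the-envelope example (my numbers, not from the original post): the attraction between two 100 kg people standing 1 m apart is

    \[ F = G\,\frac{m_1 m_2}{r^2} = 6.674 \times 10^{-11} \cdot \frac{100 \cdot 100}{1^2} \approx 6.7 \times 10^{-7}\,\mathrm{N} \]

    less than a millionth of a newton, hopelessly below anything we could ever feel.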

    That’s where a simulator can help. Generating large massive bodies has never been easier. The simulator then calculates and updates the forces between them in real-time and, besides helping you get a feeling of how gravity works on a larger and more massive scale, also allows you to perform different experiments and demonstrate various phenomena:

    • create your own solar system
    • demonstrate complex interaction patterns of multiple massive bodies
    • collide massive bodies and demonstrate the conservation of momentum
    • display vectors of acceleration due to gravitational forces and observe the acceleration of orbiting bodies
    • observe the first two of Kepler’s three laws of planetary motion
    • demonstrate a gravitational slingshot (gravity assist) for increasing a spacecraft’s velocity, etc.

    More info and purchase:

    • dare 08:48 on 2 Dec. 2010 Permalink

      congrats! does multitouch work? on that flash-based gravity simulator I couldn’t get two planets into a mutual orbit…

    • božo 16:05 on 2 Dec. 2010 Permalink

      so if I understand correctly, they approved the thing – BRAVO!

    • Urban 21:14 on 2 Dec. 2010 Permalink

      thanks, thanks 🙂 it really is a 14-day wait; you can tell everyone piled in with updates after the new OS release 🙂
