10 Jul

New Kitten “Tinker Bell”

Our newest addition to the family: little Tinker Bell, whom we officially adopted on my birthday. This little one came from outside, from a feral mom. She had an upper respiratory infection and her eyes were loaded with junk. We took her to the ER, where she was checked out, loaded up with IV fluids to get her back to health, and with a shot of antibiotics she transformed into one of the most loveable kitties. We debated keeping her, since we already have a two-year-old female cat, Sarah, who at times can be a handful, and we wondered whether the two would get along. Right off the bat the little one loves her adopted older sister: she rubs up against her with her tail and runs over to play, while Sarah is a bit cold toward her. I think Sarah just isn't used to sharing her space with anyone besides the dog; she's been the queen of the house for two years, and sharing it with the little one may prove to be a challenge. I've read online that introducing pets can be tricky, so for the time being we keep the little one isolated in her own room so that Sarah can still function. During this process I've decided to do my best to blog about it, something I wish I had done with Sarah originally, so that I'd have notes on what we did right and wrong with her.

13 Jun

The Android Experiment Trial

[Image: iOS 7 preview screenshot (ios-7-preview-images-0002.PNG)]

So I've been holding off on getting a new phone in the hopes of seeing what Apple will introduce in its next release of iOS and iPhone hardware. This past Monday Apple held its developers conference to showcase upcoming products, including the new cylinder (aka the Mac Pro workstation) and a new MacBook Air. We also got a glimpse of the next OS X release, Mavericks, which brings more iOS features under the hood along with a few minor things. Then came time to reveal the next version of iOS, the operating system that runs on the iPhone. As you can see from the screenshot above, Apple is leaning heavily on flat UI for this version, keeping with the trend of mobile apps that use this style. I for one think it's a totally dumb idea: they are taking design cues from Microsoft and a few other mobile vendors instead of doing what they did in the past, which was be the cool kid in school with the fresh new look; instead they are just copying and altering a few things. Some of the new so-called features were pulled straight from Android, a platform that started out behind iOS but is now leaps ahead of Apple's development; I credit Google, HTC, Motorola and Samsung for springing this forward to take on the mighty Apple giant. From the preview, and from seeing a seeded dev 1 version, I can see Apple ruining this product with these so-called display and UI enhancements.

So the next day I was at a local AT&T store and decided to check out the hype around the Samsung Galaxy S4; all I had heard to date was how cool that phone is and how some iPhone users had started ditching theirs for it. So I picked one up and gave it a whirl to see what all the hype was about. Day 1: I spent close to 4 hours migrating my email, contact lists, calendar settings and a few of my apps. The UI on the Samsung device is great, but when you've operated iOS devices for 5+ years you get used to that UI, which makes migrating a challenge. One feature I did like about the S4 was the screen: that thing is super huge, and the video quality is beautiful, which makes watching videos and looking at images worthwhile.

The main drawback for me was the keyboard. I tried various third-party ones: the Google keyboard, the Samsung one, and lastly SwiftKey, which everyone speaks volumes about, but it still didn't feel right when typing; I spent more time correcting than having it correct for me. I guess it learns as you go and it hadn't figured me out yet. Email with providers like iCloud and Exchange can be a bit of a challenge, and even the supplied client looked confusing to me, while the Gmail client ruled: it worked like the web app and made managing my email account easy. Battery life is quoted as the best on this device, but I didn't see that. It was half drained by mid-morning just from email, Twitter and streaming music over wifi; on the iPhone 4S I would hit half around 2-3 in the afternoon, but with this device it was 10am and I had to plug back in to charge. Another thing that bugged me was how much bloatware Samsung and AT&T load onto this phone, which makes you appreciate the Nexus phones from Google that come with zero bloat, though there's a price tag for that. I'm sure Android users will tell me you can root it and strip out the bloat to get the free space back, but why should I have to do that? It should just be nice and clean for me.

Once day 1 was complete I opted to factory reset it, package it back up and return it. It's a great piece of hardware and I can see the really cool advancements in the Android OS, but the workflow for email and basic things like the calendar didn't work well for me, so I went back to my old 4S. The camera on this phone looks great, but I noticed a ton of my pictures had a lot of background noise while my iPhone takes really stunning images; I'm not sure if it was a settings or lighting issue. The video camera feature was very cool, though, and reminded me of my old Flip 720p. I was able to return the device and pay a restocking fee, but in the end I learned that I had become what I used to make fun of: the typical Apple kool-aid fanboy.


07 May

MySQL Replication with Minimal Downtime Using Hot Copy for Linux

I was experimenting with techniques to initialize MySQL replication for both InnoDB and MyISAM tables without significant downtime. The idea of locking all tables and performing a backup to ship to the replica simply takes far too long. I remembered coming across a utility that is somewhat similar to LVM's snapshots, but what if your system doesn't live on an LVM-based volume? How does one snapshot the MySQL data directory to build a replica for replication? What I found was a tool from Idera called R1Soft Hot Copy for Linux. This nifty little tool hooks itself into the kernel and listens to the disk at the raw block level, allowing you to snapshot the system without the need for LVM.

What differentiates this process from a more standard approach is the use of R1Soft Hot Copy, a tool that facilitates the creation of a snapshot of a block device. When changes to the original device occur, only the differences are placed in the snapshot in a copy-on-write fashion (similar to VSS in Microsoft Windows). This allows an administrator to create a functional, mountable backup of an entire device almost instantly with very little effort.

I’m posting these instructions because I’d like some feedback not only on my adaptation, but also on the initial method. Feel free to use any of this information, but please be careful. It worked for me, but I’m not qualified to write authoritative tutorials on the subject.


Prerequisites and Requirements

I'm going to assume that the reader knows how to set up MySQL replication using the methods outlined in the official documentation.

Also keep in mind that R1Soft Hot Copy is a Linux utility, making this article not directly applicable to other operating systems.

Methods

The cost of not locking tables was a restart of the MySQL service itself on the master, meaning that even read queries momentarily could not be processed. My idea was to instead flush and lock tables in the standard fashion while creating the Hot Copy mount. That should allow read queries to still be processed and connection attempts to succeed. Writes will be blocked, but only briefly, and clients should have an error-free, albeit slower, experience.

Step 1: Install R1Soft Hot Copy

Use the instructions on Idera’s website to install Hot Copy and then run

# hcp-setup --get-module

on the master.

Step 2: Configure master

Enable binary logging on the master server and configure a server id in my.cnf.

log-bin=mysql-bin
server-id=10

On the master create a user specifically to be used for replication.
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'SLAVE_IP_OR_HOSTNAME' IDENTIFIED BY 'slavepass';

Step 3: Create/mount a snapshot

Ensure MySQL has flushed all data to disk, then lock tables so no writes can occur.

mysql> FLUSH TABLES WITH READ LOCK;

Obtain log coordinates. Record the values of the File and Position fields.

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 | 1234     |              |                  |
+------------------+----------+--------------+------------------+

Create and mount the snapshot on the master. Because all tables are locked the coordinates obtained above will be consistent with the data in the snapshot.

# hcp -o /dev/sda2

… where /dev/sda2 is the device containing the filesystem which houses the MySQL databases to be replicated. Watch the output for the resulting mount point. This process should take mere seconds.

Release locks on the tables. This will return operation on the master to normal.

mysql> UNLOCK TABLES;

Step 4: Shut down the slave's mysqld and copy the data

Run these commands on the slave:

# /etc/init.d/mysql stop
# rm -rf /var/lib/mysql
# rsync -avz root@MASTER_IP_OR_HOST:/var/hotcopy/sda2_hcp1/lib/mysql /var/lib/

… where /var/lib/mysql is an example path to MySQL’s data.

Step 5: Unmount the snapshot on the master

# hcp -r /dev/hcp1

Step 6: Configure the slave’s identity and start MySQL

Edit /etc/mysql/my.cnf on the slave and set a server id.

[mysqld]
server-id=20

# /etc/init.d/mysql start

Step 7: Configure and start slave

Now it’s time to point the slave at the master and start replication. The MASTER_LOG_FILE and MASTER_LOG_POS should be set to the File and Position fields recorded in Step 3.

mysql> CHANGE MASTER TO
    ->     MASTER_HOST='MASTER_IP_OR_HOST',
    ->     MASTER_USER='repl',
    ->     MASTER_PASSWORD='slavepass',
    ->     MASTER_LOG_FILE='mysql-bin.000002',
    ->     MASTER_LOG_POS=1234;

mysql> START SLAVE;
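
Once the slave is started, it's worth verifying that replication is actually flowing. In the output of the standard status command below, Slave_IO_Running and Slave_SQL_Running should both report Yes, and Seconds_Behind_Master should drop toward zero:

mysql> SHOW SLAVE STATUS\G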

Conclusion

At this point replication should be running and the only major service interruption was that writes were blocked for a short period on the master.

There’s nothing fundamentally different in the finished product between replication setup in this fashion and a more typical dump-and-copy process. That means monitoring and maintenance should be quite standard.

08 Aug

Cord Cutting Story along with getting NFL Game Pass & Red Zone

You've heard this term recently thanks to the disputes between AMC and Dish Network and between Viacom and DirecTV: "cord cutting."

Definition – What does Cord Cutting mean?

Cord cutting refers to the process of cutting an expensive cable connection in order to change to a low-cost TV subscription, either over-the-air (OTA) free broadcast through an antenna, or over-the-top (OTT) broadcast over the Internet. Cord cutting is a growing trend that is adversely affecting the cable industry.

Netflix, Apple TV and Hulu are some of the popular broadcasting services that encourage cord cutting. The cord cutting concept received a considerable amount of recognition beginning in 2010 as more Internet solutions became available. These broadcasters have convinced millions of cable and satellite subscribers to cut their cords and change to video streaming.

In the last 6 months I've spent most of my free time converting a spare low-powered computer into my home's very own home theater PC and storage server. This has been a trial-and-error process of finding which applications meet the basic requirements I set: easy-to-follow UI, remote-control friendly, HD quality, and easy for my girlfriend to understand. I toyed with Windows Media Center first; it seemed to support my USB remote control, but when it came time to play some MP4s or MKVs I noticed it would attempt to play and then fail. The second attempt was XBMC. While at first it looks like it could be the end-all solution, I learned early on that when building an HTPC you have to organize your media; if it's not organized, you won't be able to enjoy all the features built into XBMC.

Getting the media onto the PC was the first step. I've always been a fan of Usenet, as it has always been a reliable source for obtaining movies, music and TV shows without the hassles of torrents. Looking around various cord-cutting forums and sites like Reddit, I discovered a neat little tool called Sickbeard, which reminds me of a web-based TiVo/PVR scheduler. Once I got that set up and punched in the TV shows I enjoy watching, there were a few I could no longer watch since dropping all premium channels and going with basic over-the-air channels.

Granted, I won't be able to watch a show when it airs, but I can catch up on other things around the house and then watch my shows at a better time. So far things have been great, but one thing I will miss from having satellite TV is the NFL Network & NFL RedZone channel, being a die-hard NFL and NY Jets fan. I had been researching this for a while and discovered by pure accident NFL Game Pass, which the NFL offers to folks in Europe, Asia and South America. Think of it as the DirecTV version of the NFL package, where you're able to watch all your games with features such as RedZone and Quad Screen, which lets you watch 4 different games at the same time.

But there's one catch: you have to live outside of the United States to take advantage of it. It's a paid service, which I have no issue paying for, but due to licensing deals with major broadcasters and DirecTV I can see why the NFL isn't offering this in the US. So how do I work around this? During this summer's Olympics I was super ticked at how NBC was handling its coverage, so I read up on a technique a co-worker mentioned a few years ago, about how he was able to watch out-of-market baseball games using a dedicated server and VPN. I tried this out and checked out the fine folks at StrongVPN; they offer a VPN service with various servers across the globe, which is especially useful these days in countries like China where your internet connection is blocked. I signed up for the yearly standard VPN package, which offers basic PPTP and OpenVPN options, handy with any mobile device or computer.

After signing up I got my welcome email, which included links to download a VPN client for the OpenVPN option along with a username and password for the PPTP option. Following a few of the how-tos on the StrongVPN site, I was up and running on a UK-based VPN and able to watch the BBC iPlayer Olympics coverage, which by far was the best: it was live and let me scroll back to events I wasn't able to watch due to the time differences. This solved the key thing in my NFL dilemma: if I hopped on a German-based VPN I'd be able to get NFL Network, RedZone and my games from the comfort of my home without having to run out and get DirecTV's expensive satellite service. So I went out and bought a prepaid credit card for the amount of my NFL subscription, which was $200 for the year and included archives, RedZone, Quad Screen, NFL Network, condensed games: all the fun stuff the NFL offers with this subscription. I signed up while connected via the German VPN, and a few minutes later I had a welcome email from Game Pass with my username and password. I connected and launched the game player, which has HD-quality streaming; I was streaming NFL Network with no glitches so far, a few buffers, but so far it's good, and I'm looking forward to watching the NFL this year on this platform.


10 Jul

Using ImageMagick to Generate Thumbnails

Having worked with web developers over time, you sometimes need something to perform tasks that would take way too long manually. Imagine having thousands of PDF documents and all of a sudden you need to generate a thumbnail for each of them. Sure, you could open a PDF and take a screenshot of it, which could then be shrunk to a smaller size. Now do that 5,000 times.

If you are riding on a Linux box this is way easier than one might think. All you need for the job is ImageMagick, which can be easily installed if you are on a Debian-based system like Ubuntu. Run the following command on Ubuntu to install ImageMagick.

sudo apt-get install imagemagick

On a RHEL-based system such as CentOS/Fedora you would run the following command to install ImageMagick (note the capitalized package name there).

yum install ImageMagick

The package is really small, no more than about 740 KB. Now let us pretend that we are in a directory with a PDF document and we want to create a thumbnail image of every page in that document.

convert -thumbnail x300 test.pdf test.png

The above example takes our PDF called test.pdf and creates a thumbnail with a height of 300 pixels for every page in the document. The output files auto-increment, so you are left with test-0.png, test-1.png and so on. Putting an x in front of the size means we are specifying the height. We could specify exact dimensions by using the command below:

convert -thumbnail 400x300 test.pdf test.png

With the above command the thumbnails will be 400 pixels wide and 300 pixels high. If you want to specify just a width and keep the height proportional, use the following command.

convert -thumbnail 400 test.pdf test.png

Without an x we are just saying make the width 400 pixels and the height whatever is proportional.

Now let's say we want to create a thumbnail of only the first page, the so-called cover of the PDF. This can easily be done with the next command.

convert -thumbnail 180 test.pdf[0] test.png

The appended [0] means first page. If you wanted the 23rd page you would put 22 in the brackets. This generates a thumbnail from the cover of test.pdf that is exactly 180 pixels wide, with a proportional height.

I hope this helped. convert is a nice tool on Linux if you need to do conversions on the fly. The above commands can easily be put inside a script that loops through all your PDFs, though a full treatment of that is outside the scope of this article.
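
For the impatient, though, here is a minimal sketch of such a loop (my own example, assuming the PDFs sit in the current directory; it reuses the cover-thumbnail form from above):

#!/bin/bash
# Create a 180-pixel-wide cover thumbnail for every PDF in the current directory.
for pdf in *.pdf; do
    # [0] selects the first page; the output name mirrors the PDF name.
    convert -thumbnail 180 "$pdf[0]" "${pdf%.pdf}.png"
done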

10 Jul

Give write access to folders or directories to multiple users

If you have a Linux server where a user's home directory is used to serve files to the web, then all the files under that directory are owned by the user. In order for you to manipulate those folders or directories through your PHP scripts, Apache needs to have permissions on that folder too.

If our folder structure looks like this:

/home/example.com/web/public.html/media

and we want to be able to write to media using PHP, then media needs Apache as the owner of that directory.

sudo chown -R apache:apache /home/example.com/web/public.html/media

If we run the above command we will then be able to write to that folder using PHP. What happens, though, if we want to upload files to that folder using FTP or SFTP? We will get a permission denied error. To get around this issue we can add the user to the Apache group, so that we can still upload while controlling the directory through our scripts. To get this accomplished run the following commands.

Add your user to the apache group

# replace <user> with the account user name
sudo usermod -a -G apache <user>

Change the group of your directory to apache

sudo chgrp -R apache /home/example.com/web/public.html/media

And allow the group to write

sudo chmod -R g+w /home/example.com/web/public.html/media

Now you should be able to write to that directory using PHP, and uploads through FTP or SFTP should work perfectly.
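
One caveat worth knowing: group membership is evaluated at login, so the user may need to log out and back in before the new apache group takes effect. A quick sanity check afterwards might look like this (a sketch; adjust the path to your own setup):

# The group should show as apache and the group-write bit should be set
ls -ld /home/example.com/web/public.html/media
# Then, as the user, confirm a file can be created and removed
touch /home/example.com/web/public.html/media/.writetest && rm /home/example.com/web/public.html/media/.writetest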

10 Jul

Find files that are writable on the system by a user

When on a Linux system, and you want to find out what folders a user has write permissions to, use the following command:

find / -uid 48 -ls 2> /dev/null | grep -v /proc

The -uid switch takes the user id of the user whose files you want to find; anything owned by the user is normally writable by that user. If you do not know a user's uid, you can look it up with the command below.
id -u user

Replace user with the username. If there is a specific reason you are looking for writable folders 😉 then here's a hint: you should always be able to write to the tmp directories.

/tmp
# or
/dev/shm
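
If you want to test actual write permission rather than just ownership, GNU find has a -writable test that is evaluated against the user running the command, so you can run the search as the user in question (a sketch, assuming sudo access and the apache user from above):

# -writable checks the effective permissions of the invoking user
sudo -u apache find / -writable -type d 2> /dev/null | grep -v /proc
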
10 Jul

Create a bash server to run commands over netcat

Create a Bash command server: you can send it scripts or commands to execute.
It lets you launch nc like a daemon, running in the background until you stop it.

while ( nc -l 1025 | bash &> /dev/null ) ; do : ; done &

To send a script or commands from the client to the server, use nc as well, like this:
cat script.sh | nc server 1025
echo "service openvpn restart" | nc server 1025
The loop body itself doesn't do anything (just the : no-op), but you could replace it with something like echo -e "\nCommand received\n" to print an acknowledgement after each command.
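
If you want a record of what the server executes, a hypothetical variant can tee each incoming payload into a log file before piping it to bash (same structure as above; the log path is just an example, and remember this executes anything sent to it, so keep it on trusted networks only):

while ( nc -l 1025 | tee -a /tmp/nc-server.log | bash &> /dev/null ) ; do : ; done &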

09 Jul

1st Post and a Tip: Keeping Files in Sync on Multiple Linux Servers

Welcome to my blog. It's been a hot minute since I rambled about something, so I leave you with a cool system tool/trick of the trade.

Over the years I've always had issues with syncing data between multiple web servers and keeping it all in sync. Many people would say: why not use some type of network storage like NFS/GlusterFS/GFS2 or even S3? But sometimes you just want that fast, direct storage speed. In the past I would rig up inotifyd to listen to a folder and push the changes using Unison or even plain old rsync, which worked well, but I recently came across Csync2 from the fine folks at Linbit, aka the DRBD people. Here's a description of what it does under the hood: "Csync2 keeps a little database (sqlite as default) which contains the state of each file. This means that whenever it gets invoked, it first updates the database, and only starts to connect to the nodes in case any files were added, modified or deleted. A massive win in the number of connections it needs to make to the nodes, as most of the time there won't be any new files. And it's also a lot faster in checking than an rsync."

Installation and configuration

Installation should be easy on most Linux distributions: csync2 is included in the repositories of Debian, Ubuntu, Fedora and Gentoo, and is also available in external repositories for CentOS and Red Hat Enterprise Linux, so in general an install with your package manager should be enough.

For a good starting point on the configuration I suggest reading the Linbit paper about csync2; it will give you all the info you need to manage and configure csync2.

Let's see now what to do once you have the package installed on your nodes. In these examples I'll use the paths of a Debian distribution; if you have a different distribution they could change slightly.

1) Pre-shared Keys

Authentication in Csync2 is performed using IP addresses and pre-shared keys. Each synchronization group (a group of hosts that keep one or more files in sync) in the config file must have exactly one key record specifying the file containing the pre-shared key for that group. It is recommended to use a separate key for each synchronization group and to place a key file only on those hosts which actually are members of the corresponding synchronization group.

The key file can be generated with the following command on your first node:

csync2 -k /etc/csync2.key

2) SSL certificate
Next you need to create an SSL certificate for the local Csync2 server. On your first node run these commands:

openssl genrsa -out /etc/csync2_ssl_key.pem 1024
openssl req -batch -new -key /etc/csync2_ssl_key.pem -out /etc/csync2_ssl_cert.csr
openssl x509 -req -days 3600 -in /etc/csync2_ssl_cert.csr -signkey /etc/csync2_ssl_key.pem -out /etc/csync2_ssl_cert.pem

3) Csync2 configuration file

On your first node create the file /etc/csync2.conf. In this example I want to keep just one directory in sync across 2 servers (node1 and node2):

group mycluster
{
        host node1;
        host node2;

        key /etc/csync2.key;

        include /www/htdocs;
        exclude *~ .*;
}

Host lists are specified using the host keyword. You can either specify the hosts in a whitespace-separated list or use an extra host statement for each host. The hostnames used here must be the local hostnames of the cluster nodes.

4) Now copy all the files from the first node (node1) to the other with:

scp /etc/csync2* node2:/etc/

And restart inetd (or xinetd if you use it) on both nodes with the command:

 /etc/init.d/openbsd-inetd restart
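
If your distribution's package didn't ship an inetd/xinetd entry for csync2, you may need to create one yourself. On xinetd systems the service definition typically looks something like the sketch below (verify the server path for your distro; csync2 listens on TCP port 30865, which the package normally adds to /etc/services):

# /etc/xinetd.d/csync2 (typical entry; paths may vary)
service csync2
{
        disable     = no
        flags       = REUSE
        socket_type = stream
        wait        = no
        user        = root
        server      = /usr/sbin/csync2
        server_args = -i
}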

5) First Sync

Start synchronization first on node1, then on node2; after this you can set up a cronjob to do a periodic sync.

csync2 -xv

If you get conflicts or errors, use the -f option.

This setup is enough to keep 2 nodes and 1 directory in sync; put something like this in the crontab on both nodes:

*/2 * * * * csync2 -x


Actions following a sync

Each synchronization group may have any number of action sections. These action sections are used to specify shell commands which should be executed after a file matching any of the specified patterns is synchronized. The exec statement is used to specify the command which should be executed. Note that if multiple files matching the pattern are synced in one run, this command will only be executed once.

The special token %% in the command string is substituted with the list of files which triggered the command execution.

Example:

group g1 {
  host node1 node2;                          # hosts list
  key /etc/csync2.key_g1;                  # pre-shared key

  include /etc/xinetd.d;

  action {                                 
    pattern /etc/xinetd.d;
    exec "/etc/init.d/xinetd restart";
    logfile "/var/log/csync2_action.log";
  }
}

In this example, every time a file under /etc/xinetd.d changes we run the command /etc/init.d/xinetd restart.
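
To put the %% token described above to use, a hypothetical action could record exactly which files triggered it, for example by sending the list to syslog:

  action {
    pattern /www/htdocs;
    exec "logger 'csync2 synced: %%'";
    logfile "/var/log/csync2_action.log";
  }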

Common tasks of csync2

These are some common options and tasks that you can use from the command line:

Synchronize

csync2 -x

Force the local file to be newer (has to be followed by csync2 -x for synchronization).

csync2 -f filename

Test if everything is in sync with all peers.

csync2 -T

As -T, but print the unified diffs.

csync2 -TT

Verbose flag for all commands: -v, e.g.

csync2 -xv

Dry-run flag for all commands: -d, e.g.

csync2 -xvd

Conclusions

Csync2 is a great tool if you want to keep filesystems in sync asynchronously. There are many other options, like declaring a host as slave-only, or enabling or disabling SSL for the connections between the nodes.