This is a Python-based command line interface for Rackspace Cloud Files. It provides a nice interactive interface to manage your Cloud Files from Linux or Windows machines using Python. I have had this code on GitHub for quite some time now, but found out that not many people knew about it, so I decided to write this quick post so that it shows up in Google searches and people can use it.
You can grab the code from here:
I know we can always use the Python API commands and curl to list things, but this tool might be helpful for customers who have no programming knowledge, and they can also call the upload or download scripts directly from a cron job, etc.
Current Implemented Functions:
delete container (empty containers only)
container info – gives information about a container, such as total size, publish status, and CDN URL.
root@ssidhu-dv6:/python_cf# python cf.py
Login with credentials from ~/.pycflogin file? [yes/no] yes
-Info- Logging in as sandeepsidhu
Interactive command line interface to cloud files
CF>>list
.CDN_ACCESS_LOGS
cloudservers
images
sand
CF>>list sand
New folder/New Text Document.txt
cloudfuse
cloudfuse-0.1.tar.gz
cloudfuse.tar.gz
cloudfuse_1.0-1_amd64.deb
file1
https_webpage_status.png
CF>>quit
root@ssidhu-dv6:/python_cf#
I have written it in a modular way, so each of those functions can be called individually without running the interactive interpreter. If somebody wants to just grab a list of files in a container, they can call the individual function file directly. This will help anyone who wants to write their own scripts.
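As an aside, the tool reads its login details from a ~/.pycflogin file. The exact format of that file isn't shown in this post, so the helper below is a hypothetical sketch that assumes a simple key=value layout (username and api_key lines); your own wrapper scripts could do something similar instead of prompting interactively.

```python
# Hypothetical sketch: load Cloud Files credentials from a small
# key=value file, similar in spirit to the ~/.pycflogin file cf.py uses.
# The real format of ~/.pycflogin may differ -- this layout
# (username=... and api_key=... lines) is an assumption for illustration.

def load_credentials(path):
    """Return (username, api_key) parsed from a key=value credentials file."""
    creds = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            key, _, value = line.partition("=")
            creds[key.strip()] = value.strip()
    return creds["username"], creds["api_key"]
```

A wrapper script could then pass these values straight to the login routine and call the listing function without any interactive prompt.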
root@ssidhu-dv6:/python_cf# python cf_list_containers.py
-Info- Logging in as sandeepsidhu
.CDN_ACCESS_LOGS
cloudservers
images
sand
root@ssidhu-dv6:/python_cf# python cf_list_files.py sand
-Info- Logging in as sandeepsidhu
New folder/New Text Document.txt
cloudfuse
cloudfuse-0.1.tar.gz
cloudfuse.tar.gz
cloudfuse_1.0-1_amd64.deb
file1
https_webpage_status.png
root@ssidhu-dv6:/python_cf#
Some ideas about next features to add:
delete container along with all its files (non-empty containers)
file info – will provide all the info about a file: size, CDN URL, metadata, etc.
purge from CDN
upload multiple directories, handling of pseudo directories.
use of servicenet option
If anybody wants to add any of the above functions, please do so and send me a pull request so that we can share it with other people.
This article explains how to use the Rackspace Cloud Files API with PHP using the php-cloudfiles bindings. All this information is available online, spread across Rackspace KB articles, GitHub, and the API docs, but for somebody just starting with Cloud Files it is hard to gather it all without wasting a few days here and there. So, the objective of this article is to provide:
- All the available links to Cloud Files documentation, API, and PHP bindings.
- PHP Cloud Files binding installation
- PHP Cloud Files binding coding examples
Cloud Files Documentation links:
- Cloud Files FAQ: answers to questions like what Cloud Files is and what you can and cannot do with it. http://www.rackspace.com/knowledge_center/index.php/Cloud_Files_FAQ
- API Introduction: this PDF explains the Cloud Files product, its use cases, and its features. http://docs.rackspace.com/files/api/cf-intro-latest.pdf
- API Developer Guide: lists the RESTful API of Cloud Files, i.e. how to query/store/retrieve data from Cloud Files using a programming language such as PHP, ASP, .NET, Python, etc. http://docs.rackspace.com/files/api/cf-devguide-latest.pdf
In addition to all of the above, Rackspace also has different language bindings which make it easy to interact with the API. In this article we will concentrate on the php-cloudfiles language bindings.
PHP Cloud Files bindings on github.com https://github.com/rackspace/php-cloudfiles
PHP Cloud Files binding installation
Download the package from there using the Download link, copy it to your cloud server, and then extract it.
[root@web01 ~]# ls -la | grep cloudfiles
-rw-r--r-- 1 root root 496476 Oct 1 16:09 rackspace-php-cloudfiles-v1.7.9-0-gb5e5481.zip
[root@web01 ~]#
[root@web01 ~]# unzip rackspace-php-cloudfiles-v1.7.9-0-gb5e5481.zip
[root@web01 ~]#
[root@web01 ~]# ls -la | grep cloudfiles
drwxr-xr-x 6 root root   4096 Oct 1 16:36 rackspace-php-cloudfiles-b5e5481
-rw-r--r-- 1 root root 496476 Oct 1 16:09 rackspace-php-cloudfiles-v1.7.9-0-gb5e5481.zip
[root@web01 ~]#
Once extracted, the folder should look something like this:
[root@web01 rackspace-php-cloudfiles-b5e5481]# ls
AUTHORS    cloudfiles_exceptions.php  cloudfiles.php  debian  phpdoc.ini   README  tests
Changelog  cloudfiles_http.php        COPYING         docs    phpunit.xml  share
[root@web01 rackspace-php-cloudfiles-b5e5481]#
Now we need to copy the cloudfiles API files to a place where they can be included in your PHP files; in other words, the source code files of the API should be in the PHP include path.
As per the README file, following are the requirements for using API source code files:
Requirements
;; ------------------------------------------------------------------------
;; [mandatory] PHP version 5.x (developed against 5.2.0)
;; [mandatory] PHP's cURL module
;; [mandatory] PHP enabled with mbstring (multi-byte string) support
;; [suggested] PEAR FileInfo module (for Content-Type detection)
;;
You can check all these with the following commands:
[root@web01 ~]# php -v
PHP 5.1.6 (cli) (built: Nov 29 2010 16:47:46)
Copyright (c) 1997-2006 The PHP Group
Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies
[root@web01 ~]#
Let’s check if curl is installed:
[root@web01 ~]# php -m | grep curl
curl
[root@web01 ~]#
Now let’s check mbstring
[root@web01 ~]# php -m | grep mbstring
[root@web01 ~]#
It looks like we don’t have mbstring installed, so let’s install it as well.
[root@web01 ~]# yum install php-mbstring
Similarly for FileInfo
[root@web01 ~]# php -m | grep file
[root@web01 ~]#
[root@web01 ~]# yum install php-pecl-Fileinfo
[root@web01 ~]# php -m | grep fileinfo
fileinfo
[root@web01 ~]#
[root@web01 ~]# cd rackspace-php-cloudfiles-b5e5481/
[root@web01 rackspace-php-cloudfiles-b5e5481]#
Remember, you might have to change the folder name to reflect your own copy of the API.
Now that we are in the source directory of php-cloudfiles, let's move the required files to the right place. Run the following commands to copy them to /usr/share/php, which is part of the PHP include path.
[root@web01 rackspace-php-cloudfiles-b5e5481]# mkdir /usr/share/php
[root@web01 rackspace-php-cloudfiles-b5e5481]# cp cloudfiles* /usr/share/php/
After copying the php-cloudfiles API binding files into the PHP include path, let's check to make sure everything is set up OK. We will create a simple PHP file and include the php-cloudfiles binding file in it.
[root@web01 rackspace-php-cloudfiles-b5e5481]# cd /var/www/html/
[root@web01 html]# touch cfcheck.php
Open your favorite text editor, e.g. vi, and type the following into the cfcheck.php file:
<?php require('cloudfiles.php'); ?>
Save the file cfcheck.php and run this command from the prompt:
[root@web01 html]# php cfcheck.php
[root@web01 html]#
Check the error state of the last command. It should be 0.
[root@web01 html]# echo $?
0
[root@web01 html]#
If you are returned to the prompt with no errors, the PHP API is installed.
If you get an error like this:
PHP Fatal error: require(): Failed opening required 'cloudfiles.php' (include_path='.;C:\php5\pear') in cfcheck.php on line 1
Then you do not have the files in the right place. The error prints out the include_path, so make sure you have the files located there. Alternatively, you can change the include path in php.ini, but that is beyond the scope of this tutorial.
PHP Cloud Files binding coding examples
Okay, I hope by now you have the API installed. If yes, let's move on to actually trying some PHP Cloud Files code. If you are having problems installing the API, you should troubleshoot the above first before moving on.
For an easy access to php-cloudfiles documentation, run the following commands:
[root@web01 html]# cd rackspace-php-cloudfiles-b5e5481/
[root@web01 rackspace-php-cloudfiles-b5e5481]# cp -R docs/ /var/www/html/
[root@web01 rackspace-php-cloudfiles-b5e5481]# service httpd start
After this you can access the documentation at the /docs URL of your server.
If you are a developer, you can easily check the documentation and work from there, but let me put some examples here for non-developers and newbies.
Basically, whenever you put the require('cloudfiles.php'); line in any of your PHP files, the following classes become available to your PHP code:
- CF_Authentication: class for handling Cloud Files authentication; call its authenticate() method to obtain authorized service URLs and an authentication token.
- CF_Connection: Class for establishing connections to the Cloud Files storage system.
- CF_Container: Container operations
- CF_Object: Object operations
Each class comes with its own methods and properties. Methods can be called on objects of those classes, and their properties can be set and queried. For example, the CF_Authentication class provides methods for authenticating against Cloud Files using the username and API key of your Cloud Files account. The CF_Connection class provides a connection object, which has methods like create_container and delete_container. Let's look at the following code example.
Continuing on our previous example of cfcheck.php file, let’s add some more code into it.
[root@web01 html]# vi cfcheck.php
<?php
require('cloudfiles.php');

$username = 'your cloud user name';
$api_key  = 'your cloud api key';

$auth = new CF_Authentication($username, $api_key);
$auth->authenticate();

if ( $auth->authenticated() )
    echo "CF Authentication successful \n";
else
    echo "Authentication failed \n";
?>
[root@web01 html]# php cfcheck.php
CF Authentication successful
[root@web01 html]#
If the above command runs without producing any errors, it means you have successfully authenticated against Cloud Files using the above PHP code. In the code, we first created a CF_Authentication object named $auth by passing it the username and API key as variables. Then we ran the authenticate() method of the CF_Authentication class, which actually authenticates against Cloud Files. Similarly, the authenticated() method returns a boolean value (true or false) depending on the state of the $auth object. If you are having problems understanding this, you should probably get a developer involved; but if you are still following so far and it's working fine, let's move forward and add some more lines to our code.
<?php
require('cloudfiles.php');

$username = 'your cloud user name';
$api_key  = 'your cloud API key';

$auth = new CF_Authentication($username, $api_key);
$auth->authenticate();

if ( $auth->authenticated() )
    echo "CF Authentication successful \n";
else
    echo "Authentication failed \n";

$conn = new CF_Connection($auth);
$container_list = $conn->list_containers();
print_r($container_list);
?>
[root@web01 html]# php cfcheck.php
CF Authentication successful
Array
(
    [0] => .CDN_ACCESS_LOGS
    [1] => cloudservers
    [2] => images
    [3] => sand
)
[root@web01 html]#
The output will differ depending on the containers you have in your Cloud Files account and their names; the above are the containers in my account. It might not list any containers if you don't have any yet, but that's okay, as we will create a container in the next lines. As you can see, we added the following lines to the code:
$conn = new CF_Connection($auth);
$container_list = $conn->list_containers();
print_r($container_list);
We created a new CF_Connection object named $conn, and using this object we got a list of all the containers with the list_containers() method. list_containers() returns an array containing the names of all the containers in your Cloud Files storage; in this example I have used the print_r function to print the array in a human-readable format.
Well, I could add more code for creating containers, deleting containers, and getting container details like the public URI, ServiceNet, etc., but that would be going too far and seems redundant, as similar examples are self-explanatory in the API documentation.
But I do hope this article gives you a head start and helps you find the correct documentation and resources for your Cloud Files API and PHP coding. Do let me know how you liked it, and if there were any errors or typos, I would like to hear back and improve it.
Every now and then you hit Enter on your keyboard and the next second you realize your mistake: you just deleted some files. Immediately your mind starts thinking about your backups, the changes you have made since the last available backup, and all that...
Well, today I faced the challenge of recovering some accidentally deleted files from one of my cloud servers. As we all know, data never actually gets deleted from the hard disk immediately; it gets unlinked from the file system table, and then those blocks eventually get overwritten by other data. So, as soon as I realized that I had accidentally deleted the files, I booted the Cloud Server into rescue mode. I did this so that the original HDD of the cloud server would not be mounted read-write and would not be in use. Then I downloaded my good old friend TestDisk and compiled it, but bummer, it doesn't recognize the HDD in cloud servers. Don't ask me why, but it doesn't; maybe it doesn't have drivers for handling virtual HDDs. So, what next?
With some googling I found this link: http://carlo17.home.xs4all.nl/howto/undelete_ext3.html. I really recommend that you give it a good read; the developer explains very well how the program actually works. The developer of the ext3grep program wrote it to recover his own deleted files, and in his own words, you shouldn't "expect it to work out-of-the-box".
Anyway, back to the topic. I downloaded the code from http://code.google.com/p/ext3grep/. There were a few obstacles in compiling it, using it, and finally recovering the deleted files, so I'll explain the steps and the additional packages you will need to compile ext3grep and run it successfully.
Download the latest ext3grep code:
# wget http://ext3grep.googlecode.com/files/ext3grep-0.10.2.tar.gz
Extract the contents of the tar.gz file
# tar -xzf ext3grep-0.10.2.tar.gz
Install the following dependency packages; these are needed for compiling the ext3grep code.
# yum install gcc cpp
# yum install gcc-c++ libstdc++
# yum install e2fsprogs-devel
Now go into the extracted ext3grep directory and run the following commands:
# cd ext3grep-0.10.2
# ./configure
The ./configure command should finish without any errors, with the following lines at the bottom of the output:
[...]
configure: creating ./config.status
config.status: creating src/timestamp-sys.h
config.status: sys.h is unchanged
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating config.h
config.status: config.h is unchanged
config.status: executing depfiles commands
If the ./configure command runs without any errors, all the required dependencies are met. If it does report a problem, look at the error and improvise: check what it's looking for and install those dependencies.
Next, please run the following commands
# make
# make install
This will compile the ext3grep program, which you can use to recover the files.
Now, let’s talk about few things you need to know before starting to use the program. I have read through the long article and will try to spit out few main points about the program.
The program assumes a spike of recently deleted files (shortly before the last unmount). It does NOT deal with a corrupted file system, only with accidentally deleted files.
Also, the program is beta: its development was done as a hack, solely aimed at recovering the developer's own data. He fixed a few bugs, but overall the program isn't as robust as it could be. Therefore, it is likely that things might not work entirely out of the box for you. The program is stuffed with asserts, which makes it likely that if something doesn't work for you, the program will abort instead of trying to gracefully recover. In that case you will have to dig deeper and finish the program yourself, so to say.
The program only needs read access to the file system with the deleted files: it does not attempt to recover the files in place. Instead, it allows you to make a copy of deleted files, writing them to a newly created directory tree in the current directory (which obviously should be on a different file system). All paths are relative to the root of the partition; thus, if you are analysing a partition /dev/md5 which was mounted under /home, then /home is unknown to the partition and to the program and therefore not part of the path(s). Instead, a path will be something like, for example, "carlo/c++/foo.cc", without a leading slash. The partition root (/home in the example) is the empty string, not '/'.
Just one more thing before you attempt to recover your files: the ext3grep program expects to find a lost+found directory in the file system, and since our /dev/sda1 is not mounted in rescue mode, there is no lost+found directory. The workaround is to mount /dev/sda1, create a new directory inside it named lost+found, and unmount it again.
Now, for recovering files, ext3grep provides many command line options. It allows you to locate the superblock and read the superblock information. It can show you which inode resides in which block group. It can even print the whole contents of a single inode, like all blocks linked to that inode and other data about it. You can read more in the original ext3grep article mentioned at the start of this section.
Enough about theory, let’s try some commands.
# IMAGE=/dev/sda1
# ext3grep $IMAGE --superblock | grep 'size:'
Block size: 4096
Fragment size: 4096
Using the options --ls --inode $N, ext3grep lists the contents of each directory block of inode N. For example, to list the root directory of a partition:
# ext3grep $IMAGE --ls --inode 2
It is also possible to apply filters to the output of --ls. An overview of the available filters is given in the output of the --help option:
# ext3grep $IMAGE --help
[...]
Filters:
  --group grp        Only process group 'grp'.
  --directory        Only process directory inodes.
  --after dtime      Only entries deleted on or after 'dtime'.
  --before dtime     Only entries deleted before 'dtime'.
  --deleted          Only show/process deleted entries.
  --allocated        Only show/process allocated inodes/blocks.
  --unallocated      Only show/process unallocated inodes/blocks.
  --reallocated      Do not suppress entries with reallocated inodes.
                     Inodes are considered 'reallocated' if the entry is
                     deleted but the inode is allocated, but also when the
                     file type in the dir entry and the inode are different.
  --zeroed-inodes    Do not suppress entries with zeroed inodes. Linked
                     entries are always shown, regardless of this option.
  --depth depth      Process directories recursively up till a depth of 'depth'.
[...]
$ ext3grep $IMAGE --restore-file 'path/of/deleted/file'
The above command will create a RESTORED_FILES directory in your current directory and store the recovered files there (the path given to --restore-file is relative to the partition root, as described above).
Well, there are a lot of other commands and options for manually recovering small but important files, but I have to cut this article short, as putting all of that here would just duplicate things. The above instructions should be good enough for recovering one or two directories or some important files. Do let me know if I have missed some important bits, and I'll update it.
This article is about setting up high availability on CentOS using the heartbeat software. We will have two web servers (named web01 and web02 in this article). Both of these servers will share an IP (virtual IP) between them; this virtual IP will be active on only one web server at any time. So, it is an Active/Passive high-availability setup, where one web server is active (has the virtual IP) and the other host is in passive mode (waiting for the first server to fail). All your web requests are directed to the virtual IP address through DNS configuration. Both servers will have the heartbeat package installed and configured; this heartbeat service is used by each server to check whether the other box is active or has failed. So, let's get on with it. I'm going to use Rackspace cloud servers to configure it.
Creating the cloud servers and shared IP:
Log in to your Rackspace Cloud Control Panel at https://manage.rackspacecloud.com and create two CentOS 5.5 Cloud Servers. Choose the configuration which suits your resource requirements, and give them descriptive names so that you can easily identify them, e.g. web01 and web02. Once you have your two Cloud Servers created, you will have to create a support ticket to get a shared IP for your cloud servers, as mentioned at this link: cloudservers.rackspacecloud.com/index.php/Frequently_Asked_Questions#Can_I_buy_extra_IPs.3F
Installing heartbeat software:
Note: All the following commands need to be run on both cloud servers (e.g. web01 and web02).
You will have to install the heartbeat packages to set up heartbeat between the two cloud servers for monitoring.
[root@ha01 /]# yum update
[root@ha01 /]# yum install heartbeat-pils heartbeat-stonith heartbeat
Once all the above packages are installed, you can confirm them by running the following command:
[root@ha01 /]# rpm -qa | grep heartbeat
heartbeat-pils-2.1.3-3.el5.centos
heartbeat-stonith-2.1.3-3.el5.centos
heartbeat-2.1.3-3.el5.centos
[root@ha01 /]#
First, we need to copy the sample configuration files from the /usr/share/doc/heartbeat-2.1.3 directory to the /etc/ha.d directory:
[root@ha01 ha.d]# cd /usr/share/doc/heartbeat-2.1.3/
[root@ha01 heartbeat-2.1.3]# cp ha.cf authkeys haresources /etc/ha.d
[root@ha01 heartbeat-2.1.3]# cd /etc/ha.d/
[root@ha01 ha.d]# ls
authkeys  ha.cf  harc  haresources  rc.d  README.config  resource.d  shellfuncs
[root@ha01 ha.d]#
Next, we need to populate the authkeys file with an MD5 key. You can generate the key with the following command:
[root@ha01 ha.d]# dd if=/dev/urandom bs=512 count=1 2>/dev/null | openssl md5
ea6cdc1133c424e432aed155dd48a49d
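Incidentally, the dd/openssl pipeline above just computes an MD5 digest of 512 random bytes. If you prefer, the same kind of key can be generated with a few lines of Python; this is a functionally equivalent sketch, not something heartbeat itself requires:

```python
# Produce a 32-character MD5 hex key from 512 random bytes, equivalent to:
#   dd if=/dev/urandom bs=512 count=1 2>/dev/null | openssl md5
import hashlib
import os

def generate_authkey(nbytes=512):
    """Return an MD5 hex digest of `nbytes` of kernel randomness."""
    return hashlib.md5(os.urandom(nbytes)).hexdigest()

print(generate_authkey())
```

Either way, what matters is simply that you end up with a random hex string to paste into the authkeys file.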
Now we need to enter the key into “authkeys” file, so it looks like following.
[root@ha01 ha.d]# cat authkeys
auth 1
1 md5 a77030a32d0cc2b6cac31f9cddfe4b09
Next, we need to configure ha.cf and add/update the following parameters in it with appropriate values. Change the host names in the node parameters to match your own cloud server host names, and on each server set the ucast line to the private IP address of the other node (the example below is for web02, which points at web01's private IP):

debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 10
udpport 694
bcast eth1
ucast eth1 <private IP address of web01>
auto_failback on
node web01
node web02
Next, we need to configure haresources and add resources into it. The haresources file contains a list of resources that move from machine to machine as nodes go down and come up in the cluster. Do not include any fixed IP addresses in this file.
Note: The haresources file MUST BE IDENTICAL on all nodes of the cluster.
The node name listed in front of the resource group information is the name of the preferred node to run the service. It is not necessarily the name of the current machine. In the example below, I have chosen web01 as the preferred node to run the httpd service, but if web01 is not available then the httpd service will be started on web02. Whether the service moves back to web01 once it becomes available again is controlled by the auto_failback on setting in the ha.cf file.
So, add the following line to the haresources file on both servers.
web01 <shared IP address>/24/eth0 httpd
Starting the heartbeat service:
Now, let’s start the heartbeat service on both the nodes using following command
[root@web01 /]# chkconfig heartbeat on
[root@web01 /]# service heartbeat start
Starting High-Availability services:
2011/04/27_08:16:04 INFO: Resource is stopped
[ OK ]
Now if you check httpd service status, it should be running. And your shared IP address should be up on the web01 node.
[root@web01 ~]# service httpd status
httpd (pid 23938) is running...
The ifconfig -a command shows all the configured IP addresses. By running ifconfig -a on web01 you can confirm that the virtual IP address is up and accessible on it.
Testing the failover using heartbeat service:
Let's test the high availability. Shut down the web01 node using the halt command. The virtual IP address and httpd service should automatically fail over to web02.
- Sandeep Sidhu
This is the starting page for the high availability and load balancing articles I'm going to publish next. There are many ways to achieve high availability (HA) for your website on cloud servers. Let me outline them to give you a clear picture so that you can choose whichever method best suits your requirements.
1. High Availability
Example: You will have two web servers (master and slave). High availability is set up using heartbeat in Active/Standby mode. There is a virtual shared IP address between them, which will be active on one of them at a time.
In this setup, you will have to keep your database on a separate server. Master and slave servers should be able to communicate with database server independently without any dependency on each other.
Here is the link to article explaining configuration of basic Active/Passive high availability using heartbeat.
2. Load balancing (still being written)
Example: You will have one load balancer server. A load balancing service runs on it, listens for HTTP traffic, and internally forwards that traffic to a number of web servers. So, you can have N+1 web servers behind this load balancer, and the load balancing service will divide the load between them.
3. Load balancing + High Availability (still being written)
Example: You will have two load balancer servers (master and slave), with a virtual shared IP address between them that is active on one of them at a time. The load balancing service listens on this shared IP and internally forwards HTTP traffic to a number of web servers. So, you can have N+1 web servers, and the load balancing service will divide the load between them.
This article is about setting up MySQL master-master replication between two cloud servers. The operating system I'm going to use is the Debian 5 (Lenny) Rackspace cloud base image.
We will have two cloud servers, named debian501 and debian502, during this exercise. Both servers have two IP addresses (one public, one private). We will configure replication over the private IP interface so that we don't incur any bandwidth charges.
Creating the cloud servers:
You need to create two Linux cloud servers using the Debian 5 image. You can create them by logging into your cloud control panel and spinning up two cloud servers. Choose the RAM configuration depending on the requirements of your database. Name the servers so that you can easily identify them; for my exercise I have named them debian501 and debian502. All the commands below I'm running as the root user.
We need to install MySQL on both Debian cloud servers. Before installing MySQL, we need to run a few extra commands needed on any freshly installed Debian.
Update the package database and set up locales:

# aptitude update
# aptitude install locales
# dpkg-reconfigure locales
The dpkg-reconfigure locales command will bring up a locale settings window where you can choose the locales for your system depending on your country and region. In my case I have chosen "en_GB.UTF-8".
Now, you can run the following commands to get the MySQL installed:
#aptitude install mysql-server mysql-client libmysqlclient15-dev
Enabling the replication
After this we need to make configuration changes on each of the servers to enable replication.
First on debian501 server
We need to create the database which we will set up for replication, and we also need to create a replication username and password. Run the following commands to set it up, changing all the values as per your needs.
# mysql -u root -p
mysql>

(Log in to MySQL with the root password you set during installation; you will land at the mysql> prompt.)
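The SQL for this step appears to have been truncated in the original post at the mysql> prompt above. A typical sequence for MySQL 5.x looks something like the following, where exampledb, repluser, and replpassword are placeholder names you should change to your own values:

```sql
mysql> CREATE DATABASE exampledb;
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%' IDENTIFIED BY 'replpassword';
mysql> FLUSH PRIVILEGES;
```

Run the same commands on the other server as well, since in a master-master setup each node also acts as a slave of the other.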
Open the file /etc/mysql/my.cnf and create/update following entries:
bind-address = 0.0.0.0
server-id = 1
log-bin = /usr/local/mysql/var/bin.log
log-slave-updates
log-bin-index = /usr/local/mysql/var/log-bin.index
log-error = /usr/local/mysql/var/error.log
relay-log = /usr/local/mysql/var/relay.log
relay-log-info-file = /usr/local/mysql/var/relay-log.info
relay-log-index = /usr/local/mysql/var/relay-log.index
auto_increment_increment = 10
auto_increment_offset = 1
master-host = [private IP address of debian502]
master-user = [replication username]
master-password = [replication password]
replicate-do-db = [database name to be replicated]
Second on debian502 server
Open the file /etc/mysql/my.cnf and create/update following entries:
bind-address = 0.0.0.0
server-id = 2
log-bin = /usr/local/mysql/var/bin.log
log-slave-updates
log-bin-index = /usr/local/mysql/var/log-bin.index
log-error = /usr/local/mysql/var/error.log
relay-log = /usr/local/mysql/var/relay.log
relay-log-info-file = /usr/local/mysql/var/relay-log.info
relay-log-index = /usr/local/mysql/var/relay-log.index
auto_increment_increment = 10
auto_increment_offset = 2
master-host = [private IP address of debian501]
master-user = [replication username]
master-password = [replication password]
replicate-do-db = [database name to be replicated]
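The auto_increment_increment and auto_increment_offset settings above are what keep AUTO_INCREMENT values from colliding when both masters accept writes: each server counts in steps of 10 starting from its own offset. A small Python sketch of the resulting ID sequences (illustration only, not part of the setup):

```python
# With auto_increment_increment = 10, each master hands out AUTO_INCREMENT
# IDs in steps of 10 starting at its own auto_increment_offset, so the
# two servers can insert rows concurrently without ever minting the same ID.

def id_sequence(offset, increment=10, count=5):
    """First `count` AUTO_INCREMENT values a server with this offset assigns."""
    return [offset + i * increment for i in range(count)]

debian501_ids = id_sequence(offset=1)  # auto_increment_offset = 1
debian502_ids = id_sequence(offset=2)  # auto_increment_offset = 2

print(debian501_ids)  # [1, 11, 21, 31, 41]
print(debian502_ids)  # [2, 12, 22, 32, 42]
```

An increment of 10 also leaves room to add more masters later (offsets 3 through 10) without reconfiguring the existing nodes.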
Now, restart MySQL on both servers. If the service restart fails on either server, check the /var/log/mysql/error.log file for errors and review the configuration for typos, etc.
Testing the scenarios:
To test our replication setup, we can create the database specified in the configuration above (replicate-do-db), as well as a test table, on one of the nodes, and watch the log files in the /var/log/mysql directory. Any and all database changes should be replicated to the other server immediately.
mysql> create database [your-db-name];
mysql> use [your-db-name]
mysql> create table foo (id int not null, username varchar(30) not null);
mysql> insert into foo values (1, 'bar');
An additional test is to stop the MySQL service on debian502, make database changes on debian501, and then start the service on debian502 once again. The debian502 MySQL service should sync up all the new changes automatically.
You should also consider changing the default binary log rotation values (expire_logs_days and max_binlog_size) in the /etc/mysql/my.cnf file, as by default all binary logs are kept for 10 days. If you have a high transaction count in your database and application, this can cause significant hard disk usage for logs. So, I would recommend changing those values as per your server backup policies. For example, if you have daily backups of your MySQL node, it makes no sense to keep 10 days' worth of binary logs.
This article is about setting up automatic MySQL server backups from your Windows Cloud Server to your Rackspace Cloud Files container. There are three main steps to set it up.
1. Download and build a small piece of software, which will be used to upload a file to Cloud Files.
2. Create a .bat file which will take the backup of MySQL server database and upload database backup file to Cloud Files using the software we create in step 1.
3. Create a scheduled task in Windows to run the above .bat file we created in step 2 at scheduled date and time.
1. Download and build the upload software:
We need to download a small piece of C# software and compile it on the Windows Cloud Server. Follow these easy steps:
- Download the software from this link.
- After downloading extract the software into your C: drive, e.g. c:/upload-to-cf-cs-0.1/
- Now download and install nant from http://nant.sourceforge.net/.
- Once you have successfully installed nant, open a command prompt, go to c:/upload-to-cf-cs-0.1, and run the nant.exe command as shown in the following screenshot.
The above command compiles a binary file, upload-to-cf.exe. We will use this binary to upload our backup files to Cloud Files. You can test it using the following syntax:
upload-to-cf.exe [cloud username] [cloud api_key] [container name] [file path]
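If you ever want to drive the upload from a script rather than a .bat file, the same argument order can be assembled programmatically. A minimal Python sketch follows; the exe path, credentials, and file name are placeholders, not real values:

```python
# Build the argv list for one upload-to-cf.exe call, matching the syntax:
#   upload-to-cf.exe [cloud username] [cloud api_key] [container name] [file path]
import subprocess

def build_upload_cmd(username, api_key, container, file_path,
                     exe=r"c:\upload-to-cf-cs-0.1\upload-to-cf.exe"):
    """Return the command list for uploading one file to Cloud Files."""
    return [exe, username, api_key, container, file_path]

cmd = build_upload_cmd("clouduser", "0123abcd", "backups", r"c:\backup.Mon.sql")
# subprocess.run(cmd, check=True)  # uncomment on the server to actually upload
```

Passing the arguments as a list (rather than one shell string) avoids any quoting problems with paths that contain spaces.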
2. Create a .bat file for taking backups:
Next, we need to create a .bat file to automate the MySQL database backup procedure. We want to write the MySQL database backup to a date-stamped file. There are several ways to do this, but the easiest is through a batch file. The command we'll be using is the following:
c:\\bin\mysqldump -u[user] -p[password] --result-file="c:\\backup.%DATE:~0,3%.sql" [database name]
Replace the values for username, password, and database name with your MySQL information. Try this command at a normal command prompt and make sure it works without any problems before using it in a .bat file. The above command should produce a .sql file containing a full dump of the MySQL database named at the end of the command.
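The %DATE:~0,3% substring trick stamps the file with the three-letter day name (locale-dependent on Windows), so the backups rotate weekly, with each weekday overwriting last week's file of the same name. The equivalent naming logic in Python, as a sketch:

```python
# Mimic the batch file's %DATE:~0,3% date stamp: a three-letter weekday
# abbreviation in the backup file name, giving seven rotating files.
import datetime

def backup_filename(day=None):
    """Return a weekday-stamped backup name like 'backup.Mon.sql'."""
    if day is None:
        day = datetime.date.today().strftime("%a")  # e.g. 'Mon'
    return "backup.{0}.sql".format(day)

print(backup_filename("Mon"))  # backup.Mon.sql
```

This gives you at most seven backup files at any time, which keeps disk and Cloud Files usage bounded without any explicit cleanup step.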
Once you are satisfied that the above command works without any problems, go ahead and create a .bat backup file (e.g., mysql-backup.bat). It can look like this:
Note: You need to replace the values in brackets with the actual values for your system.
@echo off
echo "Running dump..."
c:\\bin\mysqldump -u[user] -p[password] --result-file="c:\\backup.%DATE:~0,3%.sql" [database name]
echo "MySQL dump done!"
echo "Uploading to cloud files"
upload-to-cf.exe [cloud username] [cloud api_key] [container name] "c:\\backup.%DATE:~0,3%.sql"
echo "backup file uploaded to cloud files!"
Now you should be able to run mysql-backup.bat from a command prompt. If it works, you should see a success message and a file should be generated in the directory you specified. If not, make sure you’ve got the correct username, password, and database and try it again.
3. Fully automate, scheduled task:
So we have our mysql-backup.bat file working; it creates a backup of our MySQL database and uploads it to Rackspace Cloud Files using upload-to-cf.exe. The next step is to create a Windows Scheduled Task so that all of the above happens automatically at a scheduled date and time without any human intervention. Follow the steps below to set up a scheduled task in Windows.
a) Click Start, click Run, type cmd, and then click OK.
b) At the command prompt, type net start, and then press ENTER to display a list of currently running services. If Task Scheduler is not displayed in the list, type net start "task scheduler", and then press ENTER.
c) At the command prompt, type schtasks /create /tn "MySQL_Backup" /tr c:\apps\mysql-backup.bat /sc Value /st HH:MM:SS /ed MM/DD/YYYY, and then press ENTER. Note that you may have to change the parameters for your situation. For example, you might type schtasks /create /tn "MySQL_Backup" /tr c:\apps\mysql-backup.bat /sc daily /st 08:00:00 /ed 12/31/2014.
This example schedules the MySQL_Backup task to run once a day, every day, at 8:00 A.M. until December 31, 2014. Because it omits the /mo parameter, the default interval of 1 is used, running the command every day.
During task creation you will be asked for your currently logged-in user's password. Once you enter the password, the new task will be created. You can view the newly created task with the schtasks command.