
Mounting Rackspace Cloud Files using cloudfuse into ubuntu 10.10 v2

This article shows how to mount Cloud Files into your Ubuntu 10.10 server as a directory, using the cloudfuse software, so you can access the data in your Cloud Files containers just like any other folders and files inside your linux server. One heads up: this gives you easy access to your Cloud Files data, but in no way means you can run a database or application directly from it; that would be darn slow. So, why would one want it then? Well, there are plenty of uses for having system-level access to your Cloud Files. For instance, if you have scripts which create your mysql or website backups, those scripts can write the backups into Cloud Files automatically, without you needing to copy them yourself. So, let's get on with it..
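For example, once the mount is in place (as set up below), a nightly cron script could write a MySQL dump straight into a container. A minimal sketch, assuming Cloud Files is mounted at /cloudfiles with a container named "backups", and with placeholder database credentials:

#!/bin/bash
# nightly backup: dump a database straight into the cloudfuse mount
# assumes /cloudfiles is mounted and a "backups" container already exists
mysqldump -u backupuser -p'secret' mydatabase | gzip > /cloudfiles/backups/mydatabase-$(date +%F).sql.gz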

Note: The following commands are tried and tested on Ubuntu 10.10, but they should be easily applicable to other versions of Ubuntu or Debian. As long as you install the required packages, you should be able to compile the cloudfuse code and use it.

Installing cloudfuse:
First download the cloudfuse code from cloudfuse-0.1.tar.gz. I downloaded this code from the upstream maintainer and shared it here so that all the steps below work as written, since newer code changes in the repository can cause compile-time errors. So, download the code from the above link, extract the file, and then compile.

ssidhu@ssidhu:~$ tar -xzvf cloudfuse-0.1.tar.gz

Once you have extracted the .tar.gz file, you should have the following files under the cloudfuse-0.1 directory.

root@ubuntu-test:~/cloudfuse-0.1# ls -la
total 280
drwxr-xr-x 3 root root   4096 Feb 21 21:47 .
drwx------ 4 root root   4096 Feb 21 21:47 ..
drwxr-xr-x 8 root root   4096 Feb 21 21:47 .git
-rw-r--r-- 1 root root   1059 Feb 21 21:47 LICENSE
-rw-r--r-- 1 root root   1024 Feb 21 21:47 Makefile.in
-rw-r--r-- 1 root root   2332 Feb 21 21:47 README
-rw-r--r-- 1 root root  12014 Feb 21 21:47 cloudfsapi.c
-rw-r--r-- 1 root root   1043 Feb 21 21:47 cloudfsapi.h
-rw-r--r-- 1 root root  11240 Feb 21 21:47 cloudfuse.c
-rw-r--r-- 1 root root   4335 Feb 21 21:47 config.h.in
-rwxr-xr-x 1 root root 198521 Feb 21 21:47 configure
-rw-r--r-- 1 root root   1324 Feb 21 21:47 configure.in
-rwxr-xr-x 1 root root  13184 Feb 21 21:47 install-sh
root@ubuntu-test:~/cloudfuse-0.1#

Now it's time to compile and install it. You'll need libcurl, libfuse, and libxml2 and their dev packages installed to build it.

Cloudfuse is built and installed like any other autoconf-configured code. Normally,

./configure
make
sudo make install

But first, you need to install the required packages; otherwise the ./configure command will fail and throw errors.

apt-get update
apt-get install gcc
apt-get install libcurl4-openssl-dev
apt-get install libxml2 libxml2-dev
apt-get install libfuse-dev

Now run the following commands in the cloudfuse directory:

root@ubuntu-test:~/cloudfuse-0.1# ./configure
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for a BSD-compatible install... /usr/bin/install -c
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for pkg-config... /usr/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for XML... yes
checking for CURL... yes
checking for FUSE... yes
.........
............
configure: creating ./config.status
config.status: creating Makefile
config.status: creating config.h
root@ubuntu-test:~/cloudfuse-0.1# make
gcc -g -O2 -I/usr/include/libxml2     -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse   -o cloudfuse cloudfsapi.c cloudfuse.c -lxml2   -lcurl   -pthread -lfuse -lrt -ldl
root@ubuntu-test:~/cloudfuse-0.1# make install
/usr/bin/install -c cloudfuse /usr/local/bin/cloudfuse
root@ubuntu-test:~/cloudfuse-0.1#

If everything went fine, you should have cloudfuse installed properly. Confirm this by running the which command; it should show the location of the cloudfuse binary.

root@ubuntu-test:~/cloudfuse-0.1# which cloudfuse
/usr/local/bin/cloudfuse
root@ubuntu-test:~/cloudfuse-0.1#

Mounting cloudfiles:
Let’s now use cloudfuse and mount our cloudfiles.

You’ll have to create a configuration file for cloudfuse in your home directory and put your Rackspace cloudfiles username and API key in it, like below:

$HOME/.cloudfuse
    username=[username]
    api_key=[api key]
    authurl=[auth URL]

Auth URLs:
US cloudfiles account: https://auth.api.rackspacecloud.com/v1.0
UK cloudfiles account: https://lon.auth.api.rackspacecloud.com/v1.0

The following entries are optional; you can also define these values in the .cloudfuse file (a complete example follows below).

     use_snet=[True to use snet for connections]
     cache_timeout=[seconds for directory caching, default 600]
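For example, a complete .cloudfuse for a UK account could be created like this (all values are placeholders). Since the file holds your API key, it is a good idea to make it readable only by you:

cat > $HOME/.cloudfuse <<EOF
username=myuser
api_key=0123456789abcdef0123456789abcdef
authurl=https://lon.auth.api.rackspacecloud.com/v1.0
cache_timeout=600
EOF
chmod 600 $HOME/.cloudfuse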

After creating the above configuration file, you can run the cloudfuse command as follows. The syntax is as simple as:

cloudfuse [mount point]

So, you should be able to mount your cloud files like this:

root@ubuntu-test:/# mkdir cloudfiles
root@ubuntu-test:/# cloudfuse /cloudfiles

If you run the ls -la command inside the /cloudfiles directory, you should see your cloud files containers.

If you are not the root user of the system, your username will need to be part of the “fuse” group. This can probably be accomplished with:

sudo usermod -a -G fuse [username]
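Note that the group change takes effect only on a new login session, so log out and back in after running usermod. You can then verify the membership like this ([username] is a placeholder, as above):

id -nG [username] | grep -w fuse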

If you are unable to see any containers inside the mount point, then probably some of the above steps didn't complete properly. Go back and check that every step finished without errors.

UPDATE: 30/9/2011

Here is some extra info for CentOS on how to mount cloud files with cloudfuse as another user, e.g. Apache.

$ yum install fuse
$ usermod -a -G fuse apache
$ mkdir /mnt/cloudfiles
$ chown apache:apache /mnt/cloudfiles
$ sudo -u apache cloudfuse /mnt/cloudfiles -o username=myuser,api_key=mykey,use_snet=true,authurl="https://lon.auth.api.rackspacecloud.com/v1.0"

Play around with it and adapt it as you like; I think you will find it useful. Courtesy of my friend Anh.

Let me know if there are any errors in these instructions or if you face any difficulty understanding them, and I will update them accordingly. Any comments are highly appreciated.

Note: Please don't bug Rackspace Support for help with this article; it's not supported by them, hence this article. :)

Good luck!

Categories: cloud servers, Linux
  1. mandm
    March 9, 2011 at 12:07 am

    yes this worked perfectly fine for me, i used the authurl for US cloudfiles and now i can mount my cloud files on the server
    Thanks a lot sandeep for helping me out

  2. Flo
    March 11, 2011 at 12:31 pm

    Hi Sandeep, thanks for your previous replies. Sorry have not had time to investigate yet, but saw you added Auth URLs. It’s looking promising. Many thanks, F.

  3. April 7, 2011 at 4:40 pm

    Sandeep, thanks so much. I’ve been trying to map files to my Cloud Server for a month! Just so you know, I received a link to this article from Rackspace support — they like this a lot.

    • April 14, 2011 at 10:55 pm

      Hi Trenton,

      I’m glad that it worked for you, thanks for the positive feedback! :)

      -Sandeep

  4. Hamid
    April 11, 2011 at 4:29 pm

    Hi Sandeep,

    I have mounted the cloud files to my cloud server (Linux) but I cannot copy files and folders from the cloud server to the cloud files.

    I got the following error message:

    Operation not permitted

    Thanks

    Hamid

  5. mandm
    April 14, 2011 at 8:31 pm

    Hi Sandeep, so i built a new server and tried all the steps mentioned in here and i get this error, so i guess this needs to be added to the document??

    $ sudo mount /media/cloudfiles
    mount: can’t find /media/cloudfiles in /etc/fstab or /etc/mtab

    Thanks once again for being proactive and putting up this article

    • April 14, 2011 at 10:55 pm

      Hi Mandm,

      You are running the wrong command to mount the cloud files. Do the following, as mentioned in the article. Cloud files cannot be mounted using the normal mount command; you need to run the “cloudfuse” command as follows:

      mkdir /cloudfiles
      cloudfuse /cloudfiles

      hope it works for you.

      Thanks,
      Sandeep

  6. mandm
    April 15, 2011 at 4:07 am

    Hi Sandeep,
    Thanks for pointing that out, that works like a charm. So maybe that was the reason for the segfault issues i had, where the servers would drop the mount point, and also the lag between the two servers seeing files uploaded from the other one…..
    i’ll keep you posted on what i see

    • mandm
      April 15, 2011 at 4:30 am

      actually sandeep let me take that back, i might have mistyped the issue earlier, i use this command to load my cloud files, since i wanted to read and write to the files

      sudo cloudfuse /media/cloudfiles -o allow_other,nonempty

      so it still drops the mount point periodically and there is a lag when one server sees the files uploaded from other server by about 5 mins, any ideas?

  7. mandm
    April 15, 2011 at 6:56 pm

    hi Sandeep,
    So even on the new server it gives me this error
    kernel: [87106.412129] cloudfuse[19158]: segfault at 1b1 ip 00007f5d78492827 sp 00007f5d73e10558 error 4 in libxml2.so.2.7.6[7f5d783e8000+146000]

    So what version of libxml2 do you have?
    mine is

    $ sudo dpkg --list libxml2
    ||/ Name       Version                Description
    +++-=====================-======================-==================================
    ii  libxml2    2.7.6.dfsg-1ubuntu1.1  GNOME XML library

  8. Jody Durkacs
    April 20, 2011 at 5:47 pm

    This works great for me, but I can not get it to allow anyone but root to write to this location. Is there a way to mount it read-write for a non-root user? The mount command claims it is mounted rw, but it won’t let other users cd to the directory or copy files there.

  9. mandm
    April 22, 2011 at 7:51 pm

    hi Jody, you should use this command

    sudo cloudfuse /media/cloudfiles -o allow_other,nonempty

    this will help you read and write files as users other than root; even though the file ownership and group ownership show as root root, it is still accessible via the apache user..etc

  10. Jody Durkacs
    April 27, 2011 at 7:19 pm

    That worked, thank you. Now I just need to get automount with /etc/fstab working. It won’t auto-mount at boot. If anyone can help, I am using the following line in that file:
    cloudfuse /home/user/cloud fuse defaults,allow_other,gid=1002,umask=007,username=cloudusername,api_key=XXXXXXXXX 0 0

    Also, if I have that in fstab and I try mount -a I get this output:
    mount: wrong fs type, bad option, bad superblock on cloudfuse,
    missing codepage or helper program, or other error
    (for several filesystems (e.g. nfs, cifs) you might
    need a /sbin/mount.<type> helper program)
    In some cases useful info is found in syslog - try
    dmesg | tail or so

    I think a clue is that after a reboot, if I manually try to mount it, I initially get this message:
    fuse: device not found, try ‘modprobe fuse’ first

    and if I enter ‘modprobe fuse’ and then try to manually mount, it works.

    Can anyone tell me what I need to do to make this work with fstab?

  11. mandm
    April 27, 2011 at 10:00 pm

    Yeah…same issues that i faced…:)

    so i don’t use the /etc/fstab, for mounting the cloudfiles on server reboot try this

    add these entries to the /etc/crontab

    @reboot root modprobe fuse
    @reboot root cloudfuse /media/cloudfiles -o allow_other,nonempty

    and try restarting the server, you should see the cloudfiles mounted

    • April 27, 2011 at 10:43 pm

      thanks mandm, this is really great and makes me so happy; all of us helping each other is what open source is all about.. :) I'll go ahead and add these instructions to the post as well. Once again, thanks for helping out Jody.

    • bipyjamas
      June 13, 2011 at 1:47 pm

      I noticed an oddity with cron under CentOS not setting the $HOME variable for the executing user, so the cloudfiles account never gets mounted as it can't find the $HOME/.cloudfuse file. To get around this we can explicitly specify the $HOME variable to use at runtime,

      @reboot root HOME=/root cloudfuse /media/cloudfiles -o allow_other,nonempty

      Alternatively, it is also possible to specify all mount options in the crontab.

  12. mandm
    April 28, 2011 at 5:09 am

    Not a problem Sandeep, i am just trying to share as i learn..:)

  13. Jody Durkacs
    April 28, 2011 at 2:31 pm

    That worked perfectly! Thanks mandm!

  14. mandm
    May 12, 2011 at 8:35 pm

    Hi Sandeep,
    is there a way to get around the 10000 files limit per container ?

    I have hit the limit, and the weird part is that any file or folder created by my application gets deleted if it starts with a letter greater than ‘f’, since i have 70K files which start with ‘f’

  15. Hal Burgiss
    May 20, 2011 at 5:13 pm

    I have twice tried to use this to sync files as a backup mechanism, and twice it (or something related) has corrupted the local filesystem.

    # ll /mnt
    ls: cannot access /mnt/cloud: Transport endpoint is not connected
    total 8.0K
    drwxr-xr-x 4 root root 4.0K 2011-05-18 10:23 ./
    drwxr-xr-x 24 root root 4.0K 2011-05-16 13:54 ../
    d????????? ? ? ? ? ? cloud/
    drwxr-xr-x 2 root root 0 1969-12-31 19:00 rackspace/

    The rackspace mount point does not look too screwy, but any attempt to access it results in a hung terminal. The cloud one is obviously fubar.

    • mandm
      June 23, 2011 at 2:45 pm

      Yes i agree it is a complete fubar in terms of reliability and scalability. You can only use this in a non-production environment where users have no direct interaction.
      I have a cronjob running every minute to get around this error; once the job sees that the transport endpoint is not connected, it umounts and remounts the drive

      But yes, there is no point in using it for scalable and highly visible environments

      (Using it since Jan 2011)
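      A minimal sketch of such a watchdog script (paths and mount options are just an example, matching the mount command earlier in this thread):

      #!/bin/bash
      # remount the cloudfuse mount if the transport endpoint has dropped
      MOUNT=/media/cloudfiles
      if ! ls "$MOUNT" > /dev/null 2>&1; then
        fusermount -u "$MOUNT"
        cloudfuse "$MOUNT" -o allow_other,nonempty
      fi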

      • Hal Burgiss
        June 23, 2011 at 4:06 pm

        I am having better results just mounting on demand. I am using this for backups and this is done nightly. So I unmount as soon as the backup is completed. That is helping. I have also identified some other “issues” that have caused reliability problems. Copying 2G+ files (at least from a 32bit system) will cause a hang. The same files move easily between local Linux 32bit systems, and to Cloud Servers. And in fact with other tools to Cloud files. I have also found 2 files with very strange character encoding in the filenames, that would cause a dropped connection. There may be some rsync options too that don’t play nice.

  16. Nandeesh
    June 21, 2011 at 7:07 am

    Hi Sandeep,

    Good Work,

    I am new to Linux. trying hard to work with this.. can you guide me few steps to learn Linux as early as possible.

  17. Brad
    July 10, 2011 at 5:43 pm

    Hi – when I try to mount, I get the error:

    root@host [/]# cloudfuse /cloudfiles
    Unable to authenticate.

    I’ve verified the API and username with Rackspace and all are correct. Any idea what could be wrong?

    • July 10, 2011 at 5:48 pm

      Brad, make sure you are using the correct API AUTH URL; US and UK cloud files accounts have different URLs. Also, are you using the code which I have shared in the blog, or did you check it out directly from github? If you checked out directly from github, something might have changed in the code. The source code which I have shared with the link in the article works fine with all the instructions, so it might be the AUTH URL; also make sure your linux has the ca-bundle certs installed, and test the authentication with cURL first as well.

      good luck!
      Sandeep

  18. Shane T
    July 13, 2011 at 6:51 pm

    Hi Hal… What are you using to get around that “Transport endpoint…” error to unmount the drive? I have the same error and wanted to take your recommendations to mount/unmount on demand but I don’t see any unmount options within cloudfuse?

    -st

  19. Will Sinclair
    July 19, 2011 at 4:30 pm

    Sandeep,

    I am also having the

    “Unable to authenticate.
    Bad username or password” error

    I have followed your instructions meticulously, and triple checked everything is as you say it should be.

      How do I check if my server has the ca-bundle?

    • Will Sinclair
      July 19, 2011 at 5:12 pm

      I worked out the issue, in cloudfuse.c you are hardcoding the apiurl option and ignoring the one pulled from ~/.cloudfuse … I am auth’ing against the london server.

      Cheers,
      W

  20. Will Sinclair
    July 19, 2011 at 5:29 pm

    Well i can mount fine, but when trying to access the mountpoint it just hangs, any ideas?

    • Jody Durkacs
      July 19, 2011 at 6:55 pm

      Something similar happens to me. Occasionally my cloudfuse mount will just hang if I try to access it from a shell. If I umount it, I get an error stating it’s busy. This has happened twice in the last couple months. A reboot of the system solved it.

    • July 19, 2011 at 7:02 pm

      Hi Will,

      check with curl; make sure you can authenticate to the AUTH_URL with https, and your curl shouldn't throw any error. If the curl works fine, that means the ca-bundle is working fine.

      http://www.productionscale.com/home/2009/8/2/using-curl-to-access-the-rackspace-cloud-api.html#axzz1SZyo778r
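      For example, a quick sanity check with placeholder credentials; a “204 No Content” response carrying the X-Storage-Url and X-Auth-Token headers means authentication is fine:

      curl -s -D - -o /dev/null -H "X-Auth-User: myuser" -H "X-Auth-Key: 0123456789abcdef" https://lon.auth.api.rackspacecloud.com/v1.0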

      and regarding the AUTH_URL, did you check out the new code from github or use the one which I'm sharing in this post? The code which I'm sharing works fine with all the instructions on this page, and it does check the .cloudfuse configuration and use the AUTH_URL defined in the .cloudfuse file.

      Thanks,
      Sandeep

  21. Will Sinclair
    July 19, 2011 at 7:16 pm

    Yeh, i checked with cURL and it auth'd fine. I managed to get it to work after recompiling with the london api server hardcoded instead of the default US one (didn't have the time to debug and fix it properly, sorry)

    Well, “works” is not entirely right; “cloudfuse /mountpoint” succeeds, but doing any type of IO on “/mountpoint” causes the system to hang.

    Have tried rebooting a couple of times but no luck. Any ideas?

    uname -a:
    Linux 123.456.789.000 2.6.35.4-rscloud #8 SMP Mon Sep 20 15:54:33 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

    • July 19, 2011 at 7:39 pm

      good that it authenticated; although I and many others didn't have to compile it with the auth_url hardcoded, I guess whatever works ;)

      regarding the hanging, what steps are you doing? are you trying to copy/download a large file? because it just hangs until the file download or upload finishes. don't know exactly what could be causing it; try doing an strace on the process and see which files it's trying to open, etc., or maybe it's a problem with cloud files. the other day I was doing a large file download using the swift tool and it would just hang randomly; ended up doing it with cyberduck…

      also, you can try downloading the fresh code from https://github.com/redbo/cloudfuse and compiling that; there might be some improvements. I also need to download the new code, update this whole article and share a new .tar.gz, but couldn't find the time lately

      Hope it works for you, happy debugging ;)

      Cheers!
      Sandeep

  22. December 1, 2011 at 4:27 am

    Thank you so much for posting this. In conjunction with other research I used a variation of your guide to compile the setup I am using today. http://noconformity.co/2011/11/30/rackspace-cloud-ubuntu-server-using-cloud-files/ Added a few things like auto-mount and a direct pull from github.

    Happy Holidays.

  23. Marcel
    February 22, 2012 at 5:28 pm

    Anybody have any ideas on why cloudfused containers only report back 10,000 items in a bucket? I am hoping to use cloudfuse to store lots (up to half a million) of website images statically, but will need to do things like list the images. So far everything else works well using advice from http://derekarnold.tumblr.com/post/526900310/using-cloudfuse-to-mount-the-rackspace-cloud-files , and of course Sandeep.

    Surely cloudfuse uses the API and bypasses the usual restrictions on reported files inside a container? (is there an option when mounting to open the reported file quantity limits?)

    Many thanks

  24. April 4, 2012 at 5:33 am

    I am having a little trouble with mounting my CloudFiles container in my CentOS 6 cloud server. Installing cloudfuse completed without any errors, mounting the storage works without any errors too. If I do an “ls” on the mount point I get the error

    ls: reading directory /mnt/cloudfiles/: Link has been severed

    Subsequent “ls” commands just freeze the console session. Something is amiss, any ideas?

    • April 22, 2012 at 8:36 pm

      Hi ausip,

      Did you manage to solve the problem, or are you still having issues with it? To be frank, I don't have a straight answer about why it's behaving that way. It could be that the fuse driver is not loaded, or it's failing to connect to your cloud files account, in which case just check that you are using the correct username and API key. If still no luck, then try re-building the whole thing using a different revision of the code from github. Let me know how it goes.


      Sandeep

  25. Geo
    May 8, 2012 at 2:58 pm

    We are using a cloud server, and a cloud files container is mounted to our server as /mnt/cloudfiles using cloudfuse. Now we encrypted this directory /mnt/cloudfiles as follows:

    encfs /mnt/cloudfiles /mnt/data -o allow_other -o nonempty

    Now, while copying a text file from our local machine to /mnt/data, we are able to copy the text file. But while copying this file to the server home directory, the file gets corrupted.

    So please let me know whether it is possible to copy data from the visible mount point to another location without file corruption, so that the file is in unencrypted form. Please also update us on whether the issue is with encfs.

    We are using encfs version 1.7.4 and also we tried in encfs version 1.0.

  26. RC
    May 8, 2012 at 7:25 pm

    I am a newbie to using EncFS and I am running into corrupted files on the EncFS mounted Cloudfiles on Cloud server.

    Here are the basic details of the system I am using
    What version of the product are you using? On what operating system?
    EncFS 1.7.4 with openSSL 1.0.0e 6 Sep 2011 and openssl version:0.9.8m on Ubuntu 10.04 LTS (Lucid)
    (Initially we tried with EncFS 1.5 and openssl version 0.9.8k and decided to upgrade to latest version)

    How are the cloudfiles mounted?
    We are using cloudserver and a cloud file is mounted to our server as /mnt/cloudfiles using cloudfuse.
    Encrypted this directory /mnt/cloudfiles as follows:
    encfs /mnt/cloudfiles /mnt/data -o allow_other -o nonempty

    A simple test that we see failing and resulting in corrupted files is:
    Upload a *.csv file to the /mnt/data — using CyberDuck and FTP from a laptop
    vi x.csv — there are binary characters injected into the file

    Any suggestion on what we missed and how to stop the files from corrupting?

  27. 007g3m1n1
    May 14, 2012 at 3:28 am

    Hi Sandeep and ausip,

    I am getting the same “Link has been severed” error on ls when the setting use_snet=true is in place inside the .cloudfuse config file.

    when that setting is removed, all works well…

    Gemini

  29. Vipul Agarwal
    May 15, 2012 at 1:40 pm

    Hi guys,

    Just a quick update.
    I tried removing use_snet=true from the configuration, but no joy.

    The console freezes as soon as I try to access the mounted directory. I’m running CentOS 6.

    Don’t forget to unmount the remote cloud files directory.
    Re-login to the server, kill the cloudfuse process and run,

    fusermount -u /mnt/cloudfuse

    I’ll try to install the older version of cURL to check if that helps. I really need to use cloud files instead of Amazon S3 to keep offsite backups.

    Regards,
    Vipul

  30. Rob Norman
    June 15, 2012 at 1:45 pm

    Hi Guys,

    I have been having the exact same problem on both redhat and centos.
    Could it possibly be that the use_snet=true option is somehow not configured for the london cloud and somehow trying to access the wrong snet setting?

  31. Rob Norman
    June 15, 2012 at 1:47 pm

    I should have also mentioned that RHEL 5 was working fine until I added the use_snet option. Removing this has put things back to normal. RHEL 6 and centos cloud servers I can't get to work at all.

    • June 16, 2012 at 6:48 am

      Hi Rob,

      I'll try to test it out as well by creating an RHEL5 box in the UK later during the day. Did you test a ping to the snet storage URL? If many people are having this issue, something might have changed which cloudfuse is not taking care of.

      For the meanwhile, you can try the following trick as well and see if it works:
      get the IP of the snet storage URL by pinging it, set up a local /etc/hosts entry so that the actual storage URL resolves to the snet IP, and see if cloudfuse works. You can do a tcpdump to see if it's connecting using the snet IP.
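      For example (the IP below is only a stand-in for whatever the snet hostname resolves to on your server; the hostname is the lon3 one from later in this thread):

      # /etc/hosts entry pointing the public storage hostname at the snet IP
      10.190.254.1   storage101.lon3.clouddrive.com

      and then confirm with something like: tcpdump -n host 10.190.254.1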


      Regards
      Sandeep

  32. Rob Norman
    June 16, 2012 at 10:32 am

    Good idea Sandeep.

    I spoke with Rackspace support yesterday and we ran a few tests using curl rather than cloudfuse, and confirmed that there were no firewall/routing issues, so I can only think it's a use_snet configuration setting.

    I will report back once I have tried setting the dns in /etc/hosts

  33. June 16, 2012 at 4:57 pm

    Hi Rob,

    I just tried the whole thing by doing a fresh build of cloudfuse on RedHat 6. I hit the “Link has been severed” issue https://github.com/redbo/cloudfuse/issues/38 which seems to be affecting CentOS 6 as well.

    so, as a workaround for testing, I used the -f flag while mounting, which provided me the verbose output as well.

    I tested the “use_snet=True” parameter in the .cloudfuse file.

    # cloudfuse -f /mount_directory

    on another terminal I did a tcpdump on eth1, and all the traffic was going through the service network.

    So, the “Link has been severed” issue is a separate one which happens regardless of whether you use snet or not. But with the workaround in place, I tested that the snet parameter is working.

    I used http://c16281.r81.cf2.rackcdn.com/cloudfuse-0.1.tar.gz to build. I'm also inclined towards trying out the latest code from github.com/redbo as well, as my tar has become quite old.

    I’m sure if you try with RedHat 5 or CentOS 5, snet and everything should be working, I’m going to test that out next.


    Sandeep

  34. June 16, 2012 at 5:24 pm

    Okay.. I just built a CentOS 5.6 image and did the whole thing. Everything worked fine, even with the use_snet flag! I could confirm all the traffic was going through the service interface with tcpdump. And moreover, on centos 5.6 I didn't even get the “Link has been severed” error. Everything just worked on the first go.

    Let me know how your testing goes.

    Make sure you can ping both of these URLs successfully.
    storage101.lon3.clouddrive.com
    snet-storage101.lon3.clouddrive.com


    Sandeep

    • Rob Norman
      June 16, 2012 at 10:46 pm

      Hi Sandeep. Thanks for getting back to me. I will try rebuilding from the latest source. I used the package from the rackspace cloud page and it looks quite old, so maybe this is the problem. Will let you know how I get on.

      Thanks,
      Rob

  35. Rob Norman
    June 18, 2012 at 3:02 pm

    Hi Sandeep. I cloned a copy of the code from github and recompiled. Still get the same issue. Unmounted the drive and killed off cloudfuse. Reran cloudfuse and then ran ls -l /cloudfiles and the terminal hung. :-( I did check i could ping those urls you pasted above and they were pinging correctly.

    Not sure what else to try at this point.

  36. June 19, 2012 at 9:27 am

    Hi Rob, sorry to hear that you're still having issues. If it makes any difference, I used http://c16281.r81.cf2.rackcdn.com/cloudfuse-0.1.tar.gz to build, and also on Centos 5.6. Centos 6 and RedHat 6 have the freeze issue. Which OS are you using?

    Try building from the above tar.gz on a centos5.x and see if it works for you.


    Sandeep

  37. Rob Norman
    June 22, 2012 at 11:54 am

    Hi Sandeep, we have managed to get it working!

    What we have done is start cloudfuse with -f, which works, and then detach the process from the terminal. It seems if you start it in the background it doesn't work.

    cloudfuse -f /cloudfiles/ &
    disown -h

    Hope that helps someone else.

    • paivaric
      July 11, 2012 at 1:43 pm

      Thanks a lot… finally it works for me!

    • September 5, 2012 at 9:39 pm

      Seriously. Thank you. This saved me a load of time.

  38. June 29, 2012 at 12:09 am

    No matter what i try i keep getting permission issues when trying to connect:

    cloudfuse /mnt/cloudbackup

    gives me

    fuse: failed to open /dev/fuse: Permission denied

    /dev/fuse is root:root
    my mount folder /mnt/cloudbackup is also root:root

    i have tried changing the permissions so that the group is fuse and adding root to the group but had no luck.

    if i do modprobe fuse i get

    FATAL: Could not load /lib/modules/2.6.18-028stab091.2/modules.dep: No such file or directory

    i don’t know if that is related or not.

    This last step is killing me; can't figure it out. Running Centos on a whm/cpanel box that is virtualized. Any help would be greatly appreciated.

    • July 2, 2012 at 11:10 pm

      gswahhab, could you please check if you have the fuse module package installed or not?

  39. Kirel
    July 1, 2012 at 4:40 pm

    Hello Sandeep, do you have any idea why I am seeing this error? By the way, this is Ubuntu 12.04. The error is when I run make. I installed all the packages required and followed the guide perfectly, and still can't get it to work.

    checking for strcasecmp… yes
    checking for strchr… yes
    checking for strdup… yes
    checking for strncasecmp… yes
    checking for strrchr… yes
    checking for strstr… yes
    configure: creating ./config.status
    config.status: creating Makefile
    config.status: creating config.h
    config.status: config.h is unchanged
    root@ubuntu:/var/cloudfuse-0.1# make //Right here the problem starts.
    gcc -g -O2 -I/usr/include/libxml2 -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse -o cloudfuse cloudfsapi.c cloudfuse.c -lxml2 -lcurl -pthread -lfuse -lrt -ldl
    In file included from cloudfsapi.c:14:0:
    cloudfsapi.h:5:24: fatal error: curl/types.h: No such file or directory
    compilation terminated.
    In file included from cloudfuse.c:17:0:
    cloudfsapi.h:5:24: fatal error: curl/types.h: No such file or directory
    compilation terminated.
    make: *** [cloudfuse] Error 1

    • July 2, 2012 at 11:13 pm

      Kirel, it looks like you are missing some dev library files for compilation. Could you do a package search for curl dev files and install those packages?
      e.g. “aptitude search curl | grep dev”, then look for any package which looks like curl-dev or curllib and install that. This might solve the problem of the curl/types.h file. .h files are header files, part of the dev packages, and needed for compilation. –Sandeep

      • AR
        August 31, 2012 at 3:15 pm

        no luck… I tried most of these that made sense to try:

        root@p12:~# aptitude search curl | grep dev
        v libcurl-dev -
        v libcurl-dev:i386 -
        p libcurl-ocaml-dev - OCaml libcurl bindings (Development packag
        p libcurl-ocaml-dev:i386 - OCaml libcurl bindings (Development packag
        v libcurl-ocaml-dev-g55y9 -
        v libcurl-ocaml-dev-owsj4:i386 -
        v libcurl-ssl-dev -
        v libcurl-ssl-dev:i386 -
        v libcurl3-dev -
        v libcurl3-dev:i386 -
        v libcurl3-gnutls-dev -
        v libcurl3-gnutls-dev:i386 -
        v libcurl3-nss-dev -
        v libcurl3-nss-dev:i386 -
        v libcurl3-openssl-dev -
        v libcurl3-openssl-dev:i386 -
        v libcurl4-dev -
        v libcurl4-dev:i386 -
        p libcurl4-gnutls-dev - Development files and documentation for li
        p libcurl4-gnutls-dev:i386 - Development files and documentation for li
        p libcurl4-nss-dev - Development files and documentation for li
        p libcurl4-nss-dev:i386 - Development files and documentation for li
        i libcurl4-openssl-dev - Development files and documentation for li
        p libcurl4-openssl-dev:i386 - Development files and documentation for li
        p libflickcurl-dev - C library for accessing the Flickr API - d
        p libflickcurl-dev:i386 - C library for accessing the Flickr API - d
        p libghc-curl-dev - GHC libraries for the libcurl Haskell bind
        p libghc-curl-dev:i386 - GHC libraries for the libcurl Haskell bind
        v libghc-curl-dev-1.3.7-134ce:i38 -
        v libghc-curl-dev-1.3.7-26a38 -
        p libghc-download-curl-dev - High-level file download based on URLs
        p libghc-download-curl-dev:i386 - High-level file download based on URLs
        v libghc-download-curl-dev-0.1.3. -
        v libghc-download-curl-dev-0.1.3. -
        p libghc-hxt-curl-dev - LibCurl interface for HXT
        p libghc-hxt-curl-dev:i386 - LibCurl interface for HXT
        v libghc-hxt-curl-dev-9.1.1-66e48 -
        v libghc-hxt-curl-dev-9.1.1-66e48 -
        p libghc6-curl-dev - transitional dummy package
        p liblua5.1-curl-dev - libcURL development files for the Lua lang
        p liblua5.1-curl-dev:i386 - libcURL development files for the Lua lang

        getting same error

      • September 1, 2012 at 11:59 pm

        Just remove #include <curl/types.h> from cloudfsapi.h. It's deprecated these days.
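        One way to do that from inside the cloudfuse source directory is a sed one-liner that deletes the offending include line:

        sed -i '/curl\/types.h/d' cloudfsapi.h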

      • October 26, 2012 at 8:44 pm

        I am having the same problem with curl/types.h. I tried christienrioux’s suggestion to modify cloudfsapi.h, but I can’t find cloudfsapi.h
        jeffsilverman@xyzzy:~/stormfs$ find . -name “cloud*” -print
        jeffsilverman@xyzzy:~/stormfs$

        Any suggestions? Could somebody just E-mail me the file?

      • October 26, 2012 at 9:11 pm

        I modified the file src/curl.c to remove the #include <curl/types.h>. Do not confuse curl/types.h with sys/types.h; they are different files.

  40. Brad
    July 11, 2012 at 6:09 am

    Hey Sandeepsi,

    Got this working on Centos 5.6 no problem, and almost everything works well, except that when I FTP into the server the /mnt/cloudfiles/ folder is invisible to the FTP client. Any idea why? (BTW: sFTP works no problem, but I need FTP to work with another app.)

    • July 11, 2012 at 9:08 am

      Hi Brad, frankly I don't know why the FTP server can't see the files. I have approved the comment on the page; maybe somebody else can provide more info if they encountered this problem. My initial thought is that it must have something to do with how the directories are accessed. For example, when you are logged into the system, the fuse module is loaded in the kernel, and when it sees that you are requesting a listing of files (say, the ls command) in its mounted directory, it handles that request and shows the file names. I'm thinking the FTP server might not have access to the fuse module and fails to get the listing of files. You can try running your FTP server as a different user and see if it works. It must have something to do with how the FTP server requests/accesses directories on disk.
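      One more hedged guess, based on the allow_other discussion earlier in these comments: by default a fuse mount is accessible only to the user who mounted it, so remounting with allow_other might make the directory visible to the FTP daemon's user:

      cloudfuse /mnt/cloudfiles -o allow_other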


      Sandeep

  41. July 17, 2012 at 7:55 pm

    Hey Kirel, I had the same issue. I got the latest code off Github and it works without that error. I think those are obsolete headers that are no longer packaged by Ubuntu or Debian.

  42. Chris
    August 22, 2012 at 10:43 pm

    Great article. Followed it along with a specific CentOS one and it seemed to install okay, but i bet i've then messed something up, so two newbie questions:

    Do I put a literal “$HOME/.cloudfuse” in my /root/.cloudfuse file? Because I have done so.

    Secondly, how do I unmount a fused …. “fusedrive”? What do we call it? I have several containers I want mounted.

    Thanks!

  43. September 19, 2012 at 6:42 pm

    Hello Sandeep,
    Thank you for your tool. I am getting an error in RHEL6:
    Link has been severed
    Somebody compiled a binary that works here

    http://www.transdimensia.ravenhurst.com/2012/08/cloudfuse-on-cenos6.html

    Something to do with libcurl? Any way to write a fix?

    • September 19, 2012 at 7:03 pm

      Hi dylb0t, thanks for trying it out. I'm aware of the RHEL6 issue; if you look through the previous comments, you will see I have tried that as well, but it was failing due to the new libcurl versions. So, that guy basically statically linked a different version while compiling. Basically, you can do the same thing.

      Besides, I don't own the cloudfuse code and am not sure what needs to be done to make it RHEL6 compatible; I'm also a bit short on time these days to delve into it. Maybe it's better to open an issue on the github page of cloudfuse and see if the developer can fix it.


      Sandeep

  44. Brad
    July 19, 2013 at 4:26 am

    How can I change the TMPDIR to another dir other than /tmp? Files bigger than /tmp fail because of lack of space for cache.

  45. Michael
    August 6, 2013 at 2:25 pm

    Has anyone come up with a way to use cloudfuse with two different data centers? It looks right now like I can only connect to the default DC, which means I can only see half my files.

    This is the same issue that exists with CyberDuck. It makes both great products totally useless to me.

    Thanx!

  46. zz
    March 26, 2014 at 6:39 pm

    Same problem #59 (“cannot find types.h”). Tried the posted suggestion (“remove #include <curl/types.h> from cloudfsapi.h”), and it does not work. Any solution?

  47. May 22, 2014 at 11:35 am

    Hey all, for everyone who got the error (curl/types.h: No such file or directory):
    you can clone using:
    git clone git://github.com/redbo/cloudfuse
    and most probably your problem will be solved. :)

