Sunday, February 7, 2016

GlusterFS

Using FUSE (Filesystem in Userspace) to hook into the VFS (Virtual File System), GlusterFS creates a clustered network filesystem that runs entirely in userspace, outside the kernel and its privileged extensions. GlusterFS uses existing filesystems such as ext3, ext4, and XFS to store the data on each node. Its popularity comes from being an accessible framework that scales, providing petabytes of data under a single mount point. GlusterFS distributes files across a collection of subvolumes, making one large storage unit out of a host of smaller ones. The subvolumes can live on a single server or be spread across several, and capacity can be increased by adding new servers essentially on the fly. With its replicate functionality, GlusterFS also provides storage redundancy and availability.
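
Because the volume is exposed through FUSE as a normal POSIX mount, client code needs nothing GlusterFS-specific. Here is a minimal sketch, assuming a volume has already been created and mounted at the hypothetical path /mnt/glusterfs; the path and file names are placeholders, not part of GlusterFS itself:

import os

# Hypothetical FUSE mount point of an existing GlusterFS volume;
# adjust to wherever the volume is actually mounted.
MOUNT_POINT = "/mnt/glusterfs"

def write_file(name, data):
    """Write a file through the GlusterFS mount using ordinary POSIX I/O.

    GlusterFS decides which subvolume (and which server) actually stores
    the file; the application only ever sees the single mount point.
    """
    path = os.path.join(MOUNT_POINT, name)
    with open(path, "wb") as f:
        f.write(data)
    return path

def read_file(name):
    """Read the file back; with replication it may be served by any replica."""
    with open(os.path.join(MOUNT_POINT, name), "rb") as f:
        return f.read()

if __name__ == "__main__":
    write_file("report.txt", b"hello from a gluster client\n")
    print(read_file("report.txt"))

The point of the sketch is simply that applications keep using plain file I/O; scaling the volume out to more servers does not change the client code.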

Ceph

Ceph's technical foundation is the Reliable Autonomic Distributed Object Store (RADOS), which provides applications with object, block, and file system storage in a single unified storage cluster. With libraries giving client applications direct access to the RADOS object-based storage system, users can leverage the RADOS Block Device (RBD), the RADOS Gateway, and the Ceph filesystem. The RADOS Gateway provides Amazon S3- and OpenStack Swift-compatible interfaces to the RADOS object store. POSIX compliance is also a key feature: the Ceph filesystem exposes POSIX semantics, so applications written for POSIX-compliant filesystems can use Ceph's storage with little or no change. Additional libraries allow apps written in C, C++, Java, Python, and PHP to access the Ceph object store directly. Advanced features include partial or complete reads and writes, snapshots, object-level key-value mappings, and atomic transactions with operations such as append, truncate, and clone range. Ceph is also compatible with several VM clients.
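
Since the Python bindings mentioned above talk to RADOS directly, here is a minimal sketch using the python-rados library; the pool name "data" and the default config path are assumptions, and the cluster must already be running and reachable:

import rados

# Assumes a reachable Ceph cluster with a standard config and keyring;
# the pool "data" is only an example and must already exist.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("data")              # I/O context bound to one pool
    try:
        ioctx.write_full("greeting", b"hello rados")  # store an object
        print(ioctx.read("greeting"))                 # read it back
        ioctx.set_xattr("greeting", "lang", b"en")    # attach per-object metadata
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

The same objects written this way are visible to the other access paths layered on RADOS, which is what makes the "unified storage cluster" idea concrete.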
