Ceph is an open-source distributed petascale storage stack. We offer top-notch expertise, on-site and remote consultancy services, and training around Ceph.
What is Ceph about?
Ceph is an integrated storage stack offering object storage, block storage, and a POSIX-compliant distributed filesystem. It offers massive scalability, configurable synchronous replication, and n-way redundancy.
RADOS, the Reliable Autonomic Distributed Object Store, is a massively distributed, replicated, rack-aware object store. It uses a deterministic placement algorithm, CRUSH (Controlled Replication Under Scalable Hashing): there is no central instance to ask on every access; instead, every client can compute where objects are. That means the store scales out seamlessly, and can expand and contract at the admin's whim.
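To make "deterministic placement" concrete, here is a toy sketch of the idea using rendezvous (highest-random-weight) hashing; real CRUSH is considerably richer (it walks a hierarchy of racks and hosts, honours weights, and follows placement rules), so treat the function and OSD names below as illustrative assumptions only:

```python
import hashlib

def place(obj_name, osds, replicas=3):
    """Deterministically pick `replicas` distinct OSDs for an object.

    Toy stand-in for CRUSH: rank every OSD by a hash of (object, osd)
    and take the top N. Any client running the same function over the
    same OSD list computes the same answer -- no central lookup table.
    """
    def score(osd):
        digest = hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(8)]
primary_set = place("my-object", osds)
# Recomputing yields the identical placement, illustrating that the
# cluster map plus the algorithm fully determine where data lives.
assert place("my-object", osds) == primary_set
```

A pleasant side effect of this family of algorithms is stability: removing an OSD that an object was not placed on leaves that object's placement untouched, which is why the store can grow and shrink without mass reshuffling.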
What can I use it for?
Lots of things.
radosgw provides a RESTful API for dynamic cloud storage, including S3 and Swift frontends that let it act as object storage for AWS/Eucalyptus and OpenStack clouds, respectively.
Qemu-RBD is a storage driver for the Qemu/KVM hypervisor (fully integrated with libvirt) that allows the hypervisor to access replicated block devices that are also striped across the object store – with a configurable number of replicas, of course.
RBD is a Linux block device that, again, is striped and replicated over the object store.
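As a rough sketch of what "striped over the object store" means for RBD: the image is cut into fixed-size chunks, each stored as its own RADOS object, so a byte offset in the block device maps to an (object, offset-within-object) pair. The object size and naming scheme below are illustrative assumptions, not RBD's actual on-disk format:

```python
def rbd_object_for(image_name, byte_offset, object_size=4 * 1024 * 1024):
    """Map a byte offset in a block image to (object name, offset in it).

    With a 4 MiB object size, bytes [0, 4 MiB) land in object 0,
    [4 MiB, 8 MiB) in object 1, and so on. Each object is then
    replicated independently by RADOS.
    """
    index = byte_offset // object_size
    return f"{image_name}.{index:016x}", byte_offset % object_size

# 9 MiB falls 1 MiB into the third 4 MiB object.
name, off = rbd_object_for("vm-disk", 9 * 1024 * 1024)
```

Because every chunk is an independent object, reads and writes to different regions of the image hit different OSDs in parallel, which is where the striping pays off.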
librados (C) and libradospp (C++) are APIs to access the object store programmatically, and come with a number of scripting language bindings. Qemu-RBD builds on librados.
CephFS (the filesystem) exposes POSIX filesystem semantics built on top of RADOS, where all POSIX-related metadata is again stored in the object store. This is a remarkably thin client layer at just 17,000 LOC (compared to GFS2 at 26,000 and OCFS2 at 68,000).
How is it licensed?
All of Ceph is 100% open source; everything is licensed under the LGPL 2.1.