Ceph

Ceph is an open-source distributed petascale storage stack. We offer top-notch expertise, on-site and remote consultancy services, and training around Ceph.

What is Ceph about?

Ceph is an integrated storage stack offering object storage, block storage, and a POSIX-compliant distributed filesystem. It offers massive scalability, configurable synchronous replication, and n-way redundancy.

RADOS, the Reliable Autonomic Distributed Object Store, is a massively distributed, replicating, rack-aware object store. It uses a deterministic placement algorithm, CRUSH (Controlled Replication Under Scalable Hashing). There is never a central instance to consult on every access; instead, any client can compute for itself where objects live. That means the store scales out seamlessly, and can expand and contract at the admin's whim.
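The key idea — clients computing placement deterministically instead of asking a central directory — can be illustrated with a toy sketch. This is emphatically *not* the real CRUSH algorithm (which walks a weighted, rack-aware hierarchy); it only shows how a shared hash function lets every client independently arrive at the same object-to-OSD mapping:

```python
import hashlib

def place_object(name, num_pgs, osds, replicas=3):
    """Toy stand-in for CRUSH-style placement (illustrative only).

    Every client hashes the object name into a placement group (PG),
    then derives the ordered replica set from the PG id alone, so no
    central lookup table is ever consulted.
    """
    # Hash the object name into a placement group.
    pg = int(hashlib.md5(name.encode()).hexdigest(), 16) % num_pgs
    # Derive a deterministic pseudo-random choice of OSDs from the PG id.
    chosen, candidates, seed = [], list(osds), pg
    for _ in range(replicas):
        seed = int(hashlib.md5(str(seed).encode()).hexdigest(), 16)
        chosen.append(candidates.pop(seed % len(candidates)))
    return pg, chosen
```

Because the mapping is a pure function of the object name and the cluster description, two independent clients always agree on where an object belongs — which is what makes the design free of a central metadata bottleneck.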

What can I use it for?

Lots of things.

  • radosgw provides a RESTful API for dynamic cloud storage. It includes S3 and Swift frontends, so it can act as object storage for AWS/Eucalyptus and OpenStack clouds, respectively.

  • Qemu-RBD is a storage driver for the Qemu/KVM hypervisor (fully integrated with libvirt) that gives the hypervisor access to replicated block devices striped across the object store, with a configurable number of replicas, of course.

  • RBD is a Linux block device that, again, is striped and replicated over the object store.

  • librados (C) and libradospp (C++) are APIs to access the object store programmatically, and come with a number of scripting language bindings. Qemu-RBD builds on librados.

  • CephFS (the filesystem) exposes POSIX filesystem semantics built on top of RADOS, where all POSIX-related metadata is again stored in the object store. This is a remarkably thin client layer at just 17,000 LOC (compare to GFS2 at 26,000 and OCFS2 at 68,000).
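The striping that RBD and Qemu-RBD perform is conceptually simple: the virtual disk is cut into fixed-size RADOS objects (4 MiB by default), and any byte range of block I/O maps to sub-ranges of those objects. The sketch below shows that address arithmetic; the `rb.image.` object-name prefix is illustrative, not the exact naming scheme a given Ceph release uses:

```python
def map_block_io(offset, length, order=22):
    """Map a byte range of a virtual block device onto the fixed-size
    RADOS objects it is striped across (object size = 2**order bytes,
    i.e. 4 MiB for the default order 22).

    Returns a list of (object_name, offset_within_object, byte_count)
    tuples. The object naming here is illustrative only.
    """
    obj_size = 1 << order
    chunks = []
    while length > 0:
        idx = offset // obj_size           # which object this byte lands in
        within = offset % obj_size         # where inside that object
        n = min(length, obj_size - within) # stop at the object boundary
        chunks.append(("rb.image.%012x" % idx, within, n))
        offset += n
        length -= n
    return chunks
```

A write that straddles an object boundary simply becomes two object writes — and since each object is independently placed and replicated by RADOS, the disk image ends up spread (and replicated) across the whole cluster.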

How is it licensed?

All of Ceph is 100% open source; everything is licensed under the LGPL 2.1.


Removing buckets in radosgw (and their contents)

More recommendations for Ceph and OpenStack

Importing an existing Ceph RBD image into Glance

The Dos and Don'ts for Ceph for OpenStack

CephFS and LXC: Container High Availability and Scalability, Redefined

Five Days of Ceph at PSA Peugeot Citroën

HX212 is live! Our hottest Ceph course just got better.

REIFF boosts agility and innovation with OpenStack

University of Innsbruck creates scalable and agile infrastructure with Ceph and OpenStack

Hosting a web site in radosgw

A Python one-liner for pretty-printing radosgw utilization

Understanding radosgw benchmarks

BIT.group selects hastexo for Ceph training

Ceph Tech Talk: Placement Groups

Spreading knowledge: OpenStack and Ceph in New Zealand and Germany

Have Data, Want Scale, Indefinitely: Exploring Ceph

Ceph Performance Demystified: Benchmarks, Tools, and the Metrics that Matter

Catch Martin Loschwitz and Syed Armani talking about Ceph this week!

OpenStack & Ceph: A perfect match?

Fun with extended attributes in Ceph Dumpling

Unrecoverable unfound objects in Ceph 0.67 and earlier

OpenStack Italia User Group Meetup, Milan, June 20

Ceph: object storage, block storage, file system, replication, massive scalability and then some!

Ceph: The Storage Stack for OpenStack

Enter the cuttlefish!

Inktank & hastexo announce partnership on Ceph

On our cooperation with Inktank

On the ball in the cloud: the hastexo Cloud BootCamp for OpenStack comes to Munich!

Advance info for tomorrow's Ceph tutorial at linux.conf.au

Solid-state drives and Ceph OSD journals

GUUG FFG, CLT, OSDC ... Conferences coming up early 2013

Hands-On With Ceph

Talking Ceph and GlusterFS at LinuxCon Europe

Migrating virtual machines from block-based storage to RADOS/Ceph

One year in!

EPAM Systems selects hastexo for Ceph training

Speaking and BoFing at CloudOpen in San Diego!

OpenStack high availability and Ceph at OSCON!

Configuring radosgw to behave like Amazon S3

An exciting day for the Ceph community

GlusterFS and Ceph: scalable storage, no ifs or buts.

Finding out which OSDs currently store a specific RADOS object

Ceph: tickling my geek genes

hastexo is coming to CeBIT 2012!