Message-ID: <Pine.LNX.4.64.0804291419280.15314@cobra.newdream.net>
Date: Wed, 7 May 2008 15:32:57 -0700 (PDT)
From: Sage Weil <sage@...dream.net>
To: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
ceph-devel@...ts.sf.net
Subject: [ANNOUNCE] Ceph distributed file system v0.2
Hello everyone,
Ceph is a distributed file system designed for performance, reliability,
and scalability. Basic features include:
* POSIX semantics
* Seamless scaling from 1 to many thousands of nodes
* No single point of failure
* N-way replication of data across storage nodes
* Fast recovery from node failures
* Automatic rebalancing of data on node addition/removal
* Easy deployment: most FS components are userspace daemons
* Linux kernel client, FUSE-based client, and user library
Notable in this release is a reasonably stable Linux kernel client. It
can extract and compile a kernel, and passes all tests in the fstest POSIX
regression test suite posted a few weeks back. It has stabilized to the
point where it could use some broader testing and code review.
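(A minimal sketch, not taken from the release: the kernel client registers a
"ceph" filesystem type, so mounting ultimately comes down to a mount(2) call.
The monitor address and the "addr:/" device syntax below are assumptions for
illustration only; see the wiki at http://ceph.newdream.net for the exact
mount syntax this release expects.)

  #include <stdio.h>
  #include <sys/mount.h>

  int main(void)
  {
          /* source names a monitor and the path within the fs to mount;
           * both values here are placeholders, not real cluster settings */
          if (mount("1.2.3.4:/", "/mnt/ceph", "ceph", 0, NULL) != 0) {
                  perror("mount ceph");
                  return 1;
          }
          printf("mounted ceph at /mnt/ceph\n");
          return 0;
  }
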
More info at
http://ceph.newdream.net
Source code at
git://ceph.newdream.net/ceph.git
http://ceph.newdream.net/git
Since v0.1:
* fully functional (and reasonably stable) kernel client
* NFS re-export of a ceph client mount
* client metadata leases to keep client cache coherent
* crushtool for managing storage cluster topology
* improved support for storage cluster expansion
* some new tools for mkfs and management
* lots and lots of bug fixes...
Planned for v0.3:
* xattrs
* hardening distributed failure recovery
* large directory support (in client)
* recursive mtime and file size accounting
Some key things on the (medium- to long-term) roadmap:
* locks
* quotas
* btrfs for local object storage on storage nodes
* directory-granularity snapshots
* content-addressable storage
* strong security
sage