Message-ID: <Pine.LNX.4.64.1101121530360.6611@cobra.newdream.net>
Date: Wed, 12 Jan 2011 16:05:48 -0800 (PST)
From: Sage Weil <sage@...dream.net>
To: torvalds@...ux-foundation.org
cc: linux-kernel@...r.kernel.org, ceph-devel@...r.kernel.org
Subject: [GIT PULL] Ceph changes for 2.6.38-rc1

Hi Linus,

Please pull the following Ceph updates for 2.6.38 from

  git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git for-linus

There are a few error path fixes (libceph and rbd), some ceph code
cleanup, improved support for (server-side) directory hashing, and some
patches from Tejun switching to the new workqueue API and removing
unnecessary hijinks now that the workqueues can handle that work
themselves.
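
For anyone who hasn't followed the workqueue rework, the "new workqueue
API" here is alloc_workqueue(), which lets a queue state its actual
requirements (memory-reclaim dependency, reentrancy) instead of working
around them.  Purely as an illustration of that style of conversion --
this is not the actual ceph/libceph code, the names are made up, and the
flags are the 2.6.38-era ones -- it looks roughly like this:

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/workqueue.h>

/* Illustrative sketch only; the queue name below is invented. */
static struct workqueue_struct *example_wq;

static int __init example_init(void)
{
	/*
	 * The old style would have been create_workqueue("example_wq"),
	 * which under the concurrency-managed workqueue code behaves as
	 * if the queue might be needed for memory reclaim.
	 *
	 * New style: say exactly what is needed.  WQ_NON_REENTRANT
	 * guarantees a given work item never runs on two CPUs at once
	 * (the property the messenger used to enforce by hand), no
	 * WQ_MEM_RECLAIM because this queue isn't used on the reclaim
	 * path, and a max_active of 0 means "use the default".
	 */
	example_wq = alloc_workqueue("example_wq", WQ_NON_REENTRANT, 0);
	if (!example_wq)
		return -ENOMEM;
	return 0;
}

static void __exit example_exit(void)
{
	destroy_workqueue(example_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

Roughly speaking, the two workqueue patches go at it from both ends: the
fs/ceph queues drop the implicit memory-reclaim guarantee they never
needed, and the messenger queue gets its non-reentrancy guarantee from
the workqueue code instead of enforcing it by hand.
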
Thanks!
sage

Jesper Juhl (1):
      ceph: Always free allocated memory in osdmap_decode()

Sage Weil (4):
      ceph: add dir_layout to inode
      ceph: implement DIRLAYOUTHASH feature to get dir layout from MDS
      ceph: drop redundant r_mds field
      ceph: associate requests with opening sessions

Tejun Heo (2):
      ceph: fsc->*_wq's aren't used in memory reclaim path
      net/ceph: make ceph_msgr_wq non-reentrant

Tracey Dent (1):
      ceph: Makefile: Remove unnessary code

Yehuda Sadeh (1):
      rbd: fix cleanup when trying to mount inexistent image

 drivers/block/rbd.c            |   19 ++++++++++---
 fs/ceph/Makefile               |   23 +---------------
 fs/ceph/debugfs.c              |    9 ++++--
 fs/ceph/dir.c                  |   20 ++++++++++++++
 fs/ceph/export.c               |    2 +-
 fs/ceph/inode.c                |    4 +++
 fs/ceph/mds_client.c           |   56 +++++++++++++++++++++++++--------------
 fs/ceph/mds_client.h           |    2 +-
 fs/ceph/super.c                |   13 ++++++---
 fs/ceph/super.h                |    2 +
 include/linux/ceph/ceph_fs.h   |   16 +++++++++--
 include/linux/ceph/messenger.h |    5 ---
 net/ceph/ceph_hash.c           |    3 ++
 net/ceph/messenger.c           |   46 +-------------------------------
 net/ceph/osdmap.c              |    4 ++-
 15 files changed, 116 insertions(+), 108 deletions(-)