Message-ID: <4D109047.8050300@panasas.com>
Date: Tue, 21 Dec 2010 13:32:23 +0200
From: Boaz Harrosh <bharrosh@...asas.com>
To: Christoph Hellwig <hch@...radead.org>
CC: xfs@....sgi.com, linux-kernel@...r.kernel.org
Subject: Re: XFS status update for November 2010
On 12/20/2010 08:00 PM, Christoph Hellwig wrote:
> From looking at the kernel git commits, November looked like a pretty
> slow month, with just two handfuls of fixes going into the release
> candidates for Linux 2.6.37, and none at all going into the development tree.
> But in this case git statistics didn't tell the whole story - there
> was a lot of activity on patches for the next merge window on the list.
> The focus in November was still on metadata scalability, with various
> patchsets that improve parallel creates and unlinks again, and also
> improve 8-way dbench throughput by 30%. In addition to that there
> were patches to improve preallocation for NFS servers, to simplify
> the writeback code, and to replace the XFS-internal percpu counters
> for free space with the generic kernel percpu counters, which just
> needed a small improvement.
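
For reference, here is a rough sketch of the generic percpu_counter API
from <linux/percpu_counter.h> that the free-space accounting moves to.
This is illustrative use only, not code from the XFS patches; the
example_* wrappers are made up, and the init call uses the 2.6.x-era
signature (later kernels also take a gfp_t argument).

#include <linux/percpu_counter.h>

static struct percpu_counter free_blocks;

static int example_setup(s64 initial_free)
{
	/* 2.6.x-era signature; later kernels add a gfp_t argument */
	return percpu_counter_init(&free_blocks, initial_free);
}

static void example_alloc(s64 nblocks)
{
	/*
	 * Hot path: a cheap per-cpu update; the shared count is only
	 * touched once the local delta exceeds the batch size.
	 */
	percpu_counter_add(&free_blocks, -nblocks);
}

static s64 example_statfs(void)
{
	/*
	 * Exact but slower sum across all CPUs; use
	 * percpu_counter_read_positive() for a cheap approximation.
	 */
	return percpu_counter_sum(&free_blocks);
}
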
>
> On the user space side we saw the release of xfsprogs 3.1.4, which
> contains various accumulated bug fixes and Debian packaging updates.
> The xfsdump tree saw a large update to speed up restore by using
> mmap for an internal database and to remove the limitation of ~214
> million directory entries per dump file. The xfstests test suite
> saw three new testcases and various fixes, including support for the
> hfsplus filesystem.
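
Purely as an illustration of the mmap technique mentioned above (not
xfsrestore's actual code, and with a made-up record layout): mapping
the on-disk table lets every lookup or update become a plain memory
access instead of an lseek()/read()/write() round trip.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* hypothetical fixed-size record, standing in for the restore database */
struct rec {
	unsigned long long ino;
	unsigned long long parent;
};

int main(int argc, char **argv)
{
	struct stat st;
	struct rec *tbl;
	size_t nrecs;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <dbfile>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDWR);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(argv[1]);
		return 1;
	}

	/* map the whole file; updates go straight to the page cache */
	tbl = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
		   MAP_SHARED, fd, 0);
	if (tbl == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	nrecs = st.st_size / sizeof(*tbl);
	if (nrecs > 0)
		printf("first ino: %llu\n", tbl[0].ino); /* no syscall per lookup */

	munmap(tbl, st.st_size);
	close(fd);
	return 0;
}
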
Hi Christoph, happy holidays.

I love these reports you do, thank you.

I have one small request: could you please post them to linux-fsdevel
as well? linux-kernel@...r.kernel.org is so crowded that I keep
missing them.

Thanks
Boaz