Message-ID: <20130204124715.GF7523@quack.suse.cz>
Date: Mon, 4 Feb 2013 13:47:15 +0100
From: Jan Kara <jack@...e.cz>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jan Kara <jack@...e.cz>, LKML <linux-kernel@...r.kernel.org>,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 2/6] fs: Take mapping lock in generic read paths
On Thu 31-01-13 15:59:40, Andrew Morton wrote:
> On Thu, 31 Jan 2013 22:49:50 +0100
> Jan Kara <jack@...e.cz> wrote:
>
> > Add mapping lock to struct address_space and grab it in all paths
> > creating pages in page cache to read data into them. That means buffered
> > read, readahead, and page fault code.
>
> Boy, this does look expensive in both speed and space.
I'm not sure I'll be able to do much with the space cost, but hopefully
the CPU cost can be reduced.
> As you pointed out in [0/n], it's 2-3%. As always with pagecache
> stuff, the cost of filling the page generally swamps any inefficiencies
> in preparing that page.
Yes, I measured it with a ramdisk-backed fs exactly to remove the cost
of filling the page from the picture. But there are systems where IO is CPU
bound (e.g. when you have PCIe attached devices) and although the additional
cost of the block layer will further hide the cost of the page cache itself,
I assume the added 2-3% incurred by the page cache will be measurable on
such systems. So that's why I'd like to reduce the CPU cost of range
locking.
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR