Message-ID: <20130208145945.GA10030@quack.suse.cz>
Date:	Fri, 8 Feb 2013 15:59:45 +0100
From:	Jan Kara <jack@...e.cz>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Jan Kara <jack@...e.cz>, LKML <linux-kernel@...r.kernel.org>,
	linux-fsdevel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 2/6] fs: Take mapping lock in generic read paths

On Mon 04-02-13 13:47:15, Jan Kara wrote:
> On Thu 31-01-13 15:59:40, Andrew Morton wrote:
> > On Thu, 31 Jan 2013 22:49:50 +0100
> > Jan Kara <jack@...e.cz> wrote:
> > 
> > > Add a mapping lock to struct address_space and grab it in all paths
> > > that create pages in the page cache to read data into them. That means
> > > buffered read, readahead, and the page fault code.
> > 
> > Boy, this does look expensive in both speed and space.
>   I'm not sure I'll be able to do much with the space cost but hopefully
> the CPU cost could be reduced.
> 
> > As you pointed out in [0/n], it's 2-3%.  As always with pagecache
> > stuff, the cost of filling the page generally swamps any inefficiencies
> > in preparing that page.
>   Yes, I measured it with a ramdisk-backed fs exactly to remove the cost
> of filling the page from the picture. But there are systems where IO is
> CPU bound (e.g. with PCIe-attached devices), and although the additional
> cost of the block layer will further hide the cost of the page cache
> itself, I expect the added 2-3% incurred by the page cache to be
> measurable on such systems. That's why I'd like to reduce the CPU cost
> of range locking.
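For reference, the locking pattern described in the quoted changelog looks
roughly like the sketch below. It uses the range lock primitives proposed
alongside this series (range_lock_init()/range_lock()/range_unlock()), but
the surrounding code is purely illustrative, not the actual patch:

	/*
	 * Illustrative sketch only.  Every path that instantiates pages in
	 * the page cache for reading (buffered read, readahead, page fault)
	 * brackets the page creation with a lock on the affected range.
	 */
	struct address_space {
		/* ... existing fields ... */
		struct range_lock_tree	mapping_lock;	/* tree of locked ranges */
	};

	static int read_range(struct address_space *mapping,
			      pgoff_t start, pgoff_t end)
	{
		struct range_lock rl;

		range_lock_init(&rl, start, end);
		/* Blocks until no overlapping range is held */
		range_lock(&mapping->mapping_lock, &rl);
		/* ... allocate pages, add_to_page_cache_lru(), submit reads ... */
		range_unlock(&mapping->mapping_lock, &rl);
		return 0;
	}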
  So I played a bit more with the code and was able to reduce the space
cost to a single pointer in struct address_space, with an unmeasurable
impact in the write path. I still see a ~1% regression in the read path,
and I'm not sure why, as the fast path now adds only a test of a single
value. Maybe there's some thinko somewhere. Anyway, I'm optimistic that
at least in its current form the code could be massaged so that the CPU
cost is in the noise.
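Concretely, the reduced form could look something like this (one plausible
shape with hypothetical names, not the actual code): struct address_space
carries only a pointer, the lock tree is set up on demand, and the read
fast path pays a single extra test:

	struct address_space {
		/* ... existing fields ... */
		struct range_lock_tree *mapping_lock;	/* NULL until first needed */
	};

	static inline bool mapping_range_locked(struct address_space *mapping)
	{
		/* The one extra test on the read fast path */
		return ACCESS_ONCE(mapping->mapping_lock) != NULL;
	}

	/* In the read / readahead / fault paths: */
	if (mapping_range_locked(mapping)) {
		/* slow path: take the range lock as in the earlier sketch */
	}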

I write "in the current form" because as Dave Chinner pointed out we need
to lock the whole range used by write() at once to ever have a chance to
drop i_mutex and that will require some non-trivial changes. So I'll be
looking into that now...
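A sketch of that direction, reusing the illustrative API from the first
sketch (do_buffered_write() is a placeholder for the usual copy loop, and
the embedded-tree form of mapping_lock is assumed for brevity):

	static ssize_t range_locked_write(struct file *file,
					  const char __user *buf,
					  size_t len, loff_t pos)
	{
		struct address_space *mapping = file->f_mapping;
		pgoff_t first = pos >> PAGE_CACHE_SHIFT;
		pgoff_t last = (pos + len - 1) >> PAGE_CACHE_SHIFT;
		struct range_lock rl;
		ssize_t ret;

		/*
		 * Lock the whole byte range of the write up front instead of
		 * page by page, so the write path no longer needs i_mutex
		 * for exclusion against concurrent writers.
		 */
		range_lock_init(&rl, first, last);
		range_lock(&mapping->mapping_lock, &rl);
		ret = do_buffered_write(file, buf, len, pos);	/* placeholder */
		range_unlock(&mapping->mapping_lock, &rl);
		return ret;
	}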

								Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR
