Message-Id: <200709170713.31945.nickpiggin@yahoo.com.au>
Date:	Mon, 17 Sep 2007 07:13:30 +1000
From:	Nick Piggin <nickpiggin@...oo.com.au>
To:	David Chinner <dgc@....com>
Cc:	Mel Gorman <mel@...net.ie>, Christoph Lameter <clameter@....com>,
	andrea@...e.de, torvalds@...ux-foundation.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	Christoph Hellwig <hch@....de>,
	William Lee Irwin III <wli@...omorphy.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Badari Pulavarty <pbadari@...il.com>,
	Maxim Levitsky <maximlevitsky@...il.com>,
	Fengguang Wu <fengguang.wu@...il.com>,
	swin wang <wangswin@...il.com>, totty.lu@...il.com,
	hugh@...itas.com, joern@...ybastard.org
Subject: Re: [00/41] Large Blocksize Support V7 (adds memmap support)

On Monday 17 September 2007 14:07, David Chinner wrote:
> On Fri, Sep 14, 2007 at 06:48:55AM +1000, Nick Piggin wrote:

> > OK, the vunmap batching code wipes your TLB flushing and IPIs off
> > the table. Diffstat below, but the TLB portions are here (besides
> > which, _everything_ is probably lower due to fewer TLB misses, now
> > that we do far less TLB flushing):
> >
> >       -170   -99.4% sn2_send_IPI
> >       -343  -100.0% sn_send_IPI_phys
> >     -17911   -99.9% smp_call_function
> >
> > Total runtime went down by 30% on a 64-way system (from 248 seconds
> > to 172 seconds to run parallel finds over different huge directories).
>
> Good start, Nick ;)

I haven't had the chance to test against a 16K directory block size to find
the "optimal" performance, but it is something I will do (I'm sure it will
still be a _lot_ faster than 172 seconds :)).
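
To make the batching concrete, the idea is roughly the following. This is
only an illustrative sketch, not the actual patch: lazy_purge_list,
lazy_purge_lock, __free_vm_area() and the purge_list member are made-up
names, and the page table teardown is elided. The point is just that a
plain vunmap() has to flush_tlb_kernel_range() every time, which means an
IPI broadcast to all CPUs per unmap, while deferring the unmaps lets a
single flush cover the whole batch:

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
#include <asm/tlbflush.h>

/* hypothetical batch state; the real code uses different names */
static LIST_HEAD(lazy_purge_list);
static DEFINE_SPINLOCK(lazy_purge_lock);
static unsigned long lazy_start = ULONG_MAX;
static unsigned long lazy_end;

/*
 * Queue the area instead of flushing immediately.  Page tables are
 * still cleared as usual; only the TLB flush and the vm_struct
 * freeing are deferred.  (Assumes a purge_list member added to
 * struct vm_struct for the example.)
 */
static void vunmap_lazy(struct vm_struct *area)
{
	spin_lock(&lazy_purge_lock);
	list_add_tail(&area->purge_list, &lazy_purge_list);
	lazy_start = min(lazy_start, (unsigned long)area->addr);
	lazy_end = max(lazy_end, (unsigned long)area->addr + area->size);
	spin_unlock(&lazy_purge_lock);
	/* note: no flush_tlb_kernel_range() here, hence no IPIs */
}

/* Run when the amount of lazily-freed address space gets too large. */
static void purge_lazy_batch(void)
{
	struct vm_struct *area, *next;

	spin_lock(&lazy_purge_lock);
	/* one flush -- one round of IPIs -- covers the whole batch */
	flush_tlb_kernel_range(lazy_start, lazy_end);
	list_for_each_entry_safe(area, next, &lazy_purge_list, purge_list) {
		list_del(&area->purge_list);
		__free_vm_area(area);	/* stand-in for the real teardown */
	}
	lazy_start = ULONG_MAX;
	lazy_end = 0;
	spin_unlock(&lazy_purge_lock);
}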


> >      23012  54790.5% _read_lock
> >       9427   329.0% __get_vm_area_node
> >       5792     0.0% __find_vm_area
> >       1590  53000.0% __vunmap
>
> ....
>
> _read_lock? I thought vmap() and vunmap() only took the vmlist_lock in
> write mode.....

Yeah, it is a slight change... the lazy vunmap only has to take it for read.
In practice, I'm not sure that it helps a great deal because everything else
still takes the lock for write. But that explains why it's popped up in the
profile.
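
In other words, the lazy path only has to find the area and mark it, so
it can get away with the read side of the lock. Something like this
(again just a sketch: VM_LAZY_FREE_BIT is a made-up flag, and
__find_vm_area() is the internal lookup that shows up in the profile
above, declared here for the example):

#include <linux/bitops.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>

#define VM_LAZY_FREE_BIT	31	/* hypothetical flag bit in area->flags */

extern rwlock_t vmlist_lock;		/* the real lock in mm/vmalloc.c */
struct vm_struct *__find_vm_area(void *addr);

static void mark_lazy_free(void *addr)
{
	struct vm_struct *area;

	read_lock(&vmlist_lock);	/* readers don't exclude each other */
	area = __find_vm_area(addr);
	if (area)
		/* set_bit() is atomic, so this is safe under the read lock */
		set_bit(VM_LAZY_FREE_BIT, &area->flags);
	read_unlock(&vmlist_lock);
}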


> > Next I have some patches to scale the vmap locks and data
> > structures better, but they're not quite ready yet. This looks like it
> > should result in a further speedup of several times when combined
> > with the TLB flushing reductions here...
>
> Sounds promising - when you have patches that work, send them my
> way and I'll run some tests on them.

Still away from home (for the next 2 weeks), so I'll be going a bit slow :P
I'm thinking about various scalable locking schemes and I'll definitely
ping you when I've made a bit of progress.

Thanks,
Nick
