Date:	Fri, 08 Jan 2010 17:53:30 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Minchan Kim <minchan.kim@...il.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>, cl@...ux-foundation.org,
	"hugh.dickins" <hugh.dickins@...cali.co.uk>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()

On Tue, 2010-01-05 at 20:20 -0800, Linus Torvalds wrote:
> 
> On Wed, 6 Jan 2010, KAMEZAWA Hiroyuki wrote:
> > > 
> > > Of course, your other load with MADV_DONTNEED seems to be horrible, and 
> > > has some nasty spinlock issues, but that looks like a separate deal (I 
> > > assume that load is just very hard on the pgtable lock).
> > 
> > It's zone->lock, I guess. My test program avoids pgtable lock problem.
> 
> Yeah, I should have looked more at your callchain. That's nasty. Much 
> worse than the per-mm lock. I thought the page buffering would avoid the 
> zone lock becoming a huge problem, but clearly not in this case.

Right, so I ran some numbers on a two-socket machine as well:

                               pf/min (page faults per minute)

-tip                          56398626
-tip + xadd                  174753190
-tip + speculative           189274319
-tip + xadd + speculative    200174641

[ variance is around 0.5% for this workload; I ran most of these numbers
with --repeat 5 ]

At both the xadd and speculative points the workload is dominated by
zone->lock; xadd+speculative removes some of that contention, and
removing the various RSS counters could yield another few percent
according to the profiles, but beyond that we're pretty much there.

One way around those RSS counters is to track them per task; a quick
grep shows it's only the oom-killer and /proc that use them. Roughly
what I have in mind is sketched below.
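
Something like this (sketch only -- none of these names exist in the
tree, task_struct would need to grow the rss_stat member, the mm side is
assumed to grow a matching atomic counter array, and the real thing has
to deal with exit, CLONE_VM and bounding the drift):

enum { MM_FILEPAGES, MM_ANONPAGES, NR_MM_COUNTERS };

struct task_rss_stat {
	long count[NR_MM_COUNTERS];
};

/* fault path: only touches current, no mm-wide atomics or shared cacheline */
static inline void task_rss_add(struct task_struct *tsk, int member, long delta)
{
	tsk->rss_stat.count[member] += delta;
}

/* the few readers (oom-killer, /proc) fold the per-task deltas into the mm */
static void task_rss_fold(struct task_struct *tsk, struct mm_struct *mm)
{
	int i;

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		if (tsk->rss_stat.count[i]) {
			atomic_long_add(tsk->rss_stat.count[i],
					&mm->rss_stat.count[i]);
			tsk->rss_stat.count[i] = 0;
		}
	}
}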

A quick hack removing them gets us: 203158058

So from a throughput point of view the whole speculative fault thing
might not be interesting until the rest of the VM gets a lift to go
along with it.

From a blocking-on-mmap_sem point of view, I think Linus is right in
that we should first consider things like dropping mmap_sem around IO
and page zeroing, and generally look at reducing hold times and such;
the rough shape of that is sketched below.
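
Hand-waving pseudo-code, just to illustrate the shape of it
(page_needs_io() and start_readahead_io() are made-up placeholders, and
the hard part is of course the re-validation after re-acquiring):

	struct file *file;

retry:
	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
	if (!vma || vma->vm_start > address) {
		up_read(&mm->mmap_sem);
		return VM_FAULT_SIGBUS;
	}

	if (page_needs_io(vma, address)) {
		file = vma->vm_file;
		get_file(file);			/* pin the file, not the vma */
		up_read(&mm->mmap_sem);		/* don't hold it across the IO */
		start_readahead_io(file, address);
		fput(file);
		goto retry;			/* vma may be gone, re-lookup */
	}

	/* no IO needed: handle the fault with the lock held, as we do today */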

So while I think it's quite feasible to do these speculative faults, it
appears we're not quite ready for them.

Maybe I can get -rt to carry it for a while; there we have to reduce
mmap_sem to a mutex, which hurts a lot.


