Message-ID: <4A3FA326.8030802@inria.fr>
Date:	Mon, 22 Jun 2009 17:28:38 +0200
From:	Brice Goglin <Brice.Goglin@...ia.fr>
To:	Lee Schermerhorn <Lee.Schermerhorn@...com>
CC:	Stefan Lankes <lankes@...s.rwth-aachen.de>,
	'Andi Kleen' <andi@...stfloor.org>,
	linux-kernel@...r.kernel.org, linux-numa@...r.kernel.org,
	Boris Bierbaum <boris@...s.rwth-aachen.de>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
Subject: Re: [RFC PATCH 0/4]: affinity-on-next-touch

Lee Schermerhorn wrote:
>> I gave this patchset a try and indeed it seems to work fine, thanks a
>> lot. But the migration performance isn't very good. I am seeing about
>> 540MB/s when doing mbind+touch_all_pages on large buffers on a
>> quad-Barcelona machine. move_pages gets 640MB/s there. And my own
>> next-touch implementation was near 800MB/s in the past.
>>     
>
> Interesting.  Do you have any idea where the differences come from?  Are
> you comparing them on the same kernel versions?  I don't know the
> details of your implementation, but one possible area is the check for
> "misplacement".  When migrate-on-fault is enabled, I check all pages
> with page_mapcount() == 0 for misplacement in the [swap page] fault
> path.  That, and other filtering to eliminate unnecessary migrations,
> could cause extra overhead.
>   

(I'll actually talk about this at the Linux Symposium.) I used 2.6.27
initially, with some 2.6.29 patches to fix the throughput of move_pages
for large buffers, so move_pages was getting about 600MB/s there. My own
(hacky) next-touch implementation was getting about 800MB/s. The main
difference from your code is that mine only modifies the current
process's PTEs, without touching those of other processes if the page is
shared. So my code basically only supports private pages; it
duplicates/migrates them on next-touch. I thought it was faster than
move_pages because I didn't support shared-page migration, but I found
out later that move_pages could be further improved up to about 750MB/s
(it will be in 2.6.31).

So now, I'd expect both the next-touch migration and move_pages to have
similar migration throughput, about 750-800MB/s on my quad-Barcelona
machine. Right now, I'm seeing less than that for both, so there might
be a deeper problem. Actually, looking at COW performance when the new
page is allocated on a remote NUMA node, I also see much lower
throughput in 2.6.29+ (about 720MB/s) than in 2.6.27 (about 850MB/s).
Maybe a regression in the low-level page copy routine?
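
The remote-COW case can be reproduced from userspace with something
like this (again only a sketch: node numbers are arbitrary, and it
assumes libnuma for the node binding, built with -lnuma). The parent
faults the buffer in on node 0, then a child running on node 1 dirties
one byte per page, so every write copies one page across the
interconnect:

/* Rough sketch: time COW faults whose source pages live on a
 * remote node. Node numbers 0/1 are illustrative. */
#include <numa.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define BUF_SIZE (256UL << 20)

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	char *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	numa_run_on_node(0);
	memset(buf, 1, BUF_SIZE); /* fault pages in locally on node 0 */

	if (fork() == 0) {
		struct timespec t0, t1;
		unsigned long off;

		numa_run_on_node(1); /* child allocates from node 1 */
		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (off = 0; off < BUF_SIZE; off += psz)
			buf[off] = 2; /* each write COWs one page */
		clock_gettime(CLOCK_MONOTONIC, &t1);

		double s = (t1.tv_sec - t0.tv_sec)
			 + (t1.tv_nsec - t0.tv_nsec) / 1e9;
		printf("COW'd %lu MB at %.0f MB/s\n",
		       BUF_SIZE >> 20, (BUF_SIZE >> 20) / s);
		_exit(0);
	}
	wait(NULL);
	return 0;
}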

Brice

