Date:	Mon, 09 Mar 2009 21:45:11 +0200
From:	Avi Kivity <avi@...hat.com>
To:	Jeff Moyer <jmoyer@...hat.com>
CC:	linux-aio <linux-aio@...ck.org>, zach.brown@...cle.com,
	bcrl@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch] aio: remove aio-max-nr and instead use the memlock rlimit
 to limit the number of pages pinned for the aio completion ring

Jeff Moyer wrote:
>> Is it not possible to get rid of the pinning entirely?  Pinning
>> interferes with page migration which is important for NUMA, among
>> other issues.
>>     
>
> aio_complete is called from interrupt handlers, so can't block faulting
> in a page.  Zach mentions there is a possibility of handing completions
> off to a kernel thread, with all of the performance worries and extra
> bookkeeping that go along with such a scheme (to help frame my concerns,
> I often get lambasted over .5% performance regressions).
>   

Or you could queue the completions somewhere, and only copy them to user 
memory when io_getevents() is called.  I think the plan was once to 
allow events to be consumed opportunistically even without 
io_getevents(), though.
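For illustration only, here is a userspace sketch of that queue-then-copy idea (this is not the actual kernel code; the names `complete_event` and `drain_events` and the fixed-size ring are hypothetical). Completions land in kernel-side memory, which an interrupt handler can always write without faulting; the copy out to user memory, which may fault and block, happens later in process context when the caller asks for events:

```c
#include <stddef.h>

/* Hypothetical event record, loosely modeled on struct io_event. */
struct event {
	unsigned long data;
	long res;
};

#define RING_SIZE 64  /* power of two so index masking wraps cleanly */

/* Kernel-side queue: completions land here instead of in pinned user pages. */
static struct event ring[RING_SIZE];
static unsigned head, tail;  /* head: next slot to fill; tail: next to drain */

/* Called at completion time (in the real kernel, from interrupt context).
 * Touches only kernel memory, so it never needs to fault in a page. */
static int complete_event(unsigned long data, long res)
{
	if (head - tail == RING_SIZE)
		return -1;  /* ring full; real code would have to grow or drop */
	ring[head & (RING_SIZE - 1)] = (struct event){ .data = data, .res = res };
	head++;
	return 0;
}

/* Called from the io_getevents() path in process context, where copying
 * to user memory (here just a caller-supplied buffer) may block safely. */
static size_t drain_events(struct event *ubuf, size_t max)
{
	size_t n = 0;

	while (n < max && tail != head) {
		ubuf[n++] = ring[tail & (RING_SIZE - 1)];
		tail++;
	}
	return n;
}
```

The real version would need locking or a lock-free scheme between the interrupt-time producer and the syscall-time consumer, and copy_to_user() in place of the plain assignment; the sketch only shows how the copy moves out of interrupt context.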


> I'm happy to look into such a scheme, should anyone show me data that
> points to this NUMA issue as an actual performance problem today.  In
> the absence of such data, I simply can't justify the work at the moment.
>   

Right now page migration is a dead duck.  Outside HPC, there is no 
support for triggering it or for getting the scheduler to prefer a 
process's memory node.  Only a minority of hosts are NUMA.

I think that will/should change in the near future.  Nehalem-based 
servers mean that NUMA will be commonplace.  The larger core counts will 
mean that hosts will run several unrelated applications (often through 
virtualization); such partitioning can easily benefit from page migration.

> Thanks for taking a look!
>   

Sorry, I didn't actually take a look at the patches.  I only reacted to 
the description - I am allergic to pinned memory.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

