Date:	Fri, 12 Jul 2013 09:31:42 -0700
From:	Dave Hansen <dave@...1.net>
To:	Joonsoo Kim <iamjoonsoo.kim@....com>
CC:	Dave Hansen <dave.hansen@...el.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>,
	David Rientjes <rientjes@...gle.com>,
	Glauber Costa <glommer@...il.com>,
	Johannes Weiner <hannes@...xchg.org>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Rik van Riel <riel@...hat.com>,
	Hugh Dickins <hughd@...gle.com>,
	Minchan Kim <minchan@...nel.org>,
	Jiang Liu <jiang.liu@...wei.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 1/5] mm, page_alloc: support multiple pages allocation

On 07/10/2013 11:12 PM, Joonsoo Kim wrote:
> On Wed, Jul 10, 2013 at 10:38:20PM -0700, Dave Hansen wrote:
>> You're probably right for small numbers of pages.  But if we're talking
>> about things that are more than, say, 100 pages (isn't the pcp batch
>> size clamped to 128 4k pages?), you surely don't want to be doing
>> buffered_rmqueue().
> 
> Yes, you are right.
> Initially, I thought I could use this for readahead. On my machine,
> readahead reads at most 32 pages in advance on a fault, and the batch
> size of the per-cpu pages list is close to or larger than 32 pages on
> today's machines, so I didn't consider requests for more than 32 pages.
> But to cope with requests for more pages, rmqueue_bulk() is the right
> way. How about using rmqueue_bulk() conditionally?

How about you test it both ways and see what is faster?
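
For concreteness, the conditional dispatch might look something like the
sketch below.  This is illustrative only: the cutoff value and both
helper functions are made-up stand-ins, not the actual
buffered_rmqueue()/rmqueue_bulk() interfaces in mm/page_alloc.c.

#include <linux/gfp.h>
#include <linux/list.h>

/* Roughly the pcp batch size discussed above (assumed cutoff). */
#define PCP_CUTOFF	32UL

/*
 * Sketch: small requests stay on the per-cpu lists (the
 * buffered_rmqueue() path); larger ones go straight to the buddy lists
 * via a bulk path (the rmqueue_bulk() path).  Both helpers are
 * hypothetical simplifications; each returns the number of pages
 * actually placed on @list.
 */
static unsigned long alloc_pages_multiple(gfp_t gfp, unsigned long count,
					  struct list_head *list)
{
	if (count <= PCP_CUTOFF)
		return alloc_via_pcp(gfp, count, list);
	return alloc_via_bulk(gfp, count, list);
}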

> Hmm, rmqueue_bulk() doesn't stop until all requested pages are allocated.
> If we request too many pages (1024 pages or more), interrupt latency can
> be a problem.

OK, so only call it with a page count that you believe keeps interrupt
latency acceptable.  If you want 200 pages but can only tolerate
interrupts being off for 100 pages' worth of work, just do it in two
batches.
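
In code, that capped batching might look something like the sketch
below (untested; alloc_pages_batch() is a stand-in for whatever bulk
interface the series ends up providing, and the cap is just the number
from the example above).

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/list.h>

/* Largest chunk we are willing to hold IRQs off for (assumed value). */
#define MAX_IRQ_OFF_BATCH	100UL

/*
 * Sketch: allocate @count pages onto @list in capped batches so that
 * interrupts are never disabled for more than MAX_IRQ_OFF_BATCH pages'
 * worth of work at a time.  alloc_pages_batch() is hypothetical and is
 * assumed to return the number of pages it actually allocated.
 */
static unsigned long alloc_pages_capped(gfp_t gfp, unsigned long count,
					struct list_head *list)
{
	unsigned long done = 0;

	while (done < count) {
		unsigned long batch = min(count - done, MAX_IRQ_OFF_BATCH);

		/* IRQs are only disabled inside this one call. */
		if (alloc_pages_batch(gfp, batch, list) != batch)
			break;	/* partial allocation; caller copes */
		done += batch;
	}
	return done;
}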

The point is that you want to avoid messing with the buffering by the
percpu structures.  They're just overhead in your case.

