Message-ID: <523103BA.7010202@sr71.net>
Date:	Wed, 11 Sep 2013 16:58:50 -0700
From:	Dave Hansen <dave@...1.net>
To:	Cody P Schafer <cody@...ux.vnet.ibm.com>
CC:	linux-mm@...ck.org, linux-kernel@...r.kernel.org, cl@...ux.com
Subject: Re: [RFC][PATCH] mm: percpu pages: up batch size to fix arithmetic?? error

On 09/11/2013 04:08 PM, Cody P Schafer wrote:
> So we have this variable called "batch", and the code is trying to store
> the _average_ number of pcp pages we want into it (not the batchsize),
> and then we divide our "average" goal by 4 to get a batchsize. All the
> comments refer to the size of the pcp pagesets, not to the pcp pageset
> batchsize.

That's a good point, I guess.  I was wondering the same thing.
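
For reference, the arithmetic being described boils down to something like
the stand-alone toy below.  This is a sketch from memory for illustration,
not the actual mm/page_alloc.c code; PAGE_SIZE, the 512KB cap, and the
zone_pages argument are stand-ins for the real zone fields:

/*
 * Rough, stand-alone paraphrase of the batch calculation under
 * discussion -- illustration only, not lifted from a kernel tree.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

static unsigned long rounddown_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

static int batchsize(unsigned long zone_pages)
{
	int batch;

	/* aim for ~1/1024th of the zone, capped at 512KB worth of pages */
	batch = zone_pages / 1024;
	if (batch * PAGE_SIZE > 512 * 1024)
		batch = (512 * 1024) / PAGE_SIZE;

	/*
	 * This is the "/4" Cody points at: everything above talks about
	 * a pool size, but what actually gets returned is used as the
	 * refill batch.
	 */
	batch /= 4;
	if (batch < 1)
		batch = 1;

	/* clamp to a 2^n - 1 value */
	return rounddown_pow_of_two(batch + batch / 2) - 1;
}

int main(void)
{
	/* a 4GB zone of 4k pages: this sketch computes the familiar 31 */
	printf("batch = %d\n", batchsize(1UL << 20));
	return 0;
}

Which is exactly the confusion: the comments describe a pool-size target,
while the returned value is treated as the batch.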

> Looking further, in current code we don't refill the pcp pagesets unless
> they are completely empty (->low was removed a while ago), and then we
> only add ->batch pages.
> 
> Has anyone looked at what type of average pcp sizing the current code
> results in?

It tends to be within a batch of either ->high (when we are freeing lots
of pages) or ->low (when alloc'ing lots).  I don't see a whole lot of
bouncing around in the middle.  For instance, there aren't many gcc or
make instances during a kernel compile whose allocations fit into the
~0.75MB ->high limit.
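
(For anyone following along without a tree handy, the fields in play are
roughly these -- again a from-memory sketch, not a copy of the real
struct definition:)

struct per_cpu_pages {
	int count;	/* pages currently sitting on the per-cpu lists */
	int high;	/* start draining back to the buddy above this */
	int batch;	/* pages moved per refill or drain */
	/* ->low is gone; we only refill once count hits zero */
};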

Just a dumb little thing like this during a kernel compile on my 4-cpu
laptop:

 while true; do grep 'count:' /proc/zoneinfo | tail -4; done > pcp-counts.1.txt
 awk '{print $2}' pcp-counts.1.txt | sort -n | uniq -c | sort -n

says that at least ~1/2 of the time we have <=10 pages.  That makes
sense: the compile spends essentially all of its runtime doing
(relatively slow) allocations and then frees all of its memory very
quickly at exit, so the window in which the pools are full is much
smaller than the window in which they are nearly empty.

I'm struggling to think of a case where the small batch sizes make sense
these days.  Maybe if you're running a lot of little programs like ls or
awk?