Message-ID: <20200627031304.GC25039@casper.infradead.org>
Date:   Sat, 27 Jun 2020 04:13:04 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Tim Chen <tim.c.chen@...ux.intel.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Vladimir Davydov <vdavydov@...tuozzo.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...e.cz>,
        Dave Hansen <dave.hansen@...el.com>,
        Ying Huang <ying.huang@...el.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [Patch] mm: Increase pagevec size on large system

On Fri, Jun 26, 2020 at 02:23:03PM -0700, Tim Chen wrote:
> Enlarge the pagevec size to 31 to reduce LRU lock contention for
> large systems.
> 
> The LRU lock contention is reduced from 8.9% of total CPU cycles
> to 2.2% of CPU cycles.  And the pmbench throughput increases
> from 88.8 Mpages/sec to 95.1 Mpages/sec.

The downside here is that pagevecs are often stored on the stack (eg
truncate_inode_pages_range()) as well as being used for the LRU list.
On a 64-bit system, this increases the stack usage of each such pagevec
from 128 to 256 bytes.
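
For reference, the current layout (paraphrased from
include/linux/pagevec.h; exact details may differ) and the arithmetic,
assuming 64-bit pointers:

struct page;				/* opaque here */

#define PAGEVEC_SIZE	15		/* the patch raises this to 31 */

struct pagevec {
	unsigned char nr;
	bool percpu_pvec_drained;
	struct page *pages[PAGEVEC_SIZE];
};

/*
 * 1 byte nr + 1 byte bool + 6 bytes padding = 8 bytes of header,
 * plus 15 * 8 = 120 bytes of pointers: 128 bytes on the stack today.
 * With 31 entries that becomes 8 + 31 * 8 = 256 bytes.
 */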

I wonder if we could do something where we transform the ones on the
stack to DECLARE_STACK_PAGEVEC(pvec), and similarly use
DECLARE_LRU_PAGEVEC for the ones used for the LRUs.  There's plenty of
space in the header to add an unsigned char sz, delete PAGEVEC_SIZE and
make it a variable-length struct.
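
Roughly what that might look like (entirely untested; only the two
macro names and the sz field come from the paragraph above, everything
else is illustrative):

/*
 * Sketch only: nothing below exists in the tree today.
 */
struct pagevec {
	unsigned char nr;
	unsigned char sz;		/* capacity, replaces PAGEVEC_SIZE */
	bool percpu_pvec_drained;
	struct page *pages[0];		/* storage follows the header */
};

/* On-stack users keep a small capacity... */
#define DECLARE_STACK_PAGEVEC(name)					\
	struct {							\
		struct pagevec pv;					\
		struct page *slots[15];					\
	} __##name = { .pv = { .sz = 15 } };				\
	struct pagevec *name = &__##name.pv

/*
 * ... and DECLARE_LRU_PAGEVEC would do the same with 31 slots for the
 * per-cpu LRU pagevecs, with pagevec_space()/pagevec_add() checking
 * pvec->sz instead of PAGEVEC_SIZE.
 */

Callers would then use pvec directly rather than &pvec, since the
macro hands back a pointer to the embedded pagevec.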

Or maybe our stacks are now big enough that we just don't care.
What do you think?
