Message-ID: <alpine.DEB.2.00.1110180933470.687@router.home>
Date:	Tue, 18 Oct 2011 09:36:14 -0500 (CDT)
From:	Christoph Lameter <cl@...two.org>
To:	Dimitri Sivanich <sivanich@....com>
cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mel@....ul.ie>
Subject: Re: [PATCH] Reduce vm_stat cacheline contention in
 __vm_enough_memory

On Tue, 18 Oct 2011, Dimitri Sivanich wrote:

> After further testing, substantial increases in ZVC delta along with cache alignment
> of the vm_stat array bring the tmpfs writeback throughput numbers to about where
> they are with vm.overcommit_memory==OVERCOMMIT_NEVER.  I still need to determine how
> high the ZVC delta needs to be to achieve this performance, but it is greater than 125.

Sounds like this is the way to go then.
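
The alignment part could look roughly like this (just a sketch of the
idea, not your tested patch; the attribute comes from <linux/cache.h>):

	/* mm/vmstat.c: push the global counter array onto its own
	 * cacheline(s) so it does not false-share with whatever the
	 * linker happens to place next to it.
	 */
	atomic_long_t vm_stat[NR_VM_ZONE_STAT_ITEMS] ____cacheline_aligned_in_smp;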

> Would it make sense to have the ZVC delta be tuneable (via /proc/sys/vm?), keeping the
> same default behavior as what we currently have?

I think so.
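
A knob under /proc/sys/vm could look roughly like the following. The
name "stat_threshold" and the idea of simply overriding the computed
per-cpu threshold are made up for illustration; proc_dointvec_minmax
is the stock handler:

	/* Sketch: hypothetical vm.stat_threshold sysctl.  0 keeps the
	 * automatically computed default; a non-zero value would be
	 * pushed into each pcp->stat_threshold by a
	 * refresh_zone_stat_thresholds()-style pass.
	 */
	static int vmstat_threshold_min;
	static int vmstat_threshold_max = 1000;
	int sysctl_vmstat_threshold;

	static struct ctl_table vmstat_ctl_table[] = {
		{
			.procname	= "stat_threshold",
			.data		= &sysctl_vmstat_threshold,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= &vmstat_threshold_min,
			.extra2		= &vmstat_threshold_max,
		},
		{ }
	};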

> If the thresholds get set higher, it could be that some values that don't normally have
> as big a delta may not get updated frequently enough.  Should we maybe update all values
> every time a threshold is hit, as the patch below was intending?

Mel can probably chime in on the accuracy needed for reclaim etc. We
already have an automatic reduction of the delta when the VM gets into
trouble (the per-cpu thresholds are lowered under memory pressure).
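
The "fold everything when any counter trips" variant would be roughly
the following, modelled on the existing per-cpu diffs in mm/vmstat.c.
The function name is made up, and the caller is assumed to have
preemption off, as __mod_zone_page_state does:

	static void zone_flush_all_deltas(struct zone *zone,
					  struct per_cpu_pageset *pcp)
	{
		int i;

		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
			int delta = pcp->vm_stat_diff[i];

			if (delta) {
				pcp->vm_stat_diff[i] = 0;
				atomic_long_add(delta, &zone->vm_stat[i]);
				atomic_long_add(delta, &vm_stat[i]);
			}
		}
	}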

> Note that having each counter in a separate cacheline does not have much, if any,
> effect.

It may have a good effect if you group the counters according to their
use into different cachelines. Counters that are typically updated
together need to be close to each other. You could also modify my patch
to only update counters in the same cacheline. I think updating all
counters caused the problems with that patch, because we now touch
multiple cachelines and increase the cache footprint of critical vm
functions.
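
Grouping plus a per-group flush could look something like this. The
group size and the helper names are illustrative; the real work is
reordering enum zone_stat_item so that counters which are updated
together land in the same 8-slot (one 64-byte line) block:

	#define VM_STAT_GROUP_SHIFT	3	/* 8 longs ~= one cacheline */

	static inline int vm_stat_group(enum zone_stat_item item)
	{
		return item >> VM_STAT_GROUP_SHIFT;
	}

	static void zone_flush_group_deltas(struct zone *zone,
					    struct per_cpu_pageset *pcp,
					    enum zone_stat_item item)
	{
		int i = vm_stat_group(item) << VM_STAT_GROUP_SHIFT;
		int end = min_t(int, i + (1 << VM_STAT_GROUP_SHIFT),
				NR_VM_ZONE_STAT_ITEMS);

		for (; i < end; i++) {
			int delta = pcp->vm_stat_diff[i];

			if (delta) {
				pcp->vm_stat_diff[i] = 0;
				atomic_long_add(delta, &zone->vm_stat[i]);
				atomic_long_add(delta, &vm_stat[i]);
			}
		}
	}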
