Message-ID: <Pine.LNX.4.64.0704281403140.12122@schroedinger.engr.sgi.com>
Date:	Sat, 28 Apr 2007 14:12:28 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	arjan@...ux.intel.com
cc:	akpm@...l.org, linux-kernel@...r.kernel
Subject: Please revert [PATCH] user of the jiffies rounding patch: Slab

The slab reaper takes global locks. If one makes all cache reapers fire at
the same time, as this patch does, then there will be a lot of contention,
which may result in lots of interrupt holdoffs since some locks are taken
with interrupts disabled. The vm statistics counters are also updated and
will contend for global cachelines if this is done.

The approach is fine up to 2 cpus. With 2 cpus we can schedule one cache
reaper on each cpu each second, so I guess that is why this was not noticed.

For 16 cpus we are already scheduling 8 parallel cache reapers every
second (each cpu's reaper fires every 2 seconds, so whole-second rounding
lines up half of them in any given second). In a 512 cpu system disaster
strikes, with 256 cache reapers being fired off at the same time each
second.

Git commit:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2b2842146cb4105877c2be51d3857ec61ebd4ff9

This is in 2.6.20 / 2.6.21.
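
If I am quoting the hunk correctly from memory, the change in
start_cpu_timer() in mm/slab.c replaced the old per-cpu stagger with a
plain rounded delay, roughly:

-	schedule_delayed_work_on(cpu, reap_work, HZ + 3 * cpu);
+	schedule_delayed_work_on(cpu, reap_work,
+				round_jiffies_relative(HZ));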

I'd suggest using a staggered per-cpu approach instead. It is fine to run
multiple per-cpu timers at once, but every batch of these timers must run
at an offset from the other batches. Not all at the same time, please.
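
A minimal sketch of what I have in mind, reusing the existing names from
mm/slab.c (start_cpu_timer(), cache_reap(), reap_work) and the per-cpu
variant of the rounding helper, which takes a cpu argument and skews each
cpu by a few jiffies. This is an illustration, not a tested patch:

static void start_cpu_timer(int cpu)
{
	struct delayed_work *reap_work = &per_cpu(reap_work, cpu);

	if (keventd_up() && reap_work->work.func == NULL) {
		init_reap_node(cpu);
		INIT_DELAYED_WORK(reap_work, cache_reap);
		/*
		 * Still round to a whole second so timers batch per cpu,
		 * but let __round_jiffies_relative() add its per-cpu skew
		 * so the reapers do not all fire in the same jiffy across
		 * all cpus.
		 */
		schedule_delayed_work_on(cpu, reap_work,
				__round_jiffies_relative(HZ, cpu));
	}
}

Note the skew in __round_jiffies_relative() is only a few jiffies per
cpu, so very large machines may want a bigger stride; the point is only
that the firing times must not collapse onto the same jiffy everywhere.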

