Message-ID: <20070224043046.GV21484@holomorphy.com>
Date:	Fri, 23 Feb 2007 20:30:46 -0800
From:	William Lee Irwin III <wli@...omorphy.com>
To:	Christoph Lameter <clameter@...r.sgi.com>
Cc:	Pekka J Enberg <penberg@...helsinki.fi>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC/PATCH] slab: free pages in a batch in drain_freelist

On Thu, 22 Feb 2007, Pekka J Enberg wrote:
>> As suggested by William, free the actual pages in a batch so that we
>> don't keep pounding on l3->list_lock.

On Thu, Feb 22, 2007 at 03:01:30PM -0800, Christoph Lameter wrote:
> This means holding the l3->list_lock for a prolonged time period. The 
> existing code was done this way in order to make sure that the interrupt 
> holdoffs are minimal.
> There is no pounding. The cacheline with the list_lock is typically held 
> until the draining is complete. While we drain the freelist we need to be 
> able to respond to interrupts.

I had in mind something more like a list_splice_init() operation under
the lock, since it empties the entire list except in the case of
cache_reap(). For cache_reap(), not much could be done unless the slabs
were organized into batches of (l3->free_limit+5*searchp->num-1)/(5*searchp->num),
e.g. a list of lists of that length, which would need to be reorganized
whenever ->batchcount is tuned.
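
For concreteness, a minimal sketch of the splice-under-lock pattern,
against 2.6.20-era mm/slab.c types (drain_freelist_all() is a
hypothetical helper covering only the drain-everything callers, and the
free_objects accounting is simplified):

	static int drain_freelist_all(struct kmem_cache *cache,
				      struct kmem_list3 *l3)
	{
		LIST_HEAD(batch);
		struct slab *slabp, *tmp;
		int nr_freed = 0;

		/* Detach the entire freelist in O(1) under the lock. */
		spin_lock_irq(&l3->list_lock);
		list_splice_init(&l3->slabs_free, &batch);
		spin_unlock_irq(&l3->list_lock);

		/* Return the pages to the page allocator unlocked. */
		list_for_each_entry_safe(slabp, tmp, &batch, list) {
			list_del(&slabp->list);
			slab_destroy(cache, slabp);
			nr_freed++;
		}

		/* Fix up the accounting after the fact. */
		spin_lock_irq(&l3->list_lock);
		l3->free_objects -= nr_freed * cache->num;
		spin_unlock_irq(&l3->list_lock);

		return nr_freed;
	}

The lock is held only for pointer swaps, so the interrupt holdoff stays
bounded no matter how many slabs are freed.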

It's not terribly meaningful, since only grand reorganizations that are
presumed to stop the world actually get "sped up" without the additional
effort required to improve cache_reap(). My commentary was more about
the data structures being incapable of bulk movement operations
analogous to list_splice() than about claiming that drain_freelist() in
particular should be optimized. Allowing larger batches to move without
increased hold time in transfer_objects(), for example, is clearly a
more meaningful goal.
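
As an illustration of the data-structure point (entirely hypothetical;
the real array_cache is a flat pointer array, so transfer_objects()
has to memcpy() the pointers): a chunked per-CPU cache could hand off
a whole chunk by pointer manipulation, keeping hold time constant
regardless of batch size.

	#define CHUNK_OBJS 64

	struct obj_chunk {
		struct list_head list;
		unsigned int avail;	/* objects stored in objs[] */
		void *objs[CHUNK_OBJS];
	};

	struct chunked_cache {
		spinlock_t lock;
		struct list_head chunks;
	};

	/*
	 * Move one whole chunk from @from to @to. Hold time is O(1)
	 * regardless of CHUNK_OBJS, unlike copying nr pointers.
	 */
	static int transfer_chunk(struct chunked_cache *to,
				  struct chunked_cache *from)
	{
		struct obj_chunk *chunk;

		spin_lock(&from->lock);
		if (list_empty(&from->chunks)) {
			spin_unlock(&from->lock);
			return 0;
		}
		chunk = list_entry(from->chunks.next, struct obj_chunk, list);
		list_del(&chunk->list);
		spin_unlock(&from->lock);

		spin_lock(&to->lock);
		list_add(&chunk->list, &to->chunks);
		spin_unlock(&to->lock);

		return chunk->avail;
	}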

Furthermore, the patch as written merely increases hold time in
exchange for a decreased arrival rate, resulting in no net improvement.
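
To spell out the arithmetic (a back-of-the-envelope model, not a
measurement): if acquisitions arrive at rate lambda with mean hold time
s, the lock's utilization is roughly

	U = lambda * s

Batching by a factor of k gives arrival rate lambda/k and hold time
k*s, so U = (lambda/k) * (k*s) is unchanged, while the worst-case
interrupt holdoff grows by the factor k. Only reducing the total work
done under the lock, e.g. by splicing, actually helps.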


-- wli