Message-ID: <m2zkaiyzjl.fsf@firstfloor.org>
Date:	Wed, 11 Apr 2012 09:59:58 -0700
From:	Andi Kleen <andi@...stfloor.org>
To:	Ted Ts'o <tytso@....edu>
Cc:	Vivek Haldar <haldar@...gle.com>,
	Andreas Dilger <adilger@...ger.ca>, linux-ext4@...r.kernel.org,
	tim.c.chen@...ux.intel.com, torvalds@...ux-foundation.org
Subject: Re: [RFC, PATCH] Avoid hot statistics cache line in ext4 extent cache

Ted Ts'o <tytso@....edu> writes:

> On Mon, Mar 26, 2012 at 04:00:47PM -0700, Andi Kleen wrote:
>> On 3/26/2012 3:26 PM, Vivek Haldar wrote:
>> >Andi --
>> >
>> >I realized the problem soon after the original patch, and submitted
>> >another patch to make these per-CPU counters.
>> 
>> Is there a clear use case for having these counters on every production system?
>
> Today, with the current single entry extent cache, I don't think
> there's a good justification for it, no.

Ping. This scalability problem is still in 3.4-rc* and causes
major slowdowns.

Can we please fix it or revert
556b27abf73833923d5cd4be80006292e1b31662 before the release?
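
For anyone who hasn't followed the earlier thread: the slowdown comes from
every CPU bumping shared statistics counters on each extent-cache lookup,
so the cache line holding them bounces between cores under parallel I/O.
A rough illustration only -- the struct and field names below are chosen
for the example, not taken from the actual ext4 code:

/* Illustrative sketch, not the actual ext4 structures or identifiers. */
struct example_sb_info {
	unsigned long extent_cache_hits;	/* written by every CPU */
	unsigned long extent_cache_misses;	/* shares the same hot cache line */
};

static int example_extent_lookup_stat(struct example_sb_info *sbi, int hit)
{
	/* Every lookup on every CPU writes this one cache line, so it
	 * ping-pongs between cores -- that's the scalability problem. */
	if (hit)
		sbi->extent_cache_hits++;
	else
		sbi->extent_cache_misses++;
	return hit;
}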

-Andi

(keeping context)

>
> Vivek has worked on a rather more sophisticated extent cache which
> could cache several extent entries (and indeed, combine multiple
> on-disk extent entries into a single in-memory extent).  There are a
> variety of reasons that hasn't gone upstream yet; one of which is that
> there are some interesting questions about how to control memory usage
> of the extent cache; how do we trim it back in the case of memory
> pressure?
>
> One of the other things that we need to consider as we think about
> getting this upstream is the "status" or "delayed" extents patches
> which Allison and Yongqiang were looking at.  Does it make sense to
> have two parallel data structures which are indexed by logical block
> number?  On the one hand, using an in-memory tree structure is pretty
> expensive, just because of all of the 64-bit logical block numbers and
> 64-bit pointers.  On the other hand, would that make things too
> complicated?
>
> Once we start having multiple knobs to adjust, having these counters
> available does make sense.  For now, using a per-cpu counter is
> relatively low cost, except on extreme SGI Altix-like machines with
> hundreds of CPUs, where the memory utilization is something to think
> about.  Given that Vivek has submitted a patch to convert to per-cpu,
> I can see applying it just to fix it; or just removing the stats for
> now until we get the more sophisticated extent cache merged in.
>
>     	     	     	  		       - Ted
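
On Ted's point above about the cost of an in-memory tree indexed by
logical block: a back-of-the-envelope sketch (an illustrative node layout,
not the actual patch) shows why each cached extent is not cheap on x86-64.
Three tree-link pointers are 24 bytes, two 64-bit block numbers plus a
length add another 20, so with padding each entry costs roughly 48 bytes
before any allocator overhead:

#include <stdint.h>

/* Hypothetical node for an rbtree-style extent cache; illustrative only,
 * not the actual ext4 structures. */
struct example_extent_node {
	void *rb_parent, *rb_left, *rb_right;	/* stand-in for struct rb_node: 24 bytes */
	uint64_t lblk;		/* logical block number: 8 bytes */
	uint64_t pblk;		/* physical block number: 8 bytes */
	uint32_t len;		/* length in blocks: 4 bytes (+ padding) */
};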
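
And on the per-cpu counter option Ted mentions: a minimal sketch of what
such a conversion looks like, using the generic per-cpu helpers. The
identifiers are illustrative, not taken from Vivek's patch:

#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/gfp.h>

/* Illustrative per-CPU statistics counter; not the actual patch. */
struct example_stats {
	unsigned long __percpu *hits;	/* one counter per CPU */
};

static int example_stats_init(struct example_stats *s)
{
	s->hits = alloc_percpu(unsigned long);
	return s->hits ? 0 : -ENOMEM;
}

static inline void example_count_hit(struct example_stats *s)
{
	/* Bumps this CPU's copy only: no shared cache line, no atomics,
	 * so the lookup fast path stays cheap. */
	this_cpu_inc(*s->hits);
}

static unsigned long example_stats_sum(struct example_stats *s)
{
	unsigned long sum = 0;
	int cpu;

	/* Reading (e.g. for sysfs) walks all CPUs; slow path only. */
	for_each_possible_cpu(cpu)
		sum += *per_cpu_ptr(s->hits, cpu);
	return sum;
}

The memory cost is sizeof(unsigned long) per possible CPU per counter,
which is where the concern about machines with hundreds of CPUs comes from.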

-- 
ak@...ux.intel.com -- Speaking for myself only
