Date:	Wed, 27 Jul 2011 16:28:19 -0700
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Hugh Dickins <hughd@...gle.com>
Cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 2/3] tmpfs radix_tree: locate_item to speed up swapoff

On Tue, 19 Jul 2011 15:54:23 -0700 (PDT)
Hugh Dickins <hughd@...gle.com> wrote:

> We have already acknowledged that swapoff of a tmpfs file is slower
> than it was before conversion to the generic radix_tree: a little
> slower there will be acceptable, if the hotter paths are faster.
> 
> But it was a shock to find swapoff of a 500MB file 20 times slower
> on my laptop, taking 10 minutes; and at that rate it significantly
> slows down my testing.

So it used to take half a minute?  That was already awful.  Why?  Was
it IO-bound?  It doesn't sound like it.

> Now, most of that turned out to be overhead from PROVE_LOCKING and
> PROVE_RCU: without those it was only 4 times slower than before;
> and more realistic tests on other machines don't fare as badly.

What's unrealistic about doing swapoff of a 500MB tmpfs file?

Also, confused.  You're talking about creating a regular file on tmpfs
and then using that as a swapfile?  If so, that's a
kernel-hacker-curiosity only?

> I've tried a number of things to improve it, including tagging the
> swap entries, then doing lookup by tag: I'd expected that to halve
> the time, but in practice it's erratic, and often counter-productive.
> 
> The only change I've so far found to make a consistent improvement
> is to short-circuit the way we go back and forth: gang lookup packing
> entries into the array supplied, then shmem scanning that array for the
> target entry.  Scanning in place doubles the speed, so it's now only
> twice as slow as before (or three times slower when the PROVEs are on).
> 
> So, add radix_tree_locate_item() as an expedient, once-off, single-caller
> hack to do the lookup directly in place.  #ifdef it on CONFIG_SHMEM and
> CONFIG_SWAP, as much to document its limited applicability as to save
> space in other configurations.  And, sadly, #include sched.h for
> cond_resched().
> 
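
(A minimal sketch of the "before" shape described above, for readers
skimming the archive: gang lookup copies a batch of entries out of the
tree into a caller-supplied array, and the caller then scans that array
for the target swap entry.  Illustrative only: the function name, the
batch size, and the index bookkeeping are simplified stand-ins, not the
code in mm/shmem.c.)

#include <linux/kernel.h>	/* ARRAY_SIZE() */
#include <linux/radix-tree.h>

static unsigned long locate_by_gang_lookup(struct radix_tree_root *root,
					   void *target)
{
	void *batch[16];		/* copy-out buffer */
	unsigned long index = 0;
	unsigned int i, nr;

	/* Pass 1: copy entries out of the tree in batches... */
	while ((nr = radix_tree_gang_lookup(root, batch, index,
					    ARRAY_SIZE(batch)))) {
		/* Pass 2: ...then scan the batch for the target. */
		for (i = 0; i < nr; i++)
			if (batch[i] == target)
				return index + i;
		index += nr;	/* simplified: entries may be sparse, so
				 * the real code advances past the index
				 * of the last entry actually returned */
	}
	return -1;		/* not found */
}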

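(And a sketch of the expedient hack itself, as described: the helper
guarded on both config symbols, with sched.h pulled in for
cond_resched() so a long walk over a big tree stays preemptible.  The
exact signature here is an assumption drawn from the paragraph above,
not a quote of the patch.)

#include <linux/radix-tree.h>
#include <linux/sched.h>	/* for cond_resched(), as Hugh notes */

#if defined(CONFIG_SHMEM) && defined(CONFIG_SWAP)
/*
 * Walk the tree comparing slots in place, with no copy-out into an
 * intermediate array, and return the index at which item is found
 * (or -1 if it is not present).
 */
unsigned long radix_tree_locate_item(struct radix_tree_root *root,
				     void *item);
#endif
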
How much did that 10 minutes improve?

