Message-ID: <5310D060.1090504@surriel.com>
Date:	Fri, 28 Feb 2014 13:07:28 -0500
From:	Rik van Riel <riel@...riel.com>
To:	Hugh Dickins <hughd@...gle.com>,
	Kelley Nielsen <kelleynnn@...il.com>
CC:	akpm@...ux-foundation.org, gnomes@...rguk.ukuu.org.uk,
	josh@...htriplett.org, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, opw-kernel@...glegroups.com,
	jamieliu@...gle.com, sjenning@...ux.vnet.ibm.com
Subject: Re: [RFC] mm:prototype for the updated swapoff implementation

On 02/27/2014 07:33 PM, Hugh Dickins wrote:
> On Tue, 18 Feb 2014, Kelley Nielsen wrote:
> 
>> The function try_to_unuse() is of quadratic complexity, with a lot of
>> wasted effort. It unuses swap entries one by one, potentially iterating
>> over all the page tables for all the processes in the system for each
>> one.
> 
> You've chosen a good target, and I like the look of what you've done.
> But I'm afraid it will have to get uglier before it's ready, and I'm
> unsure whether your approach will prove to be a clear win or not.

I am more optimistic than you, because I have seen swapoff
on my Nehalem system proceed at under 1MB/s for several hours,
to clear maybe 3-4GB of stuff out of swap :)
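For illustration, here is a toy userspace model of the shape of the
current loop Kelley describes above. This is not the real
mm/swapfile.c code; the constants (NR_MMS, PTES_PER_MM, NR_SLOTS) and
the triple loop are invented, only the overall structure is meant to
match:

/* toy model of today's try_to_unuse(): one full page-table scan per slot */
#include <stdio.h>

#define NR_MMS      100      /* processes in the system (made up)    */
#define PTES_PER_MM 10000    /* ptes walked per process (made up)    */
#define NR_SLOTS    5000     /* in-use swap slots to unuse (made up) */

int main(void)
{
        long long pte_visits = 0;

        for (int slot = 0; slot < NR_SLOTS; slot++)     /* one pass per slot */
                for (int mm = 0; mm < NR_MMS; mm++)     /* ... over every mm */
                        for (int pte = 0; pte < PTES_PER_MM; pte++)
                                pte_visits++;           /* compare pte to slot */

        /* NR_SLOTS * NR_MMS * PTES_PER_MM visits */
        printf("pte visits: %lld\n", pte_visits);
        return 0;
}

With those made-up numbers that is 5 billion pte visits, which is the
kind of blowup behind the 22.5M unuse_pte_range calls quoted below.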

>> This new proposed implementation of try_to_unuse simplifies its
>> complexity to linear. It iterates over the system's mms once, unusing
>> all the affected entries as it walks each set of page tables. It also
>> makes similar changes to shmem_unuse.
>>
>> Improvement
>>
>> swapoff was called on a swap partition containing about 50M of data,
>> and calls to the function unuse_pte_range were counted.
>>
>> Present implementation....about 22.5M calls.
>> Prototype.................about  7.0K calls.
> 
> That's nice, but mostly it's the time spent that matters.
> 
> I should explain why we've left the try_to_unuse() implementation as is
> for so many years: it's a matter of tradeoff between fast cpu and slow
> seeking disk.

> I'll be surprised if your approach does not improve swapoff from SSD
> (and brd and zram and zswap) very significantly; but the case to worry
> about is swapoff from hard disk.  You are changing swapoff to use the
> cpu much more efficiently; but now that you no longer move linearly up
> the swap_map, you are making the disk head seek around very much more.

I suspect proper read-around of the swap area should take care of
IO patterns well enough. The quadratic nature of the current
try_to_unuse search can easily slow things down to comically low
speeds...
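To make the comparison concrete, the same toy model reshaped the way
the prototype works (again not real kernel code; the "roughly every
other pte maps one of our slots" assumption is invented purely for the
example):

/* toy model of the prototype: walk each mm once, unuse entries as we go */
#include <stdio.h>

#define NR_MMS      100      /* same made-up numbers as before */
#define PTES_PER_MM 10000
#define NR_SLOTS    5000

int main(void)
{
        long long pte_visits = 0;
        int slots_left = NR_SLOTS;

        for (int mm = 0; mm < NR_MMS && slots_left; mm++)  /* single pass over mms */
                for (int pte = 0; pte < PTES_PER_MM; pte++) {
                        pte_visits++;
                        /* pretend every other pte maps one of our slots */
                        if (slots_left && (pte % 2 == 0))
                                slots_left--;              /* unuse it right here */
                }

        printf("pte visits: %lld\n", pte_visits);
        return 0;
}

One linear walk over the page tables, bounded by NR_MMS * PTES_PER_MM,
instead of one such walk per swap slot.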

-- 
All rights reversed.