Date:	Fri, 15 Nov 2013 08:47:57 +0800
From:	Bob Liu <bob.liu@...cle.com>
To:	Seth Jennings <sjennings@...iantweb.net>
CC:	Hugh Dickins <hughd@...gle.com>, Minchan Kim <minchan@...nel.org>,
	Greg KH <gregkh@...uxfoundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Jens Axboe <axboe@...nel.dk>, Nitin Gupta <ngupta@...are.org>,
	Seth Jennings <sjenning@...ux.vnet.ibm.com>, lliubbo@...il.com,
	jmarchan@...hat.com, mgorman@...e.de, riel@...hat.com,
	linux-mm@...ck.org, linux-kernel <linux-kernel@...r.kernel.org>,
	Luigi Semenzato <semenzato@...gle.com>
Subject: Re: [PATCH] staging: zsmalloc: Ensure handle is never 0 on success


On 11/15/2013 12:21 AM, Seth Jennings wrote:
> On Wed, Nov 13, 2013 at 08:00:34PM -0800, Hugh Dickins wrote:
>> On Wed, 13 Nov 2013, Minchan Kim wrote:
>> ...
>>>
>>> Hello Andrew,
>>>
>>> I'd like to hear your opinion.
>>>
>>> The effort to promote zram out of staging started back in Aug 2012, and in
>>> this thread I have already received many Acked-by/Reviewed-by tags
>>> (e.g. Jens Axboe[1], Konrad Rzeszutek Wilk[2], Nitin Gupta[3], Pekka Enberg[4])
>>> as well as positive feedback from Rik and Bob.
>>> At LinuxCon, Hugh gave positive feedback about zram (Hugh, if I misunderstood,
>>> please correct me!). And there are already lots of users in the embedded
>>> industry, e.g. most TVs in the world, Chromebooks, CyanogenMod, Android KitKat.
>>> They are not idiots; zram is really effective in the embedded world.
>>
>> Sorry for taking so long to respond, Minchan: no, you do not misrepresent
>> me at all.  Promotion of zram and zsmalloc from staging is way overdue:
>> they long ago proved their worth, look tidy, and have an active maintainer.
>>
>> Putting them into drivers/staging was always a mistake, and I quite
>> understand Greg's impatience with them by now; but please let's move
>> them to where they belong instead of removing them.
>>
>> I would not have lent support to zswap if I'd thought that was going to
>> block zram.  And I was not the only one surprised when zswap replaced its
>> use of zsmalloc by zbud: we had rather expected a zbud option to be added,
>> and I still assume that zsmalloc support will be added back to zswap later.
> 
> Yes, it is still the plan to reintroduce zsmalloc as an option (possibly
> _the_ option) for zswap.
> 
> An idea being tossed around is making zswap writethrough instead of
> delayed writeback.
>
> Doing this would mean that zswap would no longer reduce swap-out
> traffic, but it would continue to reduce swap-in latency by reading out of
> the compressed cache instead of the swap device.
> 
> For that loss, we gain a benefit: the compressed pages in the cache are
> clean, meaning we can reclaim them at any time with no writeback
> cost.  This addresses Mel's initial concern (the one that led to zswap
> moving to zbud) about writeback latency when the zswap pool is full.
> 

Agree!
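
To spell out what writethrough would look like for anyone following
along, here is a rough sketch; the helper names are made up for
illustration and are not the current zswap entry points:

#include <linux/mm.h>
#include <linux/swap.h>

/* Writethrough store path: the page goes to the swap device *and*
 * into the compressed cache, so the cached copy is always clean. */
static int zswap_writethrough_store(struct page *page, swp_entry_t entry)
{
	int ret;

	/* Compress and insert into the pool (hypothetical helper). */
	ret = zswap_cache_insert(entry, page);
	if (ret)
		return ret;	/* pool full or error: fall back to plain swap */

	/* Still issue the normal swap-out I/O (hypothetical helper) so the
	 * on-disk copy is up to date and the cached copy never needs
	 * writeback. */
	return swap_out_to_device(page, entry);
}

/* Reclaim under memory pressure is then trivial: every cached entry is
 * clean, so shrinking the pool is just freeing compressed objects, with
 * no writeback I/O at all. */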

> If there is no writeback cost for reclaiming space in the compressed
> pool, then we can use higher density packing like zsmalloc.
> 

But zsmalloc packs several 0-order pages together into a zspage, which
makes it hard to reclaim a single 0-order page directly from the pool.
That matters especially if we want the zswap pool to be dynamically
managed in the future.
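
To picture the problem, here is a toy model (not zsmalloc's real data
structures):

/* Toy model only, not the actual zsmalloc layout. */
struct toy_zspage {
	struct page *pages[4];	/* several chained 0-order pages */
	unsigned int inuse;	/* live objects in this zspage */
};

/* Objects are packed back to back across the chained pages, so an
 * object can start near the end of pages[1] and continue at the top of
 * pages[2].  Freeing pages[2] alone would mean migrating every object
 * that touches it and fixing up the handles that point at them, so
 * there is no cheap "give one page back" operation the way there is
 * with zbud, where each pool page is an independent 0-order page. */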

> Making zswap writethrough would also make the difference between zswap
> and zram, both in terms of operation and application, more apparent,
> demonstrating the need for both.
> 

-- 
Regards,
-Bob
