Message-ID: <4FAC5C87.3060504@kernel.org>
Date:	Fri, 11 May 2012 09:25:43 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Dan Magenheimer <dan.magenheimer@...cle.com>
CC:	Nitin Gupta <ngupta@...are.org>, Pekka Enberg <penberg@...nel.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	cl@...ux-foundation.org
Subject: Re: [PATCH 4/4] zsmalloc: align cache line size

On 05/11/2012 09:03 AM, Dan Magenheimer wrote:

>> From: Minchan Kim [mailto:minchan@...nel.org]
>> Subject: Re: [PATCH 4/4] zsmalloc: align cache line size
>>
>> On 05/08/2012 11:00 PM, Dan Magenheimer wrote:
>>
>>>> From: Minchan Kim [mailto:minchan@...nel.org]
>>>>> zcache can potentially create a lot of pools, so the latter will save
>>>>> some memory.
>>>>
>>>>
>>>> Dumb question.
>>>> Why should we create pool per user?
>>>> What's the problem if there is only one pool in system?
>>>
>>> zcache doesn't use zsmalloc for cleancache pages today, but
>>> that's Seth's plan for the future.  Then if there is a
>>> separate pool for each cleancache pool, when a filesystem
>>> is umount'ed, it isn't necessary to walk through and delete
>>> all pages one-by-one, which could take quite a while.
>>>
>>> ramster needs one pool for each client (i.e. machine in the
>>> cluster) for frontswap pages for the same reason, and
>>> later, for cleancache pages, one per mounted filesystem
>>> per client.
>>
>> Fair enough.
>> But some subsystems may not want a pool of their own, to avoid wasting memory unnecessarily.
>>
>> Then, how about this interfaces like slab?
>>
>> 1. zs_handle zs_malloc(size_t size, gfp_t flags) - share one pool across many subsystems (like kmalloc)
>> 2. zs_handle zs_malloc_pool(struct zs_pool *pool, size_t size) - use a caller-owned pool (like kmem_cache_alloc)
>>
>> Any thoughts?
> 
> I don't have any objections to adding this kind of
> capability to zsmalloc.  But since we are just speculating
> that this capability would be used by some future
> kernel subsystem, isn't it normal kernel protocol for
> this new capability NOT to be added until that future
> kernel subsystem creates a need for it?


Right now zram creates a pool per block device, and an embedded system may use
zram for several block devices, e.g. a swap device plus several compressed
tmpfs mounts. In such a case a shared pool is better than private pools,
because those filesystems are rarely mounted/umounted after boot, so the
fast-teardown benefit of a private pool hardly matters there.
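
To make the interface I proposed above concrete, here is a rough sketch of how
the two entry points and their callers could look. The zs_global_pool name and
the way gfp flags are carried are only my assumptions for illustration,
assuming the existing zs_create_pool()/zs_destroy_pool() calls stay as they are:

/*
 * Sketch only -- not an implementation.  A single system-wide pool is
 * assumed for the kmalloc-style path; how per-call gfp flags interact
 * with it is an open question (today the gfp mask is fixed when the
 * pool is created).
 */
static struct zs_pool *zs_global_pool;

/* 1. kmalloc-style: allocate from the shared pool */
zs_handle zs_malloc(size_t size, gfp_t flags)
{
	return zs_malloc_pool(zs_global_pool, size);
}

/* 2. kmem_cache_alloc-style: allocate from a caller-owned pool */
zs_handle zs_malloc_pool(struct zs_pool *pool, size_t size);

Callers would then look like:

	/* zram / embedded case: just use the shared pool */
	zs_handle h = zs_malloc(obj_len, GFP_NOIO);

	/* zcache/ramster case: one pool per cleancache pool or client,
	 * so that umount can tear everything down at once */
	struct zs_pool *pool = zs_create_pool("zcache", GFP_NOFS);
	zs_handle h2 = zs_malloc_pool(pool, obj_len);
	/* ... */
	zs_destroy_pool(pool);	/* frees the whole pool without walking pages */

The private-pool path keeps the fast-umount property you described for zcache
and ramster, while shared-pool users like zram avoid carrying one pool per
device.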

> 
> As I said in reply to the other thread, there is missing
> functionality in zsmalloc that is making it difficult for
> it to be used by zcache.  It would be good if Seth
> and Nitin (and any other kernel developers) would work


So if you guys post a TODO list, it would help set the direction.

> on those issues before adding capabilities for non-existent
> future users of zsmalloc.


I think it's less urgent than the zs_handle mess.

> 
> Again, that's just my opinion.

> Dan
> 



-- 
Kind regards,
Minchan Kim
