Date:	Tue, 8 May 2012 20:08:29 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Minchan Kim <minchan@...nel.org>
Cc:	Nitin Gupta <ngupta@...are.org>, Pekka Enberg <penberg@...nel.org>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	cl@...ux-foundation.org
Subject: RE: [PATCH 4/4] zsmalloc: align cache line size

> From: Minchan Kim [mailto:minchan@...nel.org]
> Subject: Re: [PATCH 4/4] zsmalloc: align cache line size
> 
> On 05/08/2012 11:00 PM, Dan Magenheimer wrote:
> 
> >> From: Minchan Kim [mailto:minchan@...nel.org]
> >>> zcache can potentially create a lot of pools, so the latter will save
> >>> some memory.
> >>
> >>
> >> Dumb question.
> >> Why should we create a pool per user?
> >> What's the problem if there is only one pool in the system?
> >
> > zcache doesn't use zsmalloc for cleancache pages today, but
> > that's Seth's plan for the future.  If there is a separate
> > pool for each cleancache pool, then when a filesystem is
> > umount'ed it isn't necessary to walk through and delete all
> > pages one-by-one, which could take quite a while.
> 
> > ramster needs one pool for each client (i.e., each machine in
> > the cluster) for frontswap pages for the same reason, and
> > later, for cleancache pages, one per mounted filesystem
> > per client.
> 
> Fair enough.
> 
> Then, how about these slab-like interfaces?
> 
> 1. zs_handle zs_malloc(size_t size, gfp_t flags) - share one pool among many subsystems (like kmalloc)
> 2. zs_handle zs_malloc_pool(struct zs_pool *pool, size_t size) - use the caller's own pool (like kmem_cache_alloc)
> 
> Any thoughts?

Seems fine to me.
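
Just to double-check I'm reading the proposal right, here's a rough
sketch.  Everything below is hypothetical -- zs_global_pool doesn't
exist, zs_handle follows your prototypes, and I've called the
shared-pool entry point zs_malloc_shared() to avoid colliding with
today's zs_malloc(pool, size):

	/* one system-wide pool, shared kmalloc-style (hypothetical) */
	static struct zs_pool *zs_global_pool;

	/* 2. like kmem_cache_alloc(): the caller supplies its own
	 * pool; this is essentially today's zs_malloc(pool, size) */
	zs_handle zs_malloc_pool(struct zs_pool *pool, size_t size)
	{
		return zs_malloc(pool, size);
	}

	/* 1. like kmalloc(): all callers share zs_global_pool
	 * (flags dropped here for brevity -- they'd presumably be
	 * plumbed through to the page allocator) */
	zs_handle zs_malloc_shared(size_t size, gfp_t flags)
	{
		return zs_malloc_pool(zs_global_pool, size);
	}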

> But some subsystems may not want their own pool, so as not to waste memory.

Are you using zsmalloc for something else in the kernel?  I'm
wondering what other subsystem would have randomly sized allocations
that are always less than a page.
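
For context, the umount-time win I mentioned above boils down to this
pattern (zs_create_pool/zs_destroy_pool are the existing calls,
assuming the current (name, flags) signature; the per-filesystem
cleancache hook-up is still hypothetical):

	/* at mount: give the filesystem its own pool */
	struct zs_pool *pool = zs_create_pool("cleancache", GFP_KERNEL);

	/* at umount: tear down every compressed page in one call,
	 * instead of walking the pages and zs_free()ing one-by-one */
	zs_destroy_pool(pool);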

Thanks,
Dan
