Message-ID: <b5b5a961-85e5-4ce1-8280-7ca382cb0e0f@default>
Date:	Wed, 11 Jan 2012 09:19:35 -0800 (PST)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	Greg Kroah-Hartman <gregkh@...e.de>
Cc:	Nitin Gupta <ngupta@...are.org>,
	Dan Magenheimer <dan.magenheimer@...cle.com>,
	Brian King <brking@...ux.vnet.ibm.com>,
	Konrad Wilk <konrad.wilk@...cle.com>,
	Dave Hansen <dave@...ux.vnet.ibm.com>, linux-mm@...ck.org,
	devel@...verdev.osuosl.org, linux-kernel@...r.kernel.org
Subject: RE: [PATCH 1/5] staging: zsmalloc: zsmalloc memory allocation library

> From: Seth Jennings [mailto:sjenning@...ux.vnet.ibm.com]
> Subject: [PATCH 1/5] staging: zsmalloc: zsmalloc memory allocation library
> 
> From: Nitin Gupta <ngupta@...are.org>
> 
> This patch creates a new memory allocation library named
> zsmalloc.
> 
> +/*
> + * Allocate a zspage for the given size class
> + */
> +static struct page *alloc_zspage(struct size_class *class, gfp_t flags)
> +{
> +	int i, error;
> +	struct page *first_page = NULL;
> +
> +	/*
> +	 * Allocate individual pages and link them together as:
> +	 * 1. first page->private = first sub-page
> +	 * 2. all sub-pages are linked together using page->lru
> +	 * 3. each sub-page is linked to the first page using page->first_page
> +	 *
> +	 * For each size class, First/Head pages are linked together using
> +	 * page->lru. Also, we set PG_private to identify the first page
> +	 * (i.e. no other sub-page has this flag set) and PG_private_2 to
> +	 * identify the last page.
> +	 */
> +	error = -ENOMEM;
> +	for (i = 0; i < class->zspage_order; i++) {
> +		struct page *page, *prev_page;
> +
> +		page = alloc_page(flags);

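[Editor's note: the quote stops at the allocation call. As a reading aid, the sketch below spells out the linking scheme the quoted comment describes. It is an approximation written for illustration only, not the elided patch code, and it assumes the 3.2-era struct page fields the comment names (lru, private, first_page) plus the PG_private / PG_private_2 flags.]

#include <linux/mm.h>
#include <linux/list.h>

/*
 * Illustrative sketch, not code from the patch: link @nr already
 * allocated pages into one zspage the way the quoted comment describes.
 */
static void link_zspage_sketch(struct page *pages[], int nr)
{
	struct page *first_page = pages[0];
	int i;

	/* PG_private identifies the first (head) page of the zspage */
	SetPagePrivate(first_page);
	set_page_private(first_page, 0);
	INIT_LIST_HEAD(&first_page->lru);

	for (i = 1; i < nr; i++) {
		struct page *page = pages[i];

		INIT_LIST_HEAD(&page->lru);
		/* each sub-page points back at the head page */
		page->first_page = first_page;
		if (i == 1)
			/* head page's ->private holds the first sub-page */
			set_page_private(first_page, (unsigned long)page);
		else
			/* remaining sub-pages chain together on page->lru */
			list_add_tail(&page->lru, &pages[1]->lru);
	}

	/* PG_private_2 identifies the last page of the zspage */
	SetPagePrivate2(pages[nr - 1]);
}
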
Hmmm... I thought we agreed offlist that the new allocator API would
provide for either preloads or callbacks (which may differ per pool)
instead of directly allocating raw pages from the kernel.  The caller
(zcache or ramster or ???) needs some way to enforce a maximum memory
capacity so it can avoid OOMs.

Or am I missing the code that handles that?
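
[Editor's note: to make the preload/callback suggestion concrete, here is a rough sketch of one possible shape for a per-pool page-provider hook. The names (zs_pool_ops, zs_pool_get_page) are invented for illustration; they are not from the posted patch or any agreed-upon interface.]

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical sketch only: the pool creator (zcache, ramster, ...)
 * registers callbacks that hand pages to the allocator, so it can
 * enforce its own capacity limit instead of the allocator calling
 * alloc_page() directly.
 */
struct zs_pool_ops {
	/*
	 * Return a page for this pool, or NULL if the caller's memory
	 * budget is exhausted; the allocation then fails cleanly rather
	 * than pushing the system toward OOM.
	 */
	struct page *(*get_page)(void *private, gfp_t flags);
	void (*put_page)(void *private, struct page *page);
	void *private;
};

static struct page *zs_pool_get_page(struct zs_pool_ops *ops, gfp_t flags)
{
	if (ops && ops->get_page)
		return ops->get_page(ops->private, flags);
	/* no callbacks registered: fall back to the current behaviour */
	return alloc_page(flags);
}

[In such a scheme, alloc_zspage() would call zs_pool_get_page() where it currently calls alloc_page(), and a zcache-supplied get_page() could simply refuse to hand out pages once its configured ceiling is reached.]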

Dan
