lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <Pine.LNX.4.64.0705140847330.10442@schroedinger.engr.sgi.com>
Date:	Mon, 14 May 2007 08:49:26 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Thomas Graf <tgraf@...g.ch>,
	David Miller <davem@...emloft.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Daniel Phillips <phillips@...gle.com>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Matt Mackall <mpm@...enic.com>
Subject: Re: [PATCH 3/5] mm: slub allocation fairness

On Mon, 14 May 2007, Peter Zijlstra wrote:

> Index: linux-2.6-git/include/linux/slub_def.h
> ===================================================================
> --- linux-2.6-git.orig/include/linux/slub_def.h
> +++ linux-2.6-git/include/linux/slub_def.h
> @@ -52,6 +52,7 @@ struct kmem_cache {
>  	struct kmem_cache_node *node[MAX_NUMNODES];
>  #endif
>  	struct page *cpu_slab[NR_CPUS];
> +	int rank;
>  };

Ranks as part of the kmem_cache structure? I thought this is a temporary 
thing?

>   * Lock order:
> @@ -961,6 +962,8 @@ static struct page *allocate_slab(struct
>  	if (!page)
>  		return NULL;
>  
> +	s->rank = page->index;
> +

Argh.... Setting a cache structure field from a page struct field? What 
about concurrency?

> @@ -1371,8 +1376,12 @@ static void *__slab_alloc(struct kmem_ca
>  		gfp_t gfpflags, int node, void *addr, struct page *page)
>  {
>  	void **object;
> -	int cpu = smp_processor_id();
> +	int cpu;
> +
> +	if (page == FORCE_PAGE)
> +		goto force_new;
>  
> +	cpu = smp_processor_id();
>  	if (!page)
>  		goto new_slab;
>  
> @@ -1405,6 +1414,7 @@ have_slab:
>  		goto load_freelist;
>  	}
>  
> +force_new:
>  	page = new_slab(s, gfpflags, node);
>  	if (page) {
>  		cpu = smp_processor_id();
> @@ -1465,15 +1475,22 @@ static void __always_inline *slab_alloc(
>  	struct page *page;
>  	void **object;
>  	unsigned long flags;
> +	int rank = slab_alloc_rank(gfpflags);
>  
>  	local_irq_save(flags);
> +	if (slab_insufficient_rank(s, rank)) {
> +		page = FORCE_PAGE;
> +		goto force_alloc;
> +	}
> +
>  	page = s->cpu_slab[smp_processor_id()];
>  	if (unlikely(!page || !page->lockless_freelist ||
> -			(node != -1 && page_to_nid(page) != node)))
> +			(node != -1 && page_to_nid(page) != node))) {
>  
> +force_alloc:
>  		object = __slab_alloc(s, gfpflags, node, addr, page);
>  
> -	else {
> +	} else {
>  		object = page->lockless_freelist;
>  		page->lockless_freelist = object[page->offset];
>  	}

This is the hot path. No modifications please.


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ