Date:	Mon, 08 Apr 2013 08:32:52 -0400
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Joonsoo Kim <iamjoonsoo.kim@....com>
Cc:	Christoph Lameter <cl@...ux.com>,
	Paul Gortmaker <paul.gortmaker@...driver.com>,
	LKML <linux-kernel@...r.kernel.org>,
	RT <linux-rt-users@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Clark Williams <clark@...hat.com>,
	Pekka Enberg <penberg@...nel.org>
Subject: Re: [RT LATENCY] 249 microsecond latency caused by slub's
 unfreeze_partials() code.

On Tue, 2013-04-02 at 09:42 +0900, Joonsoo Kim wrote:

> > 
> > Signed-off-by: Christoph Lameter <cl@...ux.com>
> > 
> > Index: linux/mm/slub.c
> > ===================================================================
> > --- linux.orig/mm/slub.c	2013-03-28 12:14:26.958358688 -0500
> > +++ linux/mm/slub.c	2013-04-01 10:23:24.677584499 -0500
> > @@ -1498,6 +1498,7 @@ static inline void *acquire_slab(struct
> >  	void *freelist;
> >  	unsigned long counters;
> >  	struct page new;
> > +	unsigned long objects;
> > 
> >  	/*
> >  	 * Zap the freelist and set the frozen bit.
> > @@ -1507,6 +1508,7 @@ static inline void *acquire_slab(struct
> >  	freelist = page->freelist;
> >  	counters = page->counters;
> >  	new.counters = counters;
> > +	objects = page->inuse;
> >  	if (mode) {
> >  		new.inuse = page->objects;
> >  		new.freelist = NULL;
> > @@ -1524,6 +1526,7 @@ static inline void *acquire_slab(struct
> >  		return NULL;
> > 
> >  	remove_partial(n, page);
> > +	page->lru.next = (void *)objects;
> >  	WARN_ON(!freelist);
> >  	return freelist;
> >  }
> 
> Good. I like your method, which uses lru.next to hand over the
> number of objects.

I hate it ;-)

It just seems to be something that's not very robust and can cause hours
of debugging in the future. I mean, there's not even a comment
explaining what is happening. The lru field sits in a union with the
other slub partial-page fields, which is not very obvious. If that
layout or the ordering of operations ever changes, this can easily
break, and there's nothing here that points to why.
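For reference, this is roughly the overlap in question, paraphrased
from that era's struct page in include/linux/mm_types.h (a sketch from
memory, not an exact copy):

	union {
		struct list_head lru;		/* node partial list linkage */
		struct {			/* slub per-cpu partial pages */
			struct page *next;	/* next partial slab */
			int pages;		/* nr of partial slabs left */
			int pobjects;		/* approximate count of objects */
		};
	};

Stashing a count in lru.next only works as long as nothing else touches
the union in that window, and nothing in the code says so.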

Just pass the damn objects pointer by reference and use that. It's easy
to understand and read, and it's robust.
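Concretely, that might look something like the sketch below. The
objects parameter and the exact computation are my illustration of the
suggestion, not code from this thread:

static inline void *acquire_slab(struct kmem_cache *s,
		struct kmem_cache_node *n, struct page *page,
		int mode, int *objects)
{
	void *freelist;
	unsigned long counters;
	struct page new;

	/*
	 * Zap the freelist and set the frozen bit.
	 */
	freelist = page->freelist;
	counters = page->counters;
	new.counters = counters;
	/* Hand back the free-object count as of before the freeze. */
	*objects = new.objects - new.inuse;
	if (mode) {
		new.inuse = page->objects;
		new.freelist = NULL;
	}
	...
	remove_partial(n, page);
	WARN_ON(!freelist);
	return freelist;
}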

-- Steve

> 
> > @@ -1565,7 +1568,7 @@ static void *get_partial_node(struct kme
> >  			c->page = page;
> >  			stat(s, ALLOC_FROM_PARTIAL);
> >  			object = t;
> > -			available =  page->objects - page->inuse;
> > +			available =  page->objects - (unsigned long)page->lru.next;
> >  		} else {
> >  			available = put_cpu_partial(s, page, 0);
> >  			stat(s, CPU_PARTIAL_NODE);
> 


