Message-ID: <20120709140622.GA26595@localhost>
Date: Mon, 9 Jul 2012 22:06:22 +0800
From: Fengguang Wu <fengguang.wu@...el.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: David Rientjes <rientjes@...gle.com>,
Pekka Enberg <penberg@...nel.org>,
JoonSoo Kim <js1304@...il.com>,
Vegard Nossum <vegard.nossum@...il.com>,
Christoph Lameter <cl@...ux.com>, Rus <rus@...nxsoft.com>,
Ben Hutchings <ben@...adent.org.uk>, stable@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [patch] mm, slub: ensure irqs are enabled for kmemcheck

On Mon, Jul 09, 2012 at 09:46:33AM -0400, Steven Rostedt wrote:
> On Mon, 2012-07-09 at 03:36 -0700, David Rientjes wrote:
> > kmemcheck_alloc_shadow() requires irqs to be enabled, so wait to disable
> > them until after it's called for __GFP_WAIT allocations.
> >
> > This fixes a warning for such allocations:
> >
> > WARNING: at kernel/lockdep.c:2739 lockdep_trace_alloc+0x14e/0x1c0()
> >
> > Cc: stable@...r.kernel.org [3.1+]
> > Acked-by: Fengguang Wu <fengguang.wu@...el.com>
> > Tested-by: Fengguang Wu <fengguang.wu@...el.com>
> > Signed-off-by: David Rientjes <rientjes@...gle.com>
> > ---
> > mm/slub.c | 13 ++++++-------
> > 1 file changed, 6 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1314,13 +1314,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> > stat(s, ORDER_FALLBACK);
> > }
> >
> > - if (flags & __GFP_WAIT)
> > - local_irq_disable();
> > -
> > - if (!page)
> > - return NULL;
> > -
> > - if (kmemcheck_enabled
> > + if (page && kmemcheck_enabled
>
> One micro-optimization nit...
>
> If kmemcheck_enabled is mostly false, and page is mostly true, wouldn't
> it be better to swap the two?
>
> if (kmemcheck_enabled && page
>
> Then the first check would just short-circuit out and we don't do the
> double check.
I had the same gut feeling, but at the time I was not as attentive as you ;)
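
To make the point concrete, here is a toy sketch (placeholder names,
not the actual slub code) of why the operand order matters when one
test is cheap and mostly false:

	#include <stdbool.h>

	struct page;

	static bool kmemcheck_enabled;	/* mostly false */

	static void kmemcheck_mark(struct page *page)
	{
		(void)page;	/* shadow bookkeeping elided */
	}

	static void maybe_mark(struct page *page)
	{
		/*
		 * && evaluates left to right and short-circuits: when
		 * kmemcheck_enabled is false (the common case), "page"
		 * is never tested at all, so the hot path is one load
		 * and one branch.
		 */
		if (kmemcheck_enabled && page)
			kmemcheck_mark(page);
	}
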
Now I can dig out a similar optimization from Andrew Morton, which
also saves some bytes of .text:
On Tue, Jun 19, 2012 at 03:00:14PM -0700, Andrew Morton wrote:
: With my gcc and CONFIG_CGROUP_MEM_RES_CTLR=n (for gawd's sake can we
: please rename this to CONFIG_MEMCG?), this:
:
: --- a/mm/vmscan.c~memcg-prevent-from-oom-with-too-many-dirty-pages-fix
: +++ a/mm/vmscan.c
: @@ -726,8 +726,8 @@ static unsigned long shrink_page_list(st
: * writeback from reclaim and there is nothing else to
: * reclaim.
: */
: - if (PageReclaim(page)
: - && may_enter_fs && !global_reclaim(sc))
: + if (!global_reclaim(sc) && PageReclaim(page) &&
: + may_enter_fs)
: wait_on_page_writeback(page);
: else {
: nr_writeback++;
:
:
: reduces vmscan.o's .text by 48 bytes(!), because the compiler can
: avoid generating any code for PageReclaim() and perhaps the
: may_enter_fs test, since global_reclaim() evaluates to constant
: true.  Do you think that's an improvement?
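
The mechanism, reduced to a toy (again with placeholder names, not
the real vmscan code): the constant-folded test comes first, so the
compiler can kill the whole condition at compile time:

	#include <stdbool.h>

	struct page;

	static bool page_reclaim_stub(struct page *page)
	{
		return page != 0;	/* stand-in for PageReclaim() */
	}

	static void wait_stub(struct page *page)
	{
		(void)page;	/* stand-in for wait_on_page_writeback() */
	}

	/*
	 * Like global_reclaim() with CONFIG_CGROUP_MEM_RES_CTLR=n:
	 * folds to constant true at compile time.
	 */
	static inline bool global_reclaim_stub(void)
	{
		return true;
	}

	static unsigned long nr_writeback;

	static void handle(struct page *page, bool may_enter_fs)
	{
		/*
		 * !global_reclaim_stub() is constant false, so the
		 * compiler can drop the page_reclaim_stub() and
		 * may_enter_fs tests and the wait_stub() call,
		 * leaving only the increment.
		 */
		if (!global_reclaim_stub() && page_reclaim_stub(page) &&
		    may_enter_fs)
			wait_stub(page);
		else
			nr_writeback++;
	}
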
Thanks,
Fengguang