Message-ID: <20090119234059.GA452@elte.hu>
Date: Tue, 20 Jan 2009 00:40:59 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Steven Rostedt <srostedt@...hat.com>, Mike Travis <travis@....com>,
Rusty Russell <rusty@...tcorp.com.au>
Cc: Chris Mason <chris.mason@...cle.com>,
"Ma, Chinang" <chinang.ma@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <matthew@....cx>,
"Wilcox, Matthew R" <matthew.r.wilcox@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Tripathi, Sharad C" <sharad.c.tripathi@...el.com>,
"arjan@...ux.intel.com" <arjan@...ux.intel.com>,
"Kleen, Andi" <andi.kleen@...el.com>,
"Siddha, Suresh B" <suresh.b.siddha@...el.com>,
"Chilukuri, Harita" <harita.chilukuri@...el.com>,
"Styner, Douglas W" <douglas.w.styner@...el.com>,
"Wang, Peter Xihong" <peter.xihong.wang@...el.com>,
"Nueckel, Hubert" <hubert.nueckel@...el.com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
Andrew Vasquez <andrew.vasquez@...gic.com>,
Anirban Chakraborty <anirban.chakraborty@...gic.com>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Gregory Haskins <ghaskins@...ell.com>,
Rusty Russell <rusty@...tcorp.com.au>
Subject: Re: Mainline kernel OLTP performance update
* Steven Rostedt <srostedt@...hat.com> wrote:
> (added Rusty)
>
> On Mon, 2009-01-19 at 13:04 -0500, Chris Mason wrote:
> > On Thu, 2009-01-15 at 00:11 -0700, Ma, Chinang wrote:
> > > >> > > > >
> > > >> > > > > Linux OLTP Performance summary
> > > >> > > > > Kernel#      Speedup(x)  Intr/s  CtxSw/s  us%  sys%  idle%  iowait%
> > > >> > > > > 2.6.24.2     1.000       21969   43425    76   24    0      0
> > > >> > > > > 2.6.27.2     0.973       30402   43523    74   25    0      1
> > > >> > > > > 2.6.29-rc1   0.965       30331   41970    74   26    0      0
> > > >> >
> > > >> > > But the interrupt rate went through the roof.
> > > >> >
> > > >> > Yes. I forget why that was; I'll have to dig through my archives for
> > > >> > that.
> > > >>
> > > >> Oh. I'd have thought that this alone could account for 3.5%.
> >
> > A later email indicated the reschedule interrupt count doubled since
> > 2.6.24, and so I poked around a bit at the causes of resched_task.
> >
> > I think the -rt version of check_preempt_equal_prio has gotten much more
> > expensive since 2.6.24.
> >
> > I'm sure these changes were made for good reasons, and this workload may
> > not be a good reason to change it back. But, what does the patch below
> > do to performance on 2.6.29-rcX?
> >
> > -chris
> >
> > diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
> > index 954e1a8..bbe3492 100644
> > --- a/kernel/sched_rt.c
> > +++ b/kernel/sched_rt.c
> > @@ -842,6 +842,7 @@ static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int sync
> >  		resched_task(rq->curr);
> >  		return;
> >  	}
> > +	return;
> >
> >  #ifdef CONFIG_SMP
> >  	/*
>
> That should not cause much of a problem if the scheduling task is not
> pinned to a CPU. But!!!!!
>
> A recent change makes it expensive:
>
> commit 24600ce89a819a8f2fb4fd69fd777218a82ade20
> Author: Rusty Russell <rusty@...tcorp.com.au>
> Date: Tue Nov 25 02:35:13 2008 +1030
>
> sched: convert check_preempt_equal_prio to cpumask_var_t.
>
> Impact: stack reduction for large NR_CPUS
>
>
>
> which has:
>
> static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
> {
> -	cpumask_t mask;
> +	cpumask_var_t mask;
>
> 	if (rq->curr->rt.nr_cpus_allowed == 1)
> 		return;
>
> -	if (p->rt.nr_cpus_allowed != 1
> -	    && cpupri_find(&rq->rd->cpupri, p, &mask))
> +	if (!alloc_cpumask_var(&mask, GFP_ATOMIC))
> 		return;
>
> check_preempt_equal_prio is in a scheduling hot path!!!!!
>
> WTF are we allocating there for?
Agreed - this needs to be fixed. Since this runs under the runqueue lock,
we can keep a temporary cpumask in the runqueue itself rather than on the stack.
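
Something along the lines of the sketch below, perhaps (untested; the
field name "scratch_mask" is made up here for illustration, and a real
patch would have to pick a proper init point and handle allocation
failure):

	/*
	 * In struct rq (kernel/sched.c): a preallocated scratch mask,
	 * serialized by rq->lock, so the hot path never allocates.
	 */
	struct rq {
		...
		cpumask_var_t	scratch_mask;	/* for check_preempt_equal_prio() */
		...
	};

	/* allocated once per runqueue at boot, e.g. from sched_init(): */
	for_each_possible_cpu(i)
		if (!alloc_cpumask_var(&cpu_rq(i)->scratch_mask, GFP_KERNEL))
			BUG();	/* OOM this early in boot is fatal anyway */

	/* the hot path then does no allocation at all: */
	static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
	{
		if (rq->curr->rt.nr_cpus_allowed == 1)
			return;

		if (p->rt.nr_cpus_allowed != 1 &&
		    cpupri_find(&rq->rd->cpupri, p, rq->scratch_mask))
			return;

		/* ... rest unchanged: resched_task(rq->curr), etc. ... */
	}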
Ingo