Message-ID: <alpine.LFD.2.00.0905142224230.3561@localhost.localdomain>
Date: Thu, 14 May 2009 22:40:03 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
cc: LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Dimitri Sivanich <sivanich@....com>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [patch 1/2] sched, timers: move calc_load() to scheduler
On Thu, 14 May 2009, Peter Zijlstra wrote:
> On Thu, 2009-05-14 at 11:21 +0000, Thomas Gleixner wrote:
> > plain text document attachment (move-calc-load-to-scheduler-v1.patch)
>
> > +/*
> > + * calc_load - update the avenrun load estimates 10 ticks after the
> > + * CPUs have updated calc_load_tasks.
> > + */
> > +void calc_global_load(void)
> > +{
> > +	unsigned long upd = calc_load_update + 10;
> > +	long active;
> > +
> > +	if (time_before(jiffies, upd))
> > +		return;
> > +
> > +	active = atomic_long_read(&calc_load_tasks);
> > +	active = active > 0 ? active * FIXED_1 : 0;
> > +
> > +	avenrun[0] = calc_load(avenrun[0], EXP_1, active);
> > +	avenrun[1] = calc_load(avenrun[1], EXP_5, active);
> > +	avenrun[2] = calc_load(avenrun[2], EXP_15, active);
> > +
> > +	calc_load_update += LOAD_FREQ;
> > +}
>
> > @@ -1211,7 +1160,8 @@ static inline void update_times(unsigned
> >  void do_timer(unsigned long ticks)
> >  {
> >  	jiffies_64 += ticks;
> > -	update_times(ticks);
> > +	update_wall_time();
> > +	calc_global_load();
> >  }
>
> I can see multiple cpus fall into calc_global_load() concurrently, which
> would 'age' the load faster than expected.
>
> Should we plug that hole?
They can't. do_timer() is called by exactly one CPU under xtime
lock. What we removed is the loop over all online CPUs to retrieve the
number of active tasks.
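
For anyone who wants to see the decay in isolation, here is a stand-alone
userspace sketch of the fixed-point update that calc_load() applies to
avenrun[] once per LOAD_FREQ interval. The constants mirror the kernel's
historical values (FSHIFT = 11, EXP_1 = 1884, EXP_5 = 2014, EXP_15 = 2037),
but the main() driver is purely illustrative, not the in-tree code. Because
do_timer() runs on exactly one CPU per tick, each interval gets exactly one
such decay step; an extra concurrent caller would apply an additional step
and age the averages too fast, which is the effect Peter was asking about.

#include <stdio.h>

#define FSHIFT    11                /* bits of fixed-point precision */
#define FIXED_1   (1 << FSHIFT)     /* 1.0 in fixed point */
#define EXP_1     1884              /* 1/exp(5sec/1min) in fixed point */
#define EXP_5     2014              /* 1/exp(5sec/5min) */
#define EXP_15    2037              /* 1/exp(5sec/15min) */

/* one decay step: load = load * exp + active * (1 - exp), all fixed point */
static unsigned long
calc_load(unsigned long load, unsigned long exp, unsigned long active)
{
	load *= exp;
	load += active * (FIXED_1 - exp);
	return load >> FSHIFT;
}

int main(void)
{
	unsigned long avenrun[3] = { 0, 0, 0 };
	/* pretend three tasks stay runnable for ten 5-second intervals */
	unsigned long active = 3 * FIXED_1;
	int i;

	for (i = 0; i < 10; i++) {
		avenrun[0] = calc_load(avenrun[0], EXP_1,  active);
		avenrun[1] = calc_load(avenrun[1], EXP_5,  active);
		avenrun[2] = calc_load(avenrun[2], EXP_15, active);
		printf("%2d: %5.2f %5.2f %5.2f\n", i,
		       avenrun[0] / (double)FIXED_1,
		       avenrun[1] / (double)FIXED_1,
		       avenrun[2] / (double)FIXED_1);
	}
	return 0;
}

The 1-minute average climbs toward 3.00 quickly while the 15-minute one
lags, as expected. Run the three updates twice per iteration and the curves
converge noticeably faster, which is why it matters that calc_global_load()
has a single, serialized caller.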
Thanks,
tglx