Message-ID: <20080623130737.489498dc@infradead.org>
Date: Mon, 23 Jun 2008 13:07:37 -0700
From: Arjan van de Ven <arjan@...radead.org>
To: Darren Hart <dvhltc@...ibm.com>
Cc: paulmck@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
mingo@...e.hu, josh@...edesktop.org, niv@...ibm.com,
dino@...ibm.com, akpm@...ux-foundation.org,
torvalds@...ux-foundation.org, vegard.nossum@...il.com,
adobriyan@...il.com, oleg@...sign.ru, bunk@...nel.org, rjw@...k.pl
Subject: Re: [PATCH -tip-rcu] Make rcutorture more vicious: make quiescent
rcutorture less power-hungry

On Mon, 23 Jun 2008 20:02:54 +0000
Darren Hart <dvhltc@...ibm.com> wrote:
> On Mon, 2008-06-23 at 11:07 -0700, Arjan van de Ven wrote:
> > On Mon, 23 Jun 2008 17:54:09 +0000
> > Darren Hart <dvhltc@...ibm.com> wrote:
> >
> > > I'm a little concerned about how this will affect real-time
> > > performance, as queueing up lots of timers all at once can lead to
> > > long-running timer expiration handlers.  If it's just a
> > > schedule_timeout, I suppose we're only looking at a process wakeup,
> > > as opposed to a softirq-context callback function?
> >
> > In reality, the time it takes to deliver the interrupt (including
> > waking the CPU up, etc.) is likely to be an order of magnitude or two
> > higher than this kind of code loop....
>
> Sure, if we just look at one of them. Any idea how many such items
> we're looking at rounding up to fire at the same time? Is it dozens,
> hundreds, thousands?
>
So far, in practice, it's still a single-digit count; there just aren't
that many timers running under normal conditions (based on powertop data
from many systems).
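
To illustrate the distinction Darren raised above, here is a minimal
sketch of the two wakeup paths, using the stock helpers
(schedule_timeout_interruptible(), setup_timer()/mod_timer() and the
round_jiffies() family).  The wait_about_a_second/my_timer/arm_my_timer
names are made up for illustration and are not taken from the rcutorture
patch under discussion.

#include <linux/timer.h>
#include <linux/jiffies.h>
#include <linux/sched.h>

/* Path 1: a sleeping thread.  The timer expiry only has to wake the
 * task up; the actual work then runs in process context. */
static void wait_about_a_second(void)
{
	/* round_jiffies_relative() nudges the expiry onto a whole-second
	 * boundary so unrelated timers can fire on the same tick and the
	 * CPU wakes up once instead of N times. */
	schedule_timeout_interruptible(round_jiffies_relative(HZ));
}

/* Path 2: a classic kernel timer.  The callback runs from the timer
 * softirq, so it must not sleep and should stay short. */
static struct timer_list my_timer;

static void my_timer_fn(unsigned long data)
{
	/* a small, non-sleeping amount of work goes here */
}

static void arm_my_timer(void)
{
	setup_timer(&my_timer, my_timer_fn, 0);
	mod_timer(&my_timer, round_jiffies(jiffies + HZ));
}

If the rcutorture change only rounds a schedule_timeout-style sleep, it
is closer to path 1; the softirq path is shown purely for contrast with
the "callback function" case Darren mentioned.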
--
If you want to reach me at my work email, use arjan@...ux.intel.com
For development, discussion and tips for power savings,
visit http://www.lesswatts.org