Message-ID: <20091008080936.5f3b0e1b@infradead.org>
Date: Thu, 8 Oct 2009 08:09:36 -0700
From: Arjan van de Ven <arjan@...radead.org>
To: Frans Pop <elendil@...net.nl>
Cc: Mike Galbraith <efault@....de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
linux-wireless@...r.kernel.org
Subject: Re: [.32-rc3] scheduler: iwlagn consistently high in "waiting for
CPU"
On Thu, 8 Oct 2009 16:55:36 +0200
Frans Pop <elendil@...net.nl> wrote:
> > It turns out that on x86, these two 'opportunistic' timers only
> > get checked when another "real" timer happens.
> > These opportunistic timers aim to save power by hitchhiking on
> > other wakeups, so as to avoid causing CPU wakeups of their own as
> > much as possible.
>
> This patch makes quite a difference for me. iwlagn and phy0 now
> consistently show at ~10 ms or lower.
most excellent
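
for anyone curious what such an "opportunistic" timer looks like in
practice, here is a minimal sketch of a deferrable kernel timer. It is
written against the current timer_setup()/TIMER_DEFERRABLE API (at the
time of this thread the equivalent was init_timer_deferrable()), and the
timer and function names are illustrative, not taken from any driver in
this thread:

/*
 * Minimal sketch (illustrative names) of a deferrable timer: when the
 * CPU is idle it does not force a wakeup of its own; it fires the next
 * time the CPU wakes up for some other ("real") timer or interrupt.
 */
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list opportunistic_timer;

static void opportunistic_fn(struct timer_list *t)
{
	/* housekeeping that can tolerate running late goes here */

	/*
	 * re-arm roughly one second out, rounded so it can batch with
	 * other timers expiring on the same jiffy
	 */
	mod_timer(&opportunistic_timer,
		  jiffies + round_jiffies_relative(HZ));
}

static void start_opportunistic_timer(void)
{
	timer_setup(&opportunistic_timer, opportunistic_fn,
		    TIMER_DEFERRABLE);
	mod_timer(&opportunistic_timer,
		  jiffies + round_jiffies_relative(HZ));
}

the problem fixed here was that on x86 these deferrable timers only got
checked when another "real" timer happened to fire.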
> I do still get occasional high latencies, but those are for things
> like "[rpc_wait_bit_killable]" or "Writing a page to disk", where I
> guess you'd expect them. Those high latencies are mostly only listed
> for "Global" and don't translate to individual processes.
and they're very different types of latencies, caused by disk and such.
> The ~10 ms I still get for iwlagn and phy0 (and sometimes higher (~30
> ms) for others like Xorg and artsd) is still "Scheduler: waiting for
> cpu". If it is actually due to (un)interruptible sleep, isn't that a
> misleading label? I directly associated that with scheduler latency.
it's actually the time between wakeup and getting to run on a CPU, as
measured by the scheduler statistics.
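
that per-task number is also exported, so you can cross-check the
"waiting for cpu" figure yourself: field 2 of /proc/<pid>/schedstat is
the accumulated runqueue-wait (wakeup-to-running) time in nanoseconds,
assuming schedstats support is enabled in your kernel config. A rough
userspace sketch (illustrative only, not from this thread):

/*
 * Rough sketch: print how long a task has spent runnable but waiting
 * for a CPU, using field 2 of /proc/<pid>/schedstat (nanoseconds).
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64];
	unsigned long long on_cpu_ns, wait_ns, timeslices;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%s/schedstat",
		 argc > 1 ? argv[1] : "self");
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%llu %llu %llu",
		   &on_cpu_ns, &wait_ns, &timeslices) != 3) {
		fprintf(stderr, "unexpected format in %s\n", path);
		fclose(f);
		return 1;
	}
	fclose(f);
	printf("waiting for CPU: %llu ns over %llu timeslices\n",
	       wait_ns, timeslices);
	return 0;
}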
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org