Message-ID: <CAKohpon_kUMQxtgTipcGm8qy35_C7_Y87KV1ohbexbeqwYSy5A@mail.gmail.com>
Date: Tue, 27 Nov 2012 11:55:19 +0530
From: Viresh Kumar <viresh.kumar@...aro.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: pjt@...gle.com, paul.mckenney@...aro.org, tglx@...utronix.de,
tj@...nel.org, suresh.b.siddha@...el.com, venki@...gle.com,
mingo@...hat.com, peterz@...radead.org, Arvind.Chauhan@....com,
linaro-dev@...ts.linaro.org, patches@...aro.org,
pdsw-power-team@....com, linux-kernel@...r.kernel.org,
linux-rt-users@...r.kernel.org
Subject: Re: [PATCH V2 Resend 0/4] Create sched_select_cpu() and use it for
workqueues and timers
Hi Steven,
Thanks for sharing your opinion. :)
As this turned out to be a long thread of discussion (thanks, Paul), I will
try to answer everything here.
On 26 November 2012 22:10, Steven Rostedt <rostedt@...dmis.org> wrote:
> This is a really bad time of year to post new patches :-/
> A lot of people are trying to get their own work done by year end and
> then there's holidays and such that are also distractions. Not to
> mention that a new merge window will be opening soon.
The patches have been around since the end of September; this was just a
ping (admittedly with bad timing, right around the merge window).
> As workqueues are set off by the CPU that queued it, what real benefit
> does this give? A CPU was active when it queued the work and the work
> should be done before it gets back to sleep.
>
> OK, an interrupt happens on an idle CPU and queues some work. That work
> should execute before the CPU gets back to sleep, right? I fail to see
> the benefit of trying to move that work elsewhere. The CPU had to wake
> up to execute the interrupt. It's no longer in a deep sleep (or any
> sleep for that matter).
>
> To me it seems best to avoid waking up an idle CPU in the first place.
>
> I'm still working off a turkey overdose, so maybe I'm missing something
> obvious.
Ok, here is the story behind these patches. The idea was first discussed
by Vincent at LPC this year:
http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/08/lpc2012-sched-timer-workqueue.pdf
Specifically, slides 12 & 19.
CPU idleness here means idleness from the scheduler's perspective, i.e.
the scheduler considers a CPU idle if all of the following are true (a
sketch of this check follows the list):
- the current task is the idle task
- nr_running == 0
- the wake_list is empty
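For reference, here is a minimal sketch of that check, modeled on the
kernel's idle_cpu() in kernel/sched/core.c; the field names follow
kernel/sched, but treat it as illustrative rather than the exact patch
code:

static int cpu_is_idle_for_sched(int cpu)
{
	struct rq *rq = cpu_rq(cpu);

	/* current task must be the idle task */
	if (rq->curr != rq->idle)
		return 0;

	/* no runnable tasks queued on this CPU */
	if (rq->nr_running)
		return 0;

	/* no tasks pending wakeup on this CPU */
	if (!llist_empty(&rq->wake_list))
		return 0;

	return 1;
}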
There are two use cases for workqueue patch 3/4:
- queue_work() from a timer interrupt which re-arms itself (slide 12):
  Whether the timer is deferrable or not, it will queue the work on the
  current CPU once the CPU wakes up. The CPU could have gone back to the
  idle state immediately if the work were not queued on it. So, because
  this CPU is idle (from the scheduler's point of view), we can move the
  work to another CPU.
- delayed work which re-arms itself (slide 19):
  Again the same thing: we could have kept the CPU in the idle state for
  some more time. A sketch of this pattern follows below.
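To illustrate the pattern, here is a hedged sketch of a self-re-arming
delayed work that targets a CPU picked by the scheduler instead of
always re-queueing on the local one. sched_select_cpu() stands in for
the helper this patchset proposes; its exact signature and the argument
used here are assumptions:

#include <linux/workqueue.h>

static void poll_fn(struct work_struct *work);
static DECLARE_DELAYED_WORK(poll_work, poll_fn);

static void poll_fn(struct work_struct *work)
{
	/* ... do the periodic job ... */

	/*
	 * Plain queue_delayed_work() would re-queue on the local CPU
	 * and could keep waking it up; targeting a scheduler-chosen
	 * CPU lets an idle CPU go back to sleep.
	 */
	queue_delayed_work_on(sched_select_cpu(0), system_wq,
			      &poll_work, HZ);
}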
There might not be many users with this behavior, but a single user can
have a significant impact.
For now, it doesn't take care of the big.LITTLE issues that Paul pointed
out, but yes, that is planned. Some work is going on in that direction too:
http://linux.kernel.narkive.com/mCyvFVUX/rfc-0-6-sched-packing-small-tasks
The very first reason for this patchset was to have a single
preferred_cpu() routine which can be used by all frameworks. Timers are
already the first user; workqueues were meant to be the second.
I tested it more from a functionality point of view than with power
figures :( And I understand that power measurements are very much
required.
Having said that, I believe all the questions raised are about PATCH 3/4
(workqueue), and the other three patches should be fine. Can you share
your opinion on those patches? I will then split this patchset and send
the workqueue part later, after doing some power measurements.
--
viresh