Message-ID: <CAJZ5v0g6fPu5mhUgy9ADb7fo7Q_WngVcADewVY9Pii3R=SMzZg@mail.gmail.com>
Date: Wed, 18 Mar 2020 22:29:53 +0100
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Daniel Lezcano <daniel.lezcano@...aro.org>
Cc: "Rafael J. Wysocki" <rjw@...ysocki.net>,
Ulf Hansson <ulf.hansson@...aro.org>,
Linux PM <linux-pm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Kevin Hilman <khilman@...nel.org>
Subject: Re: [PATCH RFC] cpuidle: consolidate calls to time capture
On Wed, Mar 18, 2020 at 3:32 PM Daniel Lezcano
<daniel.lezcano@...aro.org> wrote:
>
> On 18/03/2020 12:04, Daniel Lezcano wrote:
> >
> > Hi Rafael,
> >
> > On 18/03/2020 11:17, Rafael J. Wysocki wrote:
> >> On Monday, March 16, 2020 10:08:43 PM CET Daniel Lezcano wrote:
> >>> A few years ago, we changed the code in cpuidle to replace ktime_get()
> >>> with local_clock(), to get rid of a potential seqlock in the path and
> >>> the extra latency.
> >>>
> >>> Meanwhile, the code has evolved and we now take the time in other
> >>> places too, like the power domain governor and the proposed break-even
> >>> deadline support.
> >>
> >> Hmm?
> >>
> >> Have any patches been posted for that?
> >
> > https://lkml.org/lkml/2020/3/11/1113
> >
> > https://lkml.org/lkml/2020/3/13/466
> >
> > but there is no consensus yet if that has a benefit or not.
> >
> >>> Unfortunately, as the time must be compared across CPUs, we have no
> >>> option other than using ktime_get() again. Hopefully, we can factor
> >>> out all the calls to local_clock() and ktime_get() into a single one
> >>> when the CPU is entering idle, as the value will be reused in
> >>> different places.
> >>
> >> So there are cases in which it is not necessary to synchronize the time
> >> between CPUs, and those would incur the overhead unnecessarily.
> >>
> >> This change looks premature to me at least.
> >
> > The idea is to call ktime_get() once when entering idle and store the
> > result in struct cpuidle_device, so we have the information about when
> > we entered idle.
> >
> > Moreover, ktime_get() is called in do_idle() via:
> >
> > tick_nohz_idle_enter()
> > tick_nohz_start_idle()
> > ts->idle_entrytime = ktime_get();
> >
> > This is called at the first loop level. In case of interrupt
> > processing, the idle loop exits and re-enters at the second loop level
> > without passing through tick_nohz_idle_enter() again, so idle_entrytime
> > is not updated and tick_nohz_get_sleep_length() returns a value greater
> > than expected.
> >
> > Maybe we can consider ktime_get_mono_fast_ns(), which is lockless,
> > taking particular care of the non-monotonicity aspect if needed. Given
> > the description at [1], the time jump could be a few nanoseconds in the
> > NMI case.
> >
> > local_clock() cannot be compared across CPUs; the gap is too big and
> > keeps increasing over the system's lifetime.
>
> I took the opportunity to measure the duration of a call to ktime_get(),
> ktime_get_mono_fast_ns() and local_clock().
The results you get depend a good deal on the conditions of the test,
the system on which they were obtained and so on. Without this
information it is hard to draw any conclusions from those results. In
particular, ktime_get() is not significantly slower than local_clock()
if there is no contention AFAICS, and the lack of contention cannot be
guaranteed here.
Generally speaking, the problem is that it is not sufficient to
measure the time before running the governor and after the CPU wakes
up, because in the cases that really care about the latency of that
operation the time needed to run the governor may be a significant
fraction of the entire overhead. So it is necessary to take time
stamps in several places, and putting ktime_get() in all of them
doesn't sound particularly attractive.
Anyway, there is no real need to make this change AFAICS, so I'm not
really sure what the entire argument is about.