Message-ID: <alpine.DEB.2.20.1711231947280.2364@nanos>
Date: Thu, 23 Nov 2017 19:59:40 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Sagar Arun Kamble <sagar.a.kamble@...el.com>
cc: John Stultz <john.stultz@...aro.org>,
Stephen Boyd <sboyd@...eaurora.org>,
Chris Wilson <chris@...is-wilson.co.uk>,
linux-kernel@...r.kernel.org
Subject: Re: Creating cyclecounter and lock member in timecounter structure
[ Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp
with system time]
On Thu, 23 Nov 2017, Sagar Arun Kamble wrote:
> We need input on a possible optimization of the timecounter/cyclecounter
> structures and their usage.
> This mail is in response to the review of patch
> https://patchwork.freedesktop.org/patch/188448/.
>
> As Chris observes below, about a dozen timecounter users in the kernel
> each define the following members individually:
>
> spinlock_t lock;
> struct cyclecounter cc;
> struct timecounter tc;
>
> Can we move lock and cc into tc? That would be more convenient, and it
> would also allow unifying the locking and overflow-watchdog handling
> across all drivers.
Looks like none of the timecounter usage sites has a real need to separate
timecounter and cyclecounter.
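Merging the two would essentially mean embedding the cyclecounter in the
timecounter instead of referencing it. A rough sketch of that idea, not the
current include/linux/timecounter.h layout and not an actual patch:

struct timecounter {
        struct cyclecounter cc;         /* embedded instead of the current
                                         * 'const struct cyclecounter *cc'
                                         * pointer */
        u64 cycle_last;
        u64 nsec;
        u64 mask;
        u64 frac;
};

timecounter_init() could then take the read callback, mask, mult and shift
directly and fill in tc->cc itself, so drivers would no longer carry a
separate struct cyclecounter.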
The lock is a different question. The locking requirements of the various
drivers differ, and I have no idea how you want to handle that. Just
sticking the lock into the data structure, then not making use of it in the
timecounter code and leaving it to the call sites, does not make sense.
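As an example of why the locking cannot simply be pushed into the
timecounter core, consider two hypothetical callers with incompatible
requirements (struct and function names invented for illustration; only
timecounter_read() and timecounter_cyc2time() are the real API):

#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/timecounter.h>

struct drv_a {                          /* timestamps taken from the IRQ handler */
        spinlock_t lock;
        struct timecounter tc;
};

static u64 drv_a_cyc2time(struct drv_a *a, u64 cycles)
{
        unsigned long flags;
        u64 ns;

        spin_lock_irqsave(&a->lock, flags);
        ns = timecounter_cyc2time(&a->tc, cycles);
        spin_unlock_irqrestore(&a->lock, flags);
        return ns;
}

struct drv_b {                          /* only used from sleepable ioctl context */
        struct mutex lock;
        struct timecounter tc;
};

static u64 drv_b_read_time(struct drv_b *b)
{
        u64 ns;

        mutex_lock(&b->lock);
        ns = timecounter_read(&b->tc);
        mutex_unlock(&b->lock);
        return ns;
}

A lock embedded in struct timecounter would have to pick one of these
models and impose it on every user.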
Thanks,
tglx