lists.openwall.net - Open Source and information security mailing list archives
 
Date:   Mon, 27 Nov 2017 15:35:38 +0530
From:   Sagar Arun Kamble <sagar.a.kamble@...el.com>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     John Stultz <john.stultz@...aro.org>,
        Stephen Boyd <sboyd@...eaurora.org>,
        Chris Wilson <chris@...is-wilson.co.uk>,
        linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
        linux-rdma@...r.kernel.org
Subject: Re: Creating cyclecounter and lock member in timecounter structure [
 Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp with
 system time]



On 11/24/2017 7:01 PM, Thomas Gleixner wrote:
> On Fri, 24 Nov 2017, Sagar Arun Kamble wrote:
>> On 11/24/2017 12:29 AM, Thomas Gleixner wrote:
>>> On Thu, 23 Nov 2017, Sagar Arun Kamble wrote:
>>>> We need input on possible optimizations to the timecounter/cyclecounter
>>>> structures and their usage.
>>>> This mail is in response to the review of patch
>>>> https://patchwork.freedesktop.org/patch/188448/.
>>>>
>>>> As per Chris's observation below, about a dozen timecounter users in
>>>> the kernel define the following structures individually:
>>>>
>>>> spinlock_t lock;
>>>> struct cyclecounter cc;
>>>> struct timecounter tc;
>>>>
>>>> Can we move the lock and cc into tc? That would be convenient, and it
>>>> would also allow unifying the locking/overflow-watchdog handling across
>>>> all drivers.
>>> Looks like none of the timecounter usage sites has a real need to separate
>>> timecounter and cyclecounter.
>> Yes. Will share patch for this change.
>>
>>> The lock is a different question. The locking of the various drivers
>>> differs, and I have no idea how you want to handle that. Just sticking
>>> the lock into the data structure, not making use of it in the
>>> timecounter code, and leaving it to the call sites does not make sense.
>> Most of the locks are held around timecounter_read. In some instances the
>> lock is held while the cyclecounter is updated standalone, or while it is
>> updated along with the timecounter calls. I was thinking that if we move
>> the lock into the timecounter functions, drivers would only have to do
>> locking around their own operations on the cyclecounter. But another
>> problem I see is that there are variations of locking calls, like
>> spin_lock_irqsave, spin_lock_bh, and write_lock_irqsave (some drivers use
>> rwlock_t). Should all this locking be left to the drivers, then?
> You could have the lock in the struct and protect the inner workings in the
> related core functions.
>
> That might remove locking requirements from some of the callers and the
> others still have their own thing around it.

For drivers with a static/fixed cyclecounter, we can rely solely on a lock inside the timecounter.
Most of the network drivers update the cyclecounter at runtime, and they would have to rely on two
locks if we add one to the timecounter, which may not be efficient for them. Also, I guess the lock
in the timecounter would have to be less restrictive (perhaps a seqlock).

Cc'd Mellanox list for inputs on this.

I have started to feel that the current approach, with drivers managing the locks, is the right
one, so it is better to leave the lock out of the timecounter.

> Thanks,
>
> 	tglx
