[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <ccbc3854-b513-34b5-b989-31e23e8540ac@nvidia.com>
Date: Tue, 23 Mar 2021 11:25:56 -0700
From: Dipen Patel <dipenp@...dia.com>
To: Thierry Reding <thierry.reding@...il.com>,
Linus Walleij <linus.walleij@...aro.org>
CC: Kent Gibson <warthog618@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jon Hunter <jonathanh@...dia.com>,
"Bartosz Golaszewski" <bgolaszewski@...libre.com>,
"open list:GPIO SUBSYSTEM" <linux-gpio@...r.kernel.org>,
linux-tegra <linux-tegra@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Arnd Bergmann <arnd@...db.de>,
"Richard Cochran" <richardcochran@...il.com>,
Marc Zyngier <maz@...nel.org>
Subject: Re: GTE - The hardware timestamping engine
On 3/23/21 3:06 AM, Thierry Reding wrote:
> On Tue, Mar 23, 2021 at 10:08:00AM +0100, Linus Walleij wrote:
>> On Mon, Mar 22, 2021 at 9:17 PM Dipen Patel <dipenp@...dia.com> wrote:
>>
>>> My follow-up concerns on both Linus's and Kent's feedback:
>>>
>>> 1. Please correct me if I am wrong, lineevent in the gpiolib* is only
>>> serves the userspace clients.
>>> 1.a What about kernel drivers wanting to use this feature for monitoring its
>>> GPIO lines, see gyroscope example somewhere below. In that regards,
>>> lineevent implementation is not sufficient.
>>> 1.b Are you also implying to extend lineevent implementation to kernel
>>> drivers?
>>
>> I was talking about lineevent because you mentioned things like
>> motors and robotics, and those things are traditionally not run in
>> kernelspace because they are not generic hardware that fit in the
>> kernel subsystems.
>>
>> Normally industrial automatic control tasks are run in a userspace
>> thread with some realtime priority.
>>
>> As Kent says, in-kernel events are exclusively using IRQ as
>> mechanism, and should be modeled as IRQs. Then the question
>> is how you join the timestamp with the IRQ. GPIO chips are
>> just some kind of irqchip in this regard, we reuse the irqchip
>> infrastructure in the kernel for all GPIO drivers that generate
>> "events" in response to state transitions on digital lines.
>
> One potential problem I see with this is that Kent's proposal, if I
> understand correctly, would supplant the original IRQ of a device with
> the GTE IRQ for the corresponding event. I'm not sure that's desirable
> because that would require modifying the device tree and would no longer
> accurately represent the hardware. Timestamping also sounds like
> something that drivers would want to opt into, and requiring people to
> update the device tree to achieve this just doesn't seem reasonable.
>
> This proposal would also only work if there's a 1:1 correspondence
> between hardware IRQ and GTE IRQ. However, as Dipen mentioned, the GTE
> events can be configured with a threshold, so a GTE IRQ might only
> trigger every, say, 5th hardware IRQ. I'm not sure if those are common
> use-cases, though.
>
> Obviously if we don't integrate this with IRQs directly, it becomes a
> bit more difficult to relate the captured timestamps to the events
> across subsystem boundaries. I'm not sure how this would be solved
> properly. If the events are sufficiently rare, and it's certain that
> none will be missed, then it should be possible to just pull a timestamp
> from the timestamp FIFO for each event.
>
Just to clarify, I am getting impression that GTE is viewed or made to be
viewed as "event" generating device, which it is not. You can consider GTE
as "person in a middle" type of device which can monitor configured events
and on seeing state change, it will just record timestamp and store it.
I agree with Thierry's point.
> All of that said, I wonder if perhaps hierarchical IRQ domains can
> somehow be used for this. We did something similar on Tegra not too long
> ago for wake events, which are basically IRQs exposed by a parent IRQ
> chip that allows waking up from system sleep. There are some
> similarities between that and GTE in that the wake events also map to a
> subset of GPIOs and IRQs and provide additional functionalities on top.
>
> I managed to mess up the implementation and Marc stepped in to clean
> things up, so Cc'ing him since he's clearly more familiar with the topic
> than I am.
>
> Thierry
>
Powered by blists - more mailing lists