Message-ID: <20210323052221.GA36246@sol>
Date:   Tue, 23 Mar 2021 13:22:21 +0800
From:   Kent Gibson <warthog618@...il.com>
To:     Dipen Patel <dipenp@...dia.com>
Cc:     Linus Walleij <linus.walleij@...aro.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "thierry.reding@...il.com" <thierry.reding@...il.com>,
        Jon Hunter <jonathanh@...dia.com>,
        Bartosz Golaszewski <bgolaszewski@...libre.com>,
        "open list:GPIO SUBSYSTEM" <linux-gpio@...r.kernel.org>,
        linux-tegra <linux-tegra@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Arnd Bergmann <arnd@...db.de>,
        Richard Cochran <richardcochran@...il.com>
Subject: Re: GTE - The hardware timestamping engine

On Mon, Mar 22, 2021 at 09:09:50PM -0700, Dipen Patel wrote:
> 
> 
> On 3/22/21 7:59 PM, Kent Gibson wrote:
> > On Mon, Mar 22, 2021 at 06:53:10PM -0700, Dipen Patel wrote:
> >>
> >>
> >> On 3/22/21 5:32 PM, Kent Gibson wrote:
> >>> On Mon, Mar 22, 2021 at 01:21:46PM -0700, Dipen Patel wrote:
> >>>> Hi Linus and Kent,
> >>>>
> > 
> > [snip]
> > 
> >>> In response to all your comments above...
> >>>
> >>> Firstly, I'm not suggesting that other kernel modules would use the
> >>> cdev lineevents, only that they would use the same mechanism that
> >>> gpiolib-cdev would use to timestamp the lineevents for userspace.
> >>>
> >> Sure, I just wanted to mention the different scenarios and to know how
> >> we can fit all of those together. Having said that, shouldn't this serve
> >> as an opportunity to extend the lineevent framework to accommodate kernel
> >> drivers as clients?
> >>
> >> If we can't, then there is a risk of duplicating the lineevent mechanism
> >> in all of those kernel drivers, or at least in the GTE framework/infrastructure
> >> as far as the GPIO-related GTE part is concerned.
> >>  
> > 
> > In-kernel the lineevents are just IRQs so anything needing a "lineevent"
> > can request the IRQ directly.  Or am I missing something?
> > 
> 
> In the GPIO context, I meant we can extend line_event_timestamp to kernel
> drivers as well. That way, both userspace and kernel drivers requesting a
> particular GPIO for hardware timestamping would be managed by the same
> lineevent_* infrastructure in gpiolib. Something like a lineevent_create
> variant for kernel drivers, so if they need a hardware timestamp on a
> GPIO line, they can request it with some flags. That way, GTE can leverage
> the lineevent* code in gpiolib to cover both of its GPIO-related use cases,
> i.e. userspace apps and kernel drivers.
> 

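If I'm reading you right, you're picturing something roughly like the
below - purely a hypothetical sketch on my part, none of these gpiolib
symbols exist today:

	/* hypothetical in-kernel counterpart to lineevent_create() */
	struct lineevent *le;

	le = lineevent_request(gdev, offset,
			       LINEEVENT_RISING_EDGE | LINEEVENT_HW_TIMESTAMP,
			       my_event_handler, my_data);
	if (IS_ERR(le))
		return PTR_ERR(le);

	/* my_event_handler() would then receive edge events carrying
	 * the GTE-captured timestamp.
	 */
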
I still don't see what that gives you that is better than an IRQ and a
function to provide the timestamp for that IRQ.  What specific features
of a lineevent are you after?
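
What I have in mind is more along these lines - again only a sketch, and
irq_get_gte_timestamp() is an invented helper, not an existing API, while
gpiod_to_irq() and request_irq() are the usual calls:

	static irqreturn_t my_handler(int irq, void *data)
	{
		/* invented helper: timestamp the GTE captured for this IRQ */
		ktime_t ts = irq_get_gte_timestamp(irq);

		/* ... consume ts ... */
		return IRQ_HANDLED;
	}

	...
	irq = gpiod_to_irq(desc);
	ret = request_irq(irq, my_handler, IRQF_TRIGGER_RISING,
			  "my-consumer", my_data);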

The gpiolib-cdev code is there to provide a palatable API for userspace,
and the bulk of that code is specific to the userspace API.
Reusing that code for clients within the kernel is just introducing
pointless overhead when they can get what they need more directly.

There may be a case for some additional gpiolib/irq helper functions, but
I don't see gpiolib-cdev as a good fit for that role.

> >>> As to that mechanism, my current thinking is that the approach of
> >>> associating GTE event FIFO entries with particular physical IRQ events is
> >>> problematic, as keeping the two in sync would be difficult, if not
> >>> impossible.
> >>>
> >>> A more robust approach is to ignore the physical IRQs and instead service
> >>> the GTE event FIFO, generating IRQs from those events in software -
> >>> essentially a pre-timestamped IRQ.  The IRQ framework could provide the
> >>> timestamping functionality, equivalent to line_event_timestamp(), for
> >>> the IRQ handler/thread and in this case provide the timestamp from the GTE
> >>> event.
> >>>
> >>
> >> I have not fully understood the above two paragraphs (except the
> >> line_event_timestamp-related part).
> >>
> >> I have no idea what it means to "ignore the physical IRQs and service the
> >> GTE event FIFO". Just like GPIO clients, there could be IRQ clients which
> >> want to monitor a certain IRQ line, like an ethernet driver wanting to
> >> retrieve the timestamp for its IRQ line, and so on.
> >>
> > 
> > I mean that in the IRQ framework, rather than enabling the physical IRQ
> > line, it would leave that masked and would instead enable the FIFO line to
> > service the FIFO, configure the GTE to generate the events for that
> > line, and then generate IRQs in response to the FIFO events.
> > That way the client requesting the IRQ is guaranteed to only receive an
> > IRQ that corresponds to a GTE FIFO event, and the timestamp stored in the
> > IRQ framework would match.
> > 
> 
> I do not think we need to do such things. For example, below is
> the rough sequence of how GTE can notify its clients, be they GPIO or IRQ
> lines. I believe this will also help in better understanding the ongoing
> GPIO discussions.
> 
> 1. Configure the GTE FIFO watermark or threshold, let's assume 1, i.e.
>    generate a GTE interrupt when the FIFO depth is 1.
> 2. In the GTE ISR or ISR thread, drain the internal FIFO entries.
> 3. Through the GTE driver's internal mapping, identify which IRQ or
>    GPIO number this entry belongs to. (This is possible as the GTE
>    has predefined bits for each supported signal; for example, the GTE
>    supports 40 GPIOs and 352 IRQ lines, and it has multiple GTE instances
>    which can take care of all of them.)
> 4. The GTE driver pushes the event data (in this case the timestamp and
>    direction of the event, i.e. rising or falling) to the GTE generic
>    framework.
> 5. The GTE framework stores the per-event data in its per-client/event
>    software FIFO.
> 6. Wake up any sleeping client thread.
> 7. Points 3 to 6 happen in GTE ISR context.
> 8. gte_retrieve_event (which can block if there is no event) at a later
>    convenient time does whatever it wants with it. We can extend it to a
>    non-blocking version where some sort of client callbacks can be
>    implemented.
> 

Don't see where that conflicts with my suggestion, except that I moved
everything other than the hardware-specific configuration under the IRQ
umbrella, as a lot of the functionality, in the abstract, aligns with IRQ.

If you want to reduce duplication, shouldn't GTE be integrated into IRQ
rather than providing a new service?

As I see it, GTE is just another event source for IRQ, and I'd be
exploring that path. But that is just my view.
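
Roughly what I'm picturing - just a sketch, the gte_* names are invented,
though irq_find_mapping() and generic_handle_irq() are the real calls:

	static irqreturn_t gte_isr(int irq, void *data)
	{
		struct gte_chip *gte = data;
		struct gte_event ev;

		/* drain the hardware FIFO */
		while (gte_pop_event(gte, &ev)) {
			/* map the FIFO entry back to the monitored line's virq */
			unsigned int virq = irq_find_mapping(gte->domain, ev.line);

			/* stash the hardware timestamp where the IRQ framework,
			 * or a helper, can hand it to the client's handler
			 */
			gte_store_timestamp(virq, ev.timestamp);

			/* raise the "pre-timestamped" IRQ for the client */
			generic_handle_irq(virq);
		}

		return IRQ_HANDLED;
	}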

Cheers,
Kent.
