Date:   Tue, 9 Jun 2020 10:03:42 +0200
From:   Bartosz Golaszewski <brgl@...ev.pl>
To:     Kent Gibson <warthog618@...il.com>
Cc:     Bartosz Golaszewski <bgolaszewski@...libre.com>,
        LKML <linux-kernel@...r.kernel.org>,
        linux-gpio <linux-gpio@...r.kernel.org>,
        Linus Walleij <linus.walleij@...aro.org>
Subject: Re: [RFC PATCH] gpio: uapi: v2 proposal

On Sat, 6 Jun 2020 at 03:56, Kent Gibson <warthog618@...il.com> wrote:
>

[snip!]

> >
> > I'd say yes - consolidation and reuse of data structures is always
> > good and normally they are going to be wrapped in some kind of
> > low-level user-space library anyway.
> >
>
> Ok, and I've changed the values field name to bitmap, along with the change
> to a bitmap type, so the stuttering is gone.
>
> And, as the change to bitmap substantially reduced the size of
> gpioline_config, I now embed that in the gpioline_info instead of
> duplicating all the other fields.  The values field will be zeroed
> when returned within info.
>

Could you post an example? I'm not sure I follow.
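
To make sure we're talking about the same thing - is it something
along these lines? Just a rough sketch; I'm guessing at the field
names from v1 and your description, not quoting your patch:

	/* sketch only - field names are guesses */
	struct gpioline_config {
		__u64 flags;
		__u64 bitmap;	/* output values, one bit per line */
		__u32 padding[4];	/* for future use */
	};

	struct gpioline_info {
		__u32 offset;
		char name[GPIO_MAX_NAME_SIZE];
		char consumer[GPIO_MAX_NAME_SIZE];
		/* embedded config; bitmap zeroed when returned in info */
		struct gpioline_config config;
	};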

> > > And I've renamed "default_values" to just "values" in my latest draft
> > > which doesn't help with the stuttering.
> > >
> >
> > Why though? Aren't these always default values for output?
> >
>
> To me "default" implies a fallback value, and that de-emphasises the
> fact that the lines will be immediately set to those values as they
> are switched to outputs.
> These are the values the outputs will take - the "default" doesn't add
> anything.
>

Fair enough, values it is.

[snip!]

> > >
> > > I'm also kicking around the idea of adding sequence numbers to events,
> > > one per line and one per handle, so userspace can more easily detect
> > > mis-ordering or buffer overflows.  Does that make any sense?
> > >
> >
> > Hmm, now that you mention it - and in the light of the recent post by
> > Ryan Lovelett about polling precision - I think it makes sense to have
> > this. Especially since it's very easy to add.
> >
>
> OK.  I was only thinking about the edge events, but you might want it
> for your line info events on the chip fd as well?
>

I don't see the need for it now, but you never know. Let's leave it
out for now - if we ever need it later, we now have the appropriate
padding.
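
For the edge events themselves though, something along these lines is
what I would expect - again just a sketch, the names are my guess:

	/* sketch only - one sequence number per request and one per
	 * line, so userspace can detect both reordering and drops
	 */
	struct gpioline_event {
		__u64 timestamp_ns;
		__u32 id;		/* rising or falling edge */
		__u32 offset;		/* line that triggered the event */
		__u32 seqno;		/* seqno within the whole request */
		__u32 line_seqno;	/* seqno within this line */
		__u32 padding[4];	/* for future use */
	};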

> > > And would it be useful for userspace to be able to influence the size of
> > > the event buffer (currently fixed at 16 events per line)?
> > >
> >
> > Good question. I would prefer not to overdo it though. The event
> > request would need to contain the desired kfifo size and we'd only
> > allow setting it at request time, right?
> >
>
> Yeah, it would only be relevant if edge detection was set and, as per
> edge detection itself, would only be settable via the request, not
> via set_config.  It would only be a suggestion, as the kfifo size gets
> rounded up to a power of 2 anyway.  It would be capped - I'm open to
> suggestions for a suitable max value.  And the 0 value would mean use
> the default - currently 16 per line.
>

This sounds good. How about 512 as the max value for now? We can
always increase it if needed. I don't think we should explicitly
reject larger values though - let the user specify anything and just
silently clamp it to 512 in the kernel.
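
In other words, on the kernel side, something like this (illustrative
only, not actual code):

	#define GPIOLINE_EVENT_BUFFER_MAX	512

	/* sketch of the silent clamp on the requested kfifo size */
	static u32 event_buffer_size(u32 requested, u32 num_lines)
	{
		if (!requested)
			return 16 * num_lines;	/* current default */

		/* clamp rather than reject oversized requests -
		 * kfifo_alloc() rounds up to a power of 2 anyway
		 */
		return min_t(u32, requested, GPIOLINE_EVENT_BUFFER_MAX);
	}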

> If you want the equivalent for the info watch then I'm not sure where to
> hook it in.  It should be at the chip scope, and there isn't any
> suitable ioctl to hook it into so it would need a new one - maybe a
> set_config for the chip?  But the buffer size would only be settable up
> until you add a watch.
>

I don't think we need this. Status changes are naturally much less
frequent and the potential for buffer overflow is minuscule here.

Bart
