Message-ID: <CAMRc=McpHbGvCgKUcV40bUZ1mxe9YzaBmTiS0+afoe-V0CN4VQ@mail.gmail.com>
Date: Tue, 3 Dec 2019 09:58:44 +0100
From: Bartosz Golaszewski <brgl@...ev.pl>
To: Kent Gibson <warthog618@...il.com>
Cc: Bartosz Golaszewski <bgolaszewski@...libre.com>,
Linus Walleij <linus.walleij@...aro.org>,
"open list:GPIO SUBSYSTEM" <linux-gpio@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 7/8] gpiolib: add new ioctl() for monitoring changes in
line info
On Tue, 3 Dec 2019 at 09:58, Bartosz Golaszewski wrote in reply to Kent Gibson <warthog618@...il.com>:
>
> On Mon, Dec 02, 2019 at 06:11:06PM +0100, Bartosz Golaszewski wrote:
> > On Mon, 2 Dec 2019 at 00:43, Kent Gibson <warthog618@...il.com> wrote:
> > >
> >
> > [snip!]
> >
> > > > > >
> > > > > > How about reusing the already existing file descriptor associated with
> > > > > > the chip itself? We currently only implement the ioctl() operation on
> > > > > > it - the poll() and read() callbacks are empty.
> > > > > >
> > > > > > We'd need to add two new ioctls(): GPIOLINE_WATCH_IOCTL and
> > > > > > GPIOLINE_UNWATCH_IOCTL. The structures for both would look like this:
> > > > > >
> > > > > > struct gpioline_watch_request {
> > > > > >         __u32 lineoffset;
> > > > > >         struct gpioline_info info;
> > > > > > };
> > > > > >
> > > > > > struct gpioline_unwatch_request {
> > > > > >         __u32 lineoffset;
> > > > > > };
> > > > > >
> > > > > > When GPIOLINE_WATCH_IOCTL is called, we'd set up a watch for the
> > > > > > given line: the embedded gpioline_info structure would be filled
> > > > > > with the initial info, and we could then poll the gpiochip
> > > > > > descriptor for events and read them. The event structure would
> > > > > > look like this:
> > > > > >
> > > > > > struct gpioline_changed {
> > > > > >         __u32 event_type;
> > > > > >         __u64 timestamp;
> > > > > >         struct gpioline_info info;
> > > > > > };
> > > > > >
> > > > > > Calling GPIOLINE_UNWATCH_IOCTL would of course make the kernel
> > > > > > stop emitting events for the given line.
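> > > > > >
> > > > > > In user-space that would look more or less like the following
> > > > > > (untested sketch; line 3 and the chip path are just examples,
> > > > > > needs <fcntl.h>, <poll.h>, <unistd.h> and <sys/ioctl.h>):
> > > > > >
> > > > > > struct gpioline_watch_request wreq = { .lineoffset = 3 };
> > > > > > struct gpioline_unwatch_request ureq = { .lineoffset = 3 };
> > > > > > struct gpioline_changed chg;
> > > > > > struct pollfd pfd;
> > > > > > int fd;
> > > > > >
> > > > > > fd = open("/dev/gpiochip0", O_RDWR);
> > > > > >
> > > > > > /* Set up the watch - wreq.info comes back filled with the
> > > > > >  * initial snapshot for line 3. */
> > > > > > ioctl(fd, GPIOLINE_WATCH_IOCTL, &wreq);
> > > > > >
> > > > > > pfd.fd = fd;
> > > > > > pfd.events = POLLIN;
> > > > > > while (poll(&pfd, 1, -1) > 0) {
> > > > > >         /* One gpioline_changed per read, carrying fresh info. */
> > > > > >         read(fd, &chg, sizeof(chg));
> > > > > > }
> > > > > >
> > > > > > ioctl(fd, GPIOLINE_UNWATCH_IOCTL, &ureq);
> > > > > > close(fd);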
> > > > > >
> > > > > > Does it make sense?
> > > > > >
> > > > >
> > > > > That makes sense. But it doesn't really address the underlying problem
> > > > > that you have identified - it just makes it less likely that you will
> > > > > fill the kfifo.
> > > > >
> > > > > Correct me if I'm wrong, but a pathological process or processes
> > > > > could still swamp your kfifo with events, particularly if they are
> > > > > operating on line bulks.
> > > >
> > > > Don't get me wrong - the assumption is that a process knows what it's
> > > > doing. We expect that if it watches lines for events, then it will
> > > > actually read them as soon as they arrive on the other end of the
> > > > FIFO. If not, this won't affect others: it will fill up the FIFO
> > > > associated with this process's file descriptor and we'll just drop
> > > > new events until it consumes the old ones. In other words: I'm not
> > > > worried about pathological processes.
> > > >
> > >
> > > The reader can't guarantee that it can read faster than changes can occur,
> > > no matter how well intentioned it is.
> > >
> > > I am a bit worried if you just drop events, as there is no indication
> > > to userspace that this has occurred.
> >
> > This is what happens now with line events anyway. I added a patch to
> > the v2 of this series that adds a ratelimited debug message when the
> > kfifo is full. At least that will leave a trace in the kernel log.
> > Unfortunately there's no way around limiting the FIFO's size -
> > otherwise a malicious process could hog all the memory by simply not
> > reading events.
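> >
> > The check amounts to something like this on the kernel side (untested
> > sketch; "le" is the line-event state, "ge" the event being queued):
> >
> > if (kfifo_put(&le->events, ge))
> >         wake_up_poll(&le->wait, EPOLLIN);
> > else
> >         /* FIFO full - drop the event, but leave a trace. */
> >         pr_debug_ratelimited("event FIFO is full - event dropped\n");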
> >
> > >
> > > > The problem here is that the file descriptor is created and there are
> > > > already several (up to 64) events waiting to be read. This just
> > > > doesn't feel right. If the process doesn't manage to consume all
> > > > initial events in time, we'll drop new ones. The alternative is to
> > > > allocate a larger FIFO but I just have a feeling that this whole
> > > > approach is wrong. I'm not sure any other subsystem does something
> > > > like this.
> > > >
> > > > >
> > > > > I'd be happier with a solution that addresses what happens when the
> > > > > kfifo is full, or even better prevents it from filling, and then see
> > > > > how that feeds back to the startup case.
> > > > >
> > > >
> > > > You can't really prevent it from overflowing as you can't
> > > > update/modify elements in the middle.
> > > >
> > >
> > > You can if you let go of the idea of adding something to the fifo for
> > > every change. If you know the change you want to signal is already in
> > > the fifo then you don't need to add it again.
> > >
> > > The idea I'm suggesting now is for the fifo to contain "your info on
> > > line X is stale" messages. If that hasn't been ACKed by userspace then
> > > there is no point adding another for that line. So worst case you have
> > > num_lines messages in the fifo at any time.
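> > >
> > > Something like this on the kernel side (hand-wavy sketch; the
> > > per-chip bitmap and the "cdev" state are made up):
> > >
> > > /* on a line change: queue at most one message per line */
> > > if (!test_and_set_bit(offset, cdev->stale_lines)) {
> > >         kfifo_put(&cdev->events, offset);
> > >         wake_up_poll(&cdev->wait, EPOLLIN);
> > > }
> > >
> > > /* on read() of the message (the ACK): allow the next one */
> > > clear_bit(offset, cdev->stale_lines);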
> >
> > I see. But in this case I'm not sure a file descriptor is the right
> > interface. When poll() on an fd reports POLLIN, it means there's data
> > to read on the file descriptor: data already sitting in the FIFO,
> > waiting to be consumed by user-space.
> >
>
> Agree with file descriptors not being ideal for this, but what other
> options are there?
>
> > Let's imagine the following situation: we detect one of the conditions
> > for emitting the event in the kernel, we set the "needs_update" flag,
> > we then wake up the polling process, but it calls the LINE_INFO
> > ioctl() on the changed line without ever reading the event from the
> > fd. What would happen now? Does the unread event disappear from the fd
> > because the user "acked" the event? What about ordering of events when
> > line X gets updated, then line Y, then X again but the process didn't
> > read the first event?
> >
>
> The unread event can't disappear from the fifo. The fifo is write only
> from the kernel side, right?
>
> You are right that things don't go well if userspace doesn't strictly
> follow the "read from fd, then LINEINFO ioctl" ordering.
>
> So probably best to keep things simple.
>
> And we should accept that overflows may occur. As that would leave
> userspace with stale info, userspace should poll the LINEINFO ioctl
> occasionally to check that it is still in sync.
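>
> E.g. (using the existing ioctl; "offset" and "chip_fd" are whatever
> the process is tracking):
>
> struct gpioline_info info = { .line_offset = offset };
>
> /* re-read the authoritative state straight from the kernel */
> ioctl(chip_fd, GPIO_GET_LINEINFO_IOCTL, &info);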
>
> > IIRC the way line events are handled in sysfs (polling
> > 'gpioXYZ/value', which doesn't work as a FIFO) was criticized for its
> > unreliability and was one of the reasons for developing the chardev.
> >
>
> Tarring it with the sysfs brush is a bit harsh!
> You are comparing apples and oranges.
> In the sysfs case the problem was losing events.
> In this case losing events is not critical.
>
> > I would be much happier with your previous proposal: getting line_info
> > when setting the watch and then getting it all again every time the
> > status changes. We also get the "history" of changes that way.
> >
>
> I believe the previous proposal was yours - adding watch and unwatch
> ioctls to the chip fd.
The idea was yours, the concrete proposal was mine. :)
I'll try to prepare a v2 so we can discuss the code again.
Bart
>
> Kent.
>
> </snip>