Date:	Mon, 4 Apr 2011 15:46:11 -0700
From:	Dmitry Torokhov <dmitry.torokhov@...il.com>
To:	Jeffrey Brown <jeffbrown@...roid.com>
Cc:	rydberg@...omail.se, djkurtz@...gle.com,
	linux-input@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 4/4] input: evdev: Make device readable only when it
 contains a complete packet.

On Mon, Apr 04, 2011 at 03:16:13PM -0700, Jeffrey Brown wrote:
> Hi Dmitry,
> I don't think the new patch is completely correct.
> 
> On Mon, Apr 4, 2011 at 2:36 PM, Dmitry Torokhov
> <dmitry.torokhov@...il.com> wrote:
> > I think we should target SYN_REPORT directly. SYN_CONFIG is unused and
> > SYN_DROPPED is not interesting till next SYN_REPORT anyway. Given the
> > changes to the previous patch I have the following:
> 
> Explicitly checking for SYN_REPORT makes sense.  I wasn't sure what to
> do with SYN_CONFIG before, so I tried to keep the condition somewhat
> conservative.
> 
> Per previous comments on an older iteration of this patch, it probably
> makes sense to calculate this flag once in evdev_event and pass it to
> evdev_pass_event.
> 
> bool full_sync = (type == EV_SYN && code == SYN_REPORT);

I am not sure what is cheaper - two conditionals or the stack manipulation
needed to push another argument if we happen to be register-starved.
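
If we did pass it down, the call path would look something like this
(untested sketch, written from memory against the current evdev_event();
details may be off):

	static void evdev_event(struct input_handle *handle,
				unsigned int type, unsigned int code, int value)
	{
		struct evdev *evdev = handle->private;
		struct evdev_client *client;
		struct input_event event;
		/* computed once per event instead of in every evdev_pass_event() */
		bool full_sync = type == EV_SYN && code == SYN_REPORT;

		do_gettimeofday(&event.time);
		event.type = type;
		event.code = code;
		event.value = value;

		rcu_read_lock();

		/*
		 * evdev_pass_event() would grow a third parameter and use it
		 * instead of re-checking type/code per client.
		 */
		client = rcu_dereference(evdev->grab);
		if (client)
			evdev_pass_event(client, &event, full_sync);
		else
			list_for_each_entry_rcu(client, &evdev->client_list, node)
				evdev_pass_event(client, &event, full_sync);

		rcu_read_unlock();

		wake_up_interruptible(&evdev->wait);
	}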

> 
> > @@ -41,6 +41,7 @@ struct evdev {
> >  struct evdev_client {
> >        unsigned int head;
> >        unsigned int tail;
> > +       unsigned int last_syn;  /* position of the last EV_SYN/SYN_REPORT */
> 
> This comment for last_syn is not quite right.  We need last_syn to
> refer to the position just beyond the last sync.  Otherwise the device
> will not become readable until another event is written there.  The
> invariants for last_syn should be similar to those for head.

Hm, yes, the comment is incorrect. Given that, I do not like the name
anymore either (nor do I like 'end'). I need to think of something
better.
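
Whatever we end up calling it, the intended semantics can be spelled out
with a throwaway userspace model (not kernel code, names purely
illustrative) of the indices and the invariants Jeff restates below:

	/*
	 * Standalone model of the indices:
	 *   head     - next slot to be written
	 *   tail     - next slot to be read
	 *   last_syn - one past the most recent EV_SYN/SYN_REPORT
	 *
	 * Invariants:
	 *   head != tail      -> buffer is non-empty
	 *   last_syn != tail  -> buffer holds at least one complete packet
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define BUFSIZE 8	/* power of two, as in evdev */

	struct client_model {
		unsigned int head, tail, last_syn;
	};

	static void queue_event(struct client_model *c, bool is_syn_report)
	{
		c->head = (c->head + 1) & (BUFSIZE - 1);
		if (is_syn_report)
			c->last_syn = c->head;	/* just past the SYN_REPORT */
	}

	int main(void)
	{
		struct client_model c = { 0, 0, 0 };

		queue_event(&c, false);		/* e.g. an EV_ABS event */
		printf("non-empty: %d, readable: %d\n",
		       c.head != c.tail, c.last_syn != c.tail);	/* 1, 0 */

		queue_event(&c, true);		/* EV_SYN/SYN_REPORT */
		printf("non-empty: %d, readable: %d\n",
		       c.head != c.tail, c.last_syn != c.tail);	/* 1, 1 */
		return 0;
	}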

> 
> Whereas tail != head means the buffer is non-empty, tail != last_syn
> should mean the buffer is readable.
> 
> It looks like we almost maintain those invariants here, except for SYN_DROPPED.
> 
> >        spinlock_t buffer_lock; /* protects access to buffer, head and tail */
> >        struct fasync_struct *fasync;
> >        struct evdev *evdev;
> > @@ -72,12 +73,16 @@ static void evdev_pass_event(struct evdev_client *client,
> >                client->buffer[client->tail].type = EV_SYN;
> >                client->buffer[client->tail].code = SYN_DROPPED;
> >                client->buffer[client->tail].value = 0;
> > +
> 
> Should use client->head here so that the SYN_DROPPED is readable.

It is readable, but we do not want to signal on it.

> 
> > +               client->last_syn = client->tail;
> >        }
> >
> >        spin_unlock(&client->buffer_lock);
> 
> Can use full_sync or something equivalent instead of repeating the
> condition on EV_SYN / SYN_REPORT here.
> 
> > -       if (event->type == EV_SYN)
> > +       if (event->type == EV_SYN && event->code == SYN_REPORT) {
> 
> I don't think it's safe to modify last_syn outside of the spin lock.
> This should be done above.

This is the only writer, plus we are running under event_lock with
interrupts off, so it is safe.
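
For reference, with the earlier SYN_DROPPED patch applied, the whole
function should come out roughly like this (reconstructed from the quoted
hunks, untested, so treat the details as approximate):

	static void evdev_pass_event(struct evdev_client *client,
				     struct input_event *event)
	{
		/*
		 * Called from evdev_event(), i.e. under dev->event_lock with
		 * interrupts off, so a plain spin_lock is enough and the
		 * unlocked last_syn update below has no other writer to race
		 * with.
		 */
		spin_lock(&client->buffer_lock);

		client->buffer[client->head++] = *event;
		client->head &= client->bufsize - 1;

		if (unlikely(client->head == client->tail)) {
			/*
			 * Overflow: drop the unconsumed events, leaving
			 * EV_SYN/SYN_DROPPED plus the newest event queued.
			 */
			client->tail = (client->head - 2) & (client->bufsize - 1);

			client->buffer[client->tail].time = event->time;
			client->buffer[client->tail].type = EV_SYN;
			client->buffer[client->tail].code = SYN_DROPPED;
			client->buffer[client->tail].value = 0;

			/*
			 * last_syn == tail marks the queue as not yet readable;
			 * SYN_DROPPED gets delivered together with the next
			 * complete packet.
			 */
			client->last_syn = client->tail;
		}

		spin_unlock(&client->buffer_lock);

		if (event->type == EV_SYN && event->code == SYN_REPORT) {
			client->last_syn = client->head;
			kill_fasync(&client->fasync, SIGIO, POLL_IN);
		}
	}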

> 
> > +               client->last_syn = client->head;
> >                kill_fasync(&client->fasync, SIGIO, POLL_IN);
> > +       }
> >  }
> 
> MISSING: We need to also modify evdev_event to only call
> wake_up_interruptible when enqueuing a sync.  It does not make sense
> to wake up waiters unless the device is about to become readable
> again.

Right, I'll add it.
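
I.e. the end of evdev_event() would become something like (untested; or
test full_sync if we end up passing the flag around as discussed above):

	rcu_read_unlock();

	/* only wake readers once the packet they can now read is complete */
	if (type == EV_SYN && code == SYN_REPORT)
		wake_up_interruptible(&evdev->wait);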

> 
> This also means we should wake after having written SYN_DROPPED.  We
> might need to make evdev_pass_event return (or take by reference) a
> boolean that indicates whether at least one client has become
> readable.

Why? Why would we not want to wait till the next SYNC to deliver
DROPPED?

> 
> Pseudo-code:
> 
> if (full_sync || evdev_became_readable_for_a_client_due_to_syn_dropped)
>     wake_up_interruptible(&evdev->wait);
> 
> >  /*
> > @@ -387,12 +392,12 @@ static ssize_t evdev_read(struct file *file, char __user *buffer,
> >        if (count < input_event_size())
> >                return -EINVAL;
> > -       if (client->head == client->tail && evdev->exist &&
> > +       if (client->last_syn == client->tail && evdev->exist &&
> >            (file->f_flags & O_NONBLOCK))
> >                return -EAGAIN;
> >
> >        retval = wait_event_interruptible(evdev->wait,
> > -               client->head != client->tail || !evdev->exist);
> > +               client->last_syn != client->tail || !evdev->exist);
> >        if (retval)
> >                return retval;
> >
> > @@ -421,7 +426,7 @@ static unsigned int evdev_poll(struct file *file, poll_table *wait)
> >        poll_wait(file, &evdev->wait, wait);
> >
> >        mask = evdev->exist ? POLLOUT | POLLWRNORM : POLLHUP | POLLERR;
> > -       if (client->head != client->tail)
> > +       if (client->last_syn != client->tail)
> >                mask |= POLLIN | POLLRDNORM;
> >
> >        return mask;
> 
> It looks to me like this patch isn't based on top of your previous
> patch for SYN_DROPPED.  Specifically, the SYN_DROPPED should be
> inserted before the newly enqueued event, but I don't see that above.

Yes it is - please check the chunk for evdev_pass_event again.
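
As a quick userspace sanity check of the new behaviour, a reader along
these lines (standalone sketch; the device path is just an example) should
only be woken from poll() once at least one complete packet, possibly
preceded by SYN_DROPPED, is queued, and can batch events up to each
SYN_REPORT:

	/* Minimal evdev consumer: wait for readable, then read and print a batch. */
	#include <fcntl.h>
	#include <poll.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <linux/input.h>

	int main(void)
	{
		struct input_event ev[64];
		struct pollfd pfd;
		int fd, i;
		ssize_t n;

		fd = open("/dev/input/event0", O_RDONLY);	/* example device node */
		if (fd < 0) {
			perror("open");
			return 1;
		}

		pfd.fd = fd;
		pfd.events = POLLIN;

		for (;;) {
			/* With this patch, POLLIN implies a complete packet is queued. */
			if (poll(&pfd, 1, -1) < 0) {
				perror("poll");
				break;
			}

			n = read(fd, ev, sizeof(ev));
			if (n < (ssize_t)sizeof(struct input_event)) {
				perror("read");
				break;
			}

			for (i = 0; i < n / (ssize_t)sizeof(struct input_event); i++) {
				if (ev[i].type == EV_SYN && ev[i].code == SYN_REPORT)
					printf("--- packet boundary ---\n");
				else
					printf("type %u code %u value %d\n",
					       (unsigned int)ev[i].type,
					       (unsigned int)ev[i].code, ev[i].value);
			}
		}

		close(fd);
		return 0;
	}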

-- 
Dmitry
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
