Message-ID: <20110328061214.GD31692@core.coreip.homeip.net>
Date:	Sun, 27 Mar 2011 23:12:14 -0700
From:	Dmitry Torokhov <dmitry.torokhov@...il.com>
To:	Jeffrey Brown <jeffbrown@...roid.com>
Cc:	linux-input@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] input: evdev: only wake poll on EV_SYN

First of all - please do not top post.

On Fri, Mar 25, 2011 at 04:03:18PM -0700, Jeffrey Brown wrote:
> It helps with every packet.  I have seen situations where user space
> somehow manages to read events faster than the driver enqueues them.
> 
> Pseudo-code basic processing loop:
> 
> struct input_event buffer[100];
> for (;;) {
>     poll(...);
>     count = read(fd, buffer, sizeof(buffer) / sizeof(buffer[0]));

I hope this is simply a typo in the pseudo-code - read() takes a size in
bytes, not a number of structures.

>     process(buffer, count / sizeof(buffer[0]));
> }
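
For clarity, a corrected version of that loop (just a sketch of the usual
evdev read pattern, not code from the patch) would pass the buffer size in
bytes and convert the returned byte count back to an event count:

#include <poll.h>
#include <stddef.h>
#include <unistd.h>
#include <linux/input.h>

/* Consumer supplied by the application, as in the pseudo-code above. */
extern void process(struct input_event *events, size_t count);

/* Sketch of a typical evdev reader loop; fd is an open /dev/input/eventN. */
static void read_loop(int fd)
{
	struct input_event buffer[100];
	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	for (;;) {
		if (poll(&pfd, 1, -1) < 0)
			break;

		/* read() takes a size in bytes and returns a byte count. */
		ssize_t bytes = read(fd, buffer, sizeof(buffer));
		if (bytes <= 0)
			break;

		process(buffer, bytes / sizeof(buffer[0]));
	}
}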
> 
> I've seen cases on a dual-core ARM processor where instead of reading
> a block of 71 events all at once, it ends up reading 1 event after
> another 71 times.  CPU usage for the reading thread climbs to 35%
> whereas it should be less than 5%.
> 
> The problem is that poll() wakes up after the first event becomes
> available.  So the reader wakes up, promptly reads the event and goes
> back to sleep waiting for the next one.  Of course nothing useful
> happens until a SYN_REPORT arrives to complete the packet.

Unfortunately, as I mentioned, your change only fixes the first packet.
Consider the following scenario:

 - input core delivers events; we postpone waking up waiters
   till we get EV_SYN/SYN_REPORT;
 - userspace is woken up and consumes the entire packet;
 - in the meantime the input core delivers 3 more events;
 - userspace calls poll();
 - the kernel adds the process to the poll waiters list (the poll_wait()
   call in evdev_poll());
 - evdev_poll() checks the condition, sees that there are events, and
   signals that data is ready even though we have not accumulated a
   full event packet.

Hence your change does not reliably fix the issue you are seeing.
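
For context, the relevant evdev_poll() logic at the time looked roughly
like the sketch below (simplified and from memory, not verbatim kernel
source). poll_wait() only registers the caller on the wait queue; readiness
is then decided by comparing the client ring buffer head and tail, so any
queued event makes the fd readable whether or not a SYN_REPORT has arrived:

static unsigned int evdev_poll(struct file *file, poll_table *wait)
{
	struct evdev_client *client = file->private_data;
	struct evdev *evdev = client->evdev;

	/* Register the caller on the wait queue; this does not block. */
	poll_wait(file, &evdev->wait, wait);

	/*
	 * Readiness check: any buffered event makes the fd readable,
	 * regardless of whether a full packet (ending in SYN_REPORT)
	 * has been queued.
	 */
	return ((client->head == client->tail) ? 0 : (POLLIN | POLLRDNORM)) |
	       (evdev->exist ? 0 : (POLLHUP | POLLERR));
}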

> 
> Adding a usleep(100) after the poll() is enough to allow the driver
> time to finish writing the packet into the evdev ring buffer before
> the reader tries to read it.  In that case, we mostly read complete 71
> event packets although sometimes the 100us sleep isn't enough so we
> end up reading half a packet instead of the whole thing, eg. 28 events
> + 43 events.
> 
> Instead it would be better if the poll() didn't wake up until a
> complete packet is available for reading all at once.

Unfortunately poll() does not know the intent of the userspace program:
will it try to consume the whole packet, or will it work in a
poll/read-one-event/poll-again mode? In the latter case you really do not
want to delay reading until the next EV_SYN comes along.

We might entertain the notion of not considering the device readable unless
there is a sync event that has not been consumed, but this is a significant
change in semantics and needs much more consideration.
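
Purely as an illustration of that idea (hypothetical names, not a patch):
one way to express "readable only when a complete packet is buffered" would
be to advance a separate packet head only when SYN_REPORT is queued, and
have the poll condition test that instead of the write head:

#include <linux/input.h>

/* Illustrative only; field and function names are made up. */
struct evdev_client_sketch {
	unsigned int head;		/* next slot to write */
	unsigned int packet_head;	/* head as of the last SYN_REPORT */
	unsigned int tail;		/* next slot to read */
	struct input_event buffer[64];
};

static void queue_event(struct evdev_client_sketch *client,
			const struct input_event *event)
{
	client->buffer[client->head++ & 63] = *event;

	/* Only a completed packet makes new data visible to poll(). */
	if (event->type == EV_SYN && event->code == SYN_REPORT)
		client->packet_head = client->head;
}

static int client_readable(const struct evdev_client_sketch *client)
{
	return client->packet_head != client->tail;
}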

> 
> Jeff.
> 
> On Fri, Mar 25, 2011 at 12:49 AM, Dmitry Torokhov
> <dmitry.torokhov@...il.com> wrote:
> > On Tue, Mar 22, 2011 at 06:04:04PM -0700, Jeff Brown wrote:
> >> On SMP systems, it is possible for an evdev client blocked on poll()
> >> to wake up and read events from the evdev ring buffer at the same
> >> rate as they are enqueued.  This can result in high CPU usage,
> >> particularly for MT devices, because the client ends up reading
> >> events one at a time instead of reading complete packets.  This patch
> >> ensures that the client only wakes from poll() when a complete packet
> >> is ready to be read.
> >
> > Doesn't this only help with very first packet after a pause in event
> > stream?
> >
> > --
> > Dmitry
> >

-- 
Dmitry
