Message-ID: <AANLkTikTOEimVER8eWT7yTUe7+4QS5epQWUAnoJD=B+=@mail.gmail.com>
Date:	Fri, 25 Mar 2011 16:03:18 -0700
From:	Jeffrey Brown <jeffbrown@...roid.com>
To:	Dmitry Torokhov <dmitry.torokhov@...il.com>
Cc:	linux-input@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] input: evdev: only wake poll on EV_SYN

It helps with every packet.  I have seen situations where user space
manages to drain events one at a time, just as fast as the driver enqueues them.

Pseudo-code for the basic processing loop:

struct input_event buffer[100];
struct pollfd pfd = { .fd = fd, .events = POLLIN };
for (;;) {
    poll(&pfd, 1, -1);
    /* read() takes and returns a byte count, not an element count */
    ssize_t bytes = read(fd, buffer, sizeof(buffer));
    process(buffer, bytes / sizeof(buffer[0]));
}

I've seen cases on a dual-core ARM processor where instead of reading
a block of 71 events all at once, it ends up reading 1 event after
another 71 times.  CPU usage for the reading thread climbs to 35%
whereas it should be less than 5%.

The problem is that poll() wakes up after the first event becomes
available.  So the reader wakes up, promptly reads the event and goes
back to sleep waiting for the next one.  Of course nothing useful
happens until a SYN_REPORT arrives to complete the packet.

Adding a usleep(100) after the poll() is enough to allow the driver
time to finish writing the packet into the evdev ring buffer before
the reader tries to read it.  In that case, we mostly read complete
71-event packets, although sometimes the 100us sleep isn't enough and we
end up reading half a packet instead of the whole thing, e.g. 28 events
+ 43 events.

Instead it would be better if the poll() didn't wake up until a
complete packet is available for reading all at once.

Jeff.

On Fri, Mar 25, 2011 at 12:49 AM, Dmitry Torokhov
<dmitry.torokhov@...il.com> wrote:
> On Tue, Mar 22, 2011 at 06:04:04PM -0700, Jeff Brown wrote:
>> On SMP systems, it is possible for an evdev client blocked on poll()
>> to wake up and read events from the evdev ring buffer at the same
>> rate as they are enqueued.  This can result in high CPU usage,
>> particularly for MT devices, because the client ends up reading
>> events one at a time instead of reading complete packets.  This patch
>> ensures that the client only wakes from poll() when a complete packet
>> is ready to be read.
>
> Doesn't this only help with very first packet after a pause in event
> stream?
>
> --
> Dmitry
>
