Message-ID: <bc84e68c0980466096b0d2f6aec95747@AcuMS.aculab.com>
Date:   Fri, 22 Nov 2019 11:17:51 +0000
From:   David Laight <David.Laight@...LAB.COM>
To:     linux-kernel <linux-kernel@...r.kernel.org>
CC:     network dev <netdev@...r.kernel.org>
Subject: epoll_wait() performance

I'm trying to optimise some code that reads UDP messages (RTP and RTCP) from a lot of sockets.
The 'normal' data pattern is that there is no data on half the sockets (RTCP) and
one message every 20ms on each of the others (RTP).
However there can be more than one message on a socket, and they all need to be read.
Since the code that processes the data runs every 10ms, the message-receiving code
also runs every 10ms (a massive gain when using poll()).
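
Roughly, the 10ms receive pass currently looks like this (nfds, fds[]
and handle_packet() are illustrative names, not the real code):

#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>

extern void handle_packet(int fd, const char *buf, ssize_t len);

static void receive_pass(struct pollfd *fds, nfds_t nfds)
{
    char buf[2048];

    if (poll(fds, nfds, 0) <= 0)
        return;
    for (nfds_t i = 0; i < nfds; i++) {
        if (fds[i].revents & POLLIN) {
            ssize_t len = recv(fds[i].fd, buf, sizeof(buf), MSG_DONTWAIT);
            if (len > 0)
                handle_packet(fds[i].fd, buf, len);
        }
    }
}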

While using recvmmsg() to read multiple messages might seem a good idea, it is much
slower than recv() when there is only one message (even recvmsg() is a lot slower).
(I'm not sure why those code paths are so slow; I suspect it is all the copy_from_user()
calls and the faffing with the user iov[].)
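
For comparison, a minimal recvmmsg() batch read looks something like
this (BATCH and the buffer sizes are arbitrary); note the per-slot
iovec/msghdr setup the kernel has to copy in even when only one
datagram is queued:

#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define BATCH 8

static int read_batch(int fd)
{
    static char bufs[BATCH][2048];
    struct mmsghdr msgs[BATCH];
    struct iovec iovs[BATCH];

    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < BATCH; i++) {
        iovs[i].iov_base = bufs[i];
        iovs[i].iov_len = sizeof(bufs[i]);
        msgs[i].msg_hdr.msg_iov = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }
    /* Number of datagrams received, or -1 (EAGAIN if none). */
    return recvmmsg(fd, msgs, BATCH, MSG_DONTWAIT, NULL);
}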

So, using poll(), we re-poll the fd after calling recv() to find out if there is a second message.
However the second poll has a significant performance cost (though less than using recvmmsg()).
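
The repoll itself is just a zero-timeout poll() on the single fd, e.g.:

#include <poll.h>

static int has_more_data(int fd)
{
    struct pollfd p = { .fd = fd, .events = POLLIN };

    return poll(&p, 1, 0) > 0 && (p.revents & POLLIN);
}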

If we use epoll() in level-triggered mode, a second epoll_wait() call (after the recv()) will
indicate that there is more data.
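
A rough sketch of that level-triggered scheme (EPOLLET deliberately not
set, so remaining data is re-reported; handle_packet() is illustrative):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>

extern void handle_packet(int fd, const char *buf, ssize_t len);

static void receive_pass_epoll(int epfd, struct epoll_event *evs,
                               int maxevents)
{
    char buf[2048];

    /* The second iteration picks up any queued second message. */
    for (int pass = 0; pass < 2; pass++) {
        int n = epoll_wait(epfd, evs, maxevents, 0);
        for (int i = 0; i < n; i++) {
            ssize_t len = recv(evs[i].data.fd, buf, sizeof(buf),
                               MSG_DONTWAIT);
            if (len > 0)
                handle_packet(evs[i].data.fd, buf, len);
        }
    }
}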

For poll() it doesn't make much difference how many fds are supplied to each system call.
The overall performance is much the same for 32, 64 or 500 (all the sockets).

For epoll_wait() that isn't true.
Supplying a buffer that is shorter than the list of 'ready' fds incurs a massive penalty.
With a buffer long enough for all the events, epoll() is somewhat faster than poll().
But with a 64-entry buffer it is much slower.
I've looked at the code and can't see why splicing the unread events back onto the ready list is so expensive.
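
Until that is understood, the only safe sizing seems to be one event
slot per watched socket, so nothing ever needs to be spliced back
(nsockets is illustrative):

#include <stdlib.h>
#include <sys/epoll.h>

static struct epoll_event *alloc_events(int nsockets)
{
    /* One slot per socket: epoll_wait() can return everything. */
    return calloc((size_t)nsockets, sizeof(struct epoll_event));
}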

I'd like to be able to change the code so that multiple threads read from the epoll fd.
That would mean running it in edge-triggered mode, with each thread reading a smallish
block of events; a rough sketch follows.
Any suggestions on how to efficiently read the 'unusual' additional messages from
the sockets?
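
The shape I have in mind is roughly this pthread-style worker (the
16-entry block and handle_packet() are illustrative); in edge mode each
socket has to be drained to EAGAIN or an edge could be missed:

#include <errno.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>

extern void handle_packet(int fd, const char *buf, ssize_t len);

static void *worker(void *arg)
{
    int epfd = *(int *)arg;
    struct epoll_event evs[16];    /* the 'smallish block' */
    char buf[2048];

    for (;;) {
        int n = epoll_wait(epfd, evs, 16, -1);
        for (int i = 0; i < n; i++) {
            int fd = evs[i].data.fd;
            ssize_t len;

            /* EPOLLET reports only changes: drain to EAGAIN. */
            while ((len = recv(fd, buf, sizeof(buf), MSG_DONTWAIT)) > 0)
                handle_packet(fd, buf, len);
        }
    }
    return NULL;
}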

FWIW the fastest way to read one RTP message every 20ms is to do a non-blocking recv() every 10ms.
The failing recv() is actually faster than either epoll() or two poll() actions.
(Although something is still needed to pick up the occasional second message.)
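
In code, that degenerate scheme is just this, called every 10ms (fds[]
holds the RTP sockets; handle_packet() is illustrative):

#include <sys/socket.h>
#include <sys/types.h>

extern void handle_packet(int fd, const char *buf, ssize_t len);

static void tick_recv(const int *fds, int nfds)
{
    char buf[2048];

    for (int i = 0; i < nfds; i++) {
        ssize_t len = recv(fds[i], buf, sizeof(buf), MSG_DONTWAIT);
        if (len > 0)
            handle_packet(fds[i], buf, len);
        /* -1/EAGAIN (the common case) is the cheap failing recv(). */
    }
}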

	David

