Message-ID: <20061102084608.GB22909@2ka.mipt.ru>
Date:	Thu, 2 Nov 2006 11:46:08 +0300
From:	Evgeniy Polyakov <johnpol@....mipt.ru>
To:	Eric Dumazet <dada1@...mosbay.com>
Cc:	zhou drangon <drangon.mail@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: [take22 0/4] kevent: Generic event handling mechanism.

On Thu, Nov 02, 2006 at 09:18:55AM +0100, Eric Dumazet (dada1@...mosbay.com) wrote:
> Evgeniy Polyakov wrote:
> >pipes will work with kevent's poll mechanisms only, so there will not be
> >any performance gain at all since it is essentially the same as epoll
> >design with waiting and rescheduling (all my measurements with 
> >epoll vs. kevent_poll always showed the same rates), pipes require the same
> >notifications as sockets for maximum performance.
> >I've put it into todo list.
> 
> Evgeniy, I think this part is *important*. I think most readers of lkml are 
> not aware of the exact mechanisms used in epoll, kevent poll, and 'kevent'.
> 
> I don't understand why epoll is bad for you, since for me 
> ep_poll_callback() is fast enough, even if we could make it touch fewer 
> cache lines by reordering 'struct epitem' correctly. My epoll_pipe_bench 
> doesn't change the rescheduling rate of the test machine.
> 
> Could you add to your home page some doc that clearly shows the path taken 
> by those 3 mechanisms for different event sources (at least sockets)?

It is already there:

"It [kevent] supports socket notifications (accept, sending and receiving),
network AIO (aio_send(), aio_recv() and aio_sendfile()), inode
notifications (create/remove), generic poll()/select() notifications and
timer notifications."
In each patch I also give a short description; the socket notification
patch covers the socket path specifically.

With the poll design we have to set up the following data: a
poll_table_struct, which contains a callback that will be called on each
sys_poll() -> driver's poll() -> poll_wait() invocation. That callback
allocates a new private structure, which must embed a wait_queue_t
(whose own callback is invoked each time wake_up() is called for the
given wait_queue_head) and which is linked onto that wait_queue_head.

Kevent takes a different approach: so-called origins (files, inodes,
sockets and so on) have queues of userspace requests; for example, a
socket origin can only have a queue containing the following events
($type.$event): socket.send, socket.recv, socket.accept. When new data
arrives, the appropriate event is marked as ready and moved into the
ready queue (both very short operations) and the requesting thread is
woken up; it can then fetch the ready events and requeue them (or remove
them, depending on flags). There are no allocations in
kevent_get_events() (epoll_wait() does not allocate either), and no
potentially long lists of wait_queue entries linked to the same
wait_queue_head_t that must be traversed on every wake_up(). Kevent also
has a much smaller memory footprint than epoll (a single kevent versus
an epitem plus an eppoll_entry).

> Eric

-- 
	Evgeniy Polyakov
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
