Date:	Thu, 5 Oct 2006 11:56:24 +0200
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Evgeniy Polyakov <johnpol@....mipt.ru>
Cc:	Ulrich Drepper <drepper@...il.com>,
	lkml <linux-kernel@...r.kernel.org>,
	David Miller <davem@...emloft.net>,
	Ulrich Drepper <drepper@...hat.com>,
	Andrew Morton <akpm@...l.org>, netdev <netdev@...r.kernel.org>,
	Zach Brown <zach.brown@...cle.com>,
	Christoph Hellwig <hch@...radead.org>,
	Chase Venters <chase.venters@...entec.com>,
	Johann Borck <johann.borck@...sedata.com>
Subject: Re: [take19 1/4] kevent: Core files.

On Thursday 05 October 2006 10:57, Evgeniy Polyakov wrote:

> Well, it is possible to create /sys/proc entry for that, and even now
> userspace can grow mapping ring until it is forbiden by kernel, which
> means limit is reached.

No need for yet another /sys/proc entry.

Right now, I (for example) may have a use for generic event handling, but for
a program that needs XXX.XXX handles and about XX.XXX events per second.

Right now this program uses epoll and hits no limit at all, once you raise
"ulimit -n" and the other kernel-wide tunables (which are of course not
related to epoll).

I cannot switch to your current kevent because of its hardcoded limits.

I may be wrong, but what is currently missing for me is:

- No hardcoded limit on the max number of events. (A process that can open
XXX.XXX files should be allowed to open a kevent queue with at least XXX.XXX
events.) Right now it's not clear what happens if the current limit is
reached.

- In order to avoid touching the whole ring buffer, it might be good to be
able to reset the indexes to the beginning when the ring buffer is empty. (So
if userland is responsive enough to consume events, only the first pages of
the mapping would be used: that saves L1/L2 CPU cache.) See the small sketch
just after this list.
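
Something like this, just to illustrate the rewind idea (names and layout are
made up, this is not the kevent ABI, and producer/consumer barriers are
omitted):

	struct ring_idx {
		unsigned int	head;	/* next slot the producer (kernel) fills   */
		unsigned int	tail;	/* next slot the consumer (userland) reads */
	};

	/*
	 * Consumer drains the ring; once it is empty, both indexes are
	 * rewound to 0 so the next burst of events lands again in the
	 * first pages of the mapping (and in the same few cache lines).
	 */
	static void drain_ring(struct ring_idx *r)
	{
		while (r->tail != r->head) {
			/* ... process event at slot r->tail ... */
			r->tail++;
		}
		r->head = r->tail = 0;	/* ring empty: rewind to the start */
	}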

A plus would be:

- A working/usable mmap ring buffer implementation, but I think it's not
mandatory. System calls are not that expensive, especially if you can batch
XX events per syscall (like epoll does; see the sketch below). The nice thing
with a ring buffer is that we touch fewer cache lines than, say, epoll, which
has a lot of linked structures.
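
For reference, this is roughly how the epoll program batches events today
(minimal sketch, error handling mostly omitted):

	#include <sys/epoll.h>

	#define BATCH	64	/* up to 64 events dequeued per syscall */

	static int event_loop(int epfd)
	{
		struct epoll_event ev[BATCH];
		int i, n;

		for (;;) {
			/* one epoll_wait() syscall returns a whole batch */
			n = epoll_wait(epfd, ev, BATCH, -1);
			if (n < 0)
				return -1;
			for (i = 0; i < n; i++) {
				/* ... handle ev[i].data.fd ... */
			}
		}
	}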

About mmap, I think you might want a hybrid scheme:

One writable page where userland writes its index (and holds one or more
futexes shared with the kernel), with appropriate locking in case multiple
threads want to dequeue events. In the fast path, no syscall is needed to
maintain this user index.

XXX read-only pages (read-only for userland, read/write for the kernel),
where the kernel writes its own index and, of course, the events.

Using separate cache lines avoids false sharing: the kernel can update its
own index and the events without paying the price of cache-line ping-pong.
It could use the futex infrastructure to wake up only one thread instead of
all the threads waiting for an event.
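
To be concrete, I am thinking of something along these lines (purely
illustrative: the structure and field names are invented, struct ukevent
stands for the event structure from your patch, and memory barriers are left
out):

	#define CACHE_LINE	64

	/* Page 0: mapped read/write by userland. */
	struct kq_user_page {
		unsigned int	user_index;	/* last event consumed by userland  */
		unsigned int	futex;		/* kernel does FUTEX_WAKE of 1 here */
	} __attribute__((aligned(4096)));

	/* Following pages: read-only for userland, read/write for the kernel. */
	struct kq_kernel_area {
		unsigned int	kernel_index;	/* last event produced by the kernel */
		char		pad[CACHE_LINE - sizeof(unsigned int)];
		struct ukevent	events[];	/* the ring of events itself */
	} __attribute__((aligned(4096)));

	/*
	 * Fast path on the consumer side: no syscall at all while events
	 * are pending, userland only advances its own index.
	 */
	static void consume(struct kq_user_page *up, struct kq_kernel_area *ka,
			    unsigned int ring_size)
	{
		while (up->user_index != ka->kernel_index) {
			/* ... process ka->events[up->user_index % ring_size] ... */
			up->user_index++;
		}
	}

The kernel would only ever write into the area that is read-only for
userland, and would kick the futex in the user page when kernel_index moves
ahead of user_index.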


Eric
