Message-ID: <a36005b50705040828l28f1d039oe58224ff18286ecd@mail.gmail.com>
Date:	Fri, 4 May 2007 08:28:43 -0700
From:	"Ulrich Drepper" <drepper@...il.com>
To:	"Davide Libenzi" <davidel@...ilserver.org>
Cc:	"Davi Arnaut" <davi@...ent.com.br>,
	"Eric Dumazet" <dada1@...mosbay.com>,
	"Andrew Morton" <akpm@...ux-foundation.org>,
	"Linus Torvalds" <torvalds@...ux-foundation.org>,
	"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>
Subject: Re: [patch 14/22] pollfs: pollable futex

On 5/3/07, Davide Libenzi <davidel@...ilserver.org> wrote:
> Why is it that futexes *must* be part of the "whole solution"? People need
> solutions to specific problems, not a bloated interface that, like a
> giant blob, includes everything just because it exists.

Sync objects are essential parts of many programs today and will be
part of most programs tomorrow.  Currently you cannot efficiently
implement working on multiple independent areas which are each
protected by some sync object (mutex, condvar, ...).  You have to
create a separate thread for each.  Looping over the objects with a
non-blocking mutex operation (trylock), for instance, is not an
option.  This is solved by being able to get events for the
availability of the sync object.
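
To illustrate what "a separate thread for each" or busy-polling means
in practice, here is a rough sketch; all names and numbers are made
up, and it shows the current workaround, not any proposed interface:

    /* Sketch only: with no way to sleep until "any of these mutexes
     * becomes available", a single thread has to busy-poll with
     * trylock (below), or the program spawns one blocking thread per
     * sync object.  Each areas[i].lock is assumed to be initialized
     * elsewhere with pthread_mutex_init(). */
    #include <pthread.h>
    #include <sched.h>

    #define NAREAS 16

    struct area {
        pthread_mutex_t lock;   /* protects this area's data */
        /* ... per-area state ... */
    };

    static struct area areas[NAREAS];

    void handle_any_area(void)
    {
        for (;;) {
            int progress = 0;

            for (int i = 0; i < NAREAS; i++) {
                if (pthread_mutex_trylock(&areas[i].lock) == 0) {
                    /* ... work on areas[i] ... */
                    pthread_mutex_unlock(&areas[i].lock);
                    progress = 1;
                }
            }
            if (!progress)
                sched_yield();  /* burns CPU or adds latency; both bad */
        }
    }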

And before you claim that this is not a common case, take a look at
WaitForMultipleObjects (with studly caps somewhere) in the Windows
API.  The actual interface is horrible, but the concept is sound (it
comes from VMS).  It is the basis of many programs on that platform:
the central loop contains such a call.  Currently, programs which use
any object that cannot be waited on would have to be completely
redesigned when ported to Linux.
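
For reference, such a central loop on Windows looks roughly like the
sketch below; handle creation, error handling and the dispatch logic
are omitted, so take it as an outline of the concept rather than real
code:

    /* Outline of a WaitForMultipleObjects-based central loop.  The
     * handles array may mix mutexes, events, processes, ... */
    #include <windows.h>

    void central_loop(HANDLE *handles, DWORD count)
    {
        for (;;) {
            DWORD r = WaitForMultipleObjects(count, handles,
                                             FALSE,     /* wake on any one */
                                             INFINITE);
            if (r == WAIT_FAILED)
                break;
            if (r < WAIT_OBJECT_0 + count) {
                DWORD idx = r - WAIT_OBJECT_0;
                /* handles[idx] is signaled: dispatch the work
                 * associated with that object here. */
            }
        }
    }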

There is much more.  As I tried to point out in last year's OLS
paper, central loops around such a call are the perfect scalability
mechanism, and this is what is needed for today's and tomorrow's
processors.
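
With what we have today, the equivalent central loop is built on
epoll and only works for objects with a working f_op->poll; the
sketch below leaves out the fd setup and uses an invented dispatch
step:

    /* Sketch of a per-thread central event loop around epoll_wait(). */
    #include <sys/epoll.h>
    #include <stdio.h>

    #define MAX_EVENTS 64

    void event_loop(int epfd)
    {
        struct epoll_event ev[MAX_EVENTS];

        for (;;) {
            int n = epoll_wait(epfd, ev, MAX_EVENTS, -1);
            if (n < 0) {
                perror("epoll_wait");
                break;
            }
            for (int i = 0; i < n; i++) {
                /* ev[i].data.ptr identifies the source: a socket, a
                 * pipe, ... but not, today, a mutex or condvar.
                 * Dispatch work for it here. */
            }
        }
    }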


> Before you try to bash a solution because it's costly, then you bounce
> back from another angle and say that a solution (pipes) that uses two
> descriptors, one file, one inode, one dentry and 4KB of kernel memory
> for each instance is a perfectly legal solution.

Stop.  I call the proposed code costly in terms of the code added to
the kernel, which must be maintained and kept in mind when writing
the real next-generation event mechanism.  Not having this code in
the kernel certainly would make a difference.


> Fast, I think we have that pretty much covered, with Ingo pointing out
> a few flaws in the numbers posted previously. Nice, I'll leave that out.

You again miss the context.  I was talking about the pipe-based
solution using a signal handler.
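
To make it concrete, this is the kind of construction I mean: a
signal handler pokes a pipe and the central loop watches the read
end.  This is my own sketch of the idea, not the exact code whose
numbers were posted, and each such notification object costs two
descriptors plus a file, inode, dentry and the pipe's kernel buffer:

    /* Sketch of a pipe-plus-signal-handler wakeup. */
    #include <unistd.h>
    #include <signal.h>

    static int notify_pipe[2];      /* [0] = read end, [1] = write end */

    static void wakeup_handler(int sig)
    {
        char c = 0;
        (void)sig;
        /* async-signal-safe: just poke the pipe */
        (void)write(notify_pipe[1], &c, 1);
    }

    int setup_notification(void)
    {
        if (pipe(notify_pipe) < 0)
            return -1;
        signal(SIGUSR1, wakeup_handler);
        /* notify_pipe[0] is then registered with poll/epoll in the
         * central loop; draining it consumes the wakeup. */
        return 0;
    }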


> Epoll scales and already covers a large number of things you may be
> interested in receiving events from. Basically everything that has a
> working f_op->poll.

epoll doesn't scale if every thread needs its own epoll set.  Besides
the overhead, this also creates huge program design problems: how do
you atomically remove a file descriptor from a collection of epoll
sets?
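
In concrete terms (a sketch, with invented names): with one epoll set
per worker thread, taking a descriptor out of play is a sequence of
system calls, and other threads can still receive and handle events
for it between the individual deletions:

    #include <sys/epoll.h>

    void remove_fd_everywhere(const int *epfds, int nthreads, int fd)
    {
        for (int i = 0; i < nthreads; i++)
            /* not atomic: threads 0..i-1 no longer see fd, threads
             * i..nthreads-1 still do, and may be handling it right now */
            epoll_ctl(epfds[i], EPOLL_CTL_DEL, fd, NULL);
    }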


> The other big piece is AIO. Now you can have *another* layer on top of
> AIO that is included in your blob interface, but why?

I don't know how you arrive at AIO now.  kevent itself is independent
of the AIO code, which was done at the same time by the same person.
AIO was just one kernel service which used the event functionality.
The two must be judged independently.