Message-ID: <m1ir9eaahu.fsf@ebiederm.dsl.xmission.com>
Date: Sat, 23 Jun 2007 10:42:05 -0600
From: ebiederm@...ssion.com (Eric W. Biederman)
To: Miklos Szeredi <miklos@...redi.hu>
Cc: davem@...emloft.net, viro@....linux.org.uk,
alan@...rguk.ukuu.org.uk, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] fix race in AF_UNIX
Miklos Szeredi <miklos@...redi.hu> writes:
> Right. But the devil is in the details, and (as you correctly point
> out later) to implement this, the whole locking scheme needs to be
> overhauled. Problems:
>
> - Using the queue lock to make the dequeue and the fd detach atomic
> wrt the GC is difficult, if not impossible: they are far from
> each other with various magic in between. It would need thorough
> understanding of these functions and _big_ changes to implement.
>
> - Sleeping on u->readlock in GC is currently not possible, since that
> could deadlock with unix_dgram_recvmsg(). That function could
> probably be modified to release u->readlock while waiting for
> data, similarly to unix_stream_recvmsg(), at the cost of some
> added complexity.
>
> - Sleeping on u->readlock is ruled out for a second reason: GC is holding
> unix_table_lock for the whole operation. We could release
> unix_table_lock, but then would have to cope with sockets coming
> and going, making the current socket iterator unworkable.
>
> So theoretically it's quite simple, but it needs big changes. And
> this wouldn't even solve all the problems with the GC, like being a
> possible DoS vector.
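On the u->readlock point: what unix_dgram_recvmsg() would need is the
pattern unix_stream_recvmsg() already follows, i.e. don't sleep with the
lock held.  Here is a minimal userspace sketch of just that pattern
(pthread names only; none of this is the actual AF_UNIX code):

/*
 * Sketch of the locking pattern under discussion: a receiver must not
 * keep "readlock" held while it sleeps waiting for data, otherwise a
 * collector that needs the same lock blocks forever.  The names
 * (readlock, queue_len, gc_thread) are illustrative stand-ins.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t readlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  data_ready = PTHREAD_COND_INITIALIZER;
static int queue_len;			/* protected by readlock */

static void *receiver(void *arg)
{
	pthread_mutex_lock(&readlock);
	while (queue_len == 0) {
		/* pthread_cond_wait() drops readlock while sleeping;
		 * this is the behaviour a dgram receiver would need so
		 * a GC that wants readlock is never blocked forever. */
		pthread_cond_wait(&data_ready, &readlock);
	}
	queue_len--;
	printf("receiver: dequeued one message\n");
	pthread_mutex_unlock(&readlock);
	return NULL;
}

static void *gc_thread(void *arg)
{
	/* The collector can take readlock even though the receiver is
	 * asleep, because the lock is not held across the wait. */
	pthread_mutex_lock(&readlock);
	printf("gc: scanned queue of length %d\n", queue_len);
	pthread_mutex_unlock(&readlock);
	return NULL;
}

int main(void)
{
	pthread_t r, g;

	pthread_create(&r, NULL, receiver, NULL);
	sleep(1);			/* let the receiver block first */
	pthread_create(&g, NULL, gc_thread, NULL);
	pthread_join(g, NULL);

	pthread_mutex_lock(&readlock);	/* produce one message */
	queue_len++;
	pthread_cond_signal(&data_ready);
	pthread_mutex_unlock(&readlock);

	pthread_join(r, NULL);
	return 0;
}

Because the lock is dropped across the wait, a collector that needs the
same lock always makes progress; holding it across the sleep is exactly
what would deadlock today.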
Making the GC fully incremental will solve the DoS vector problem as
well.  Basically you do a fixed amount of reclaim work each time a new
socket is allocated.
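To show the shape I mean (this is only a userspace illustration; the
list, the in_flight flag and the budget are made-up stand-ins, not the
real unix_gc() state), each allocation pays for a bounded scan of the
candidate list:

/*
 * Rough sketch of "a fixed amount of reclaim per allocation".
 * Nothing here is the real AF_UNIX garbage collector.
 */
#include <stdio.h>
#include <stdlib.h>

#define GC_BUDGET 4		/* max objects examined per allocation */

struct sock_stub {
	struct sock_stub *next;
	int in_flight;		/* stand-in for "dead SCM_RIGHTS cycle" */
};

static struct sock_stub *all_socks;		   /* candidate list */
static struct sock_stub **gc_cursor = &all_socks; /* where the last pass stopped */

/* Examine at most `budget` candidates, freeing the garbage ones. */
static void gc_step(int budget)
{
	while (budget-- > 0) {
		struct sock_stub *s = *gc_cursor;

		if (!s) {			/* wrapped: restart next time */
			gc_cursor = &all_socks;
			return;
		}
		if (s->in_flight) {		/* pretend this means "garbage" */
			*gc_cursor = s->next;
			free(s);
		} else {
			gc_cursor = &s->next;
		}
	}
}

static struct sock_stub *alloc_socket(void)
{
	struct sock_stub *s;

	gc_step(GC_BUDGET);		/* pay for the allocation up front */

	s = calloc(1, sizeof(*s));
	if (!s)
		return NULL;
	s->next = all_socks;
	all_socks = s;
	return s;
}

int main(void)
{
	for (int i = 0; i < 10; i++) {
		struct sock_stub *s = alloc_socket();
		if (s && (i % 3 == 0))
			s->in_flight = 1;	/* mark some as future garbage */
	}
	printf("allocated 10 stub sockets; GC did a bounded step each time\n");
	return 0;
}

The point is that no single allocation ever triggers an unbounded walk,
so a burst of socket creation cannot force one huge collection pass.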
It appears clear that, since we can't stop the world and garbage
collect, we need an incremental collector.
Eric