Message-ID: <20190226235912.GL2217@ZenIV.linux.org.uk>
Date: Tue, 26 Feb 2019 23:59:12 +0000
From: Al Viro <viro@...iv.linux.org.uk>
To: Jason Baron <jbaron@...mai.com>
Cc: Rainer Weikusat <rweikusat@...ktalk.net>, netdev@...r.kernel.org
Subject: Re: [RFC] nasty corner case in unix_dgram_sendmsg()
On Tue, Feb 26, 2019 at 03:35:39PM -0500, Jason Baron wrote:
> > I understand what the unix_dgram_peer_wake_me() is doing; I understand
> > what unix_dgram_poll() is using it for. What I do not understand is
> > what's the point of doing that in unix_dgram_sendmsg()...
> >
>
> Hi,
>
> So the unix_dgram_peer_wake_me() in unix_dgram_sendmsg() is there for
> epoll in edge-triggered mode. In that case, we want to ensure that if
> -EAGAIN is returned a subsequent epoll_wait() is not stuck indefinitely.
> Probably could use a comment...
*owwww*
Let me see if I've got it straight - you want the forwarding rearmed,
so that it would match the behaviour of ep_poll_callback() (i.e.
removing only when POLLFREE is passed)? Looks like an odd way to
do it, if that's what's happening...
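For reference, the hunk in unix_dgram_sendmsg() that does the rearming
looks roughly like this (quoting from memory, so details may be off):

	if (unix_peer(sk) != other ||
	    unix_dgram_peer_wake_me(sk, other)) {
		/* forwarder is now (re)armed on the peer's peer_wait;
		 * the next wakeup there gets relayed to our waitqueue,
		 * so edge-triggered epoll won't block forever */
		err = -EAGAIN;
		sk_locked = 1;
		goto out_unlock;
	}

i.e. arming the forwarder is a side effect of the very check that
decides to fail with -EAGAIN.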
While we are at it, why disarm a forwarder upon noticing that the peer
is dead?  Wouldn't it be simpler to move that
	wake_up_interruptible_all(&u->peer_wait);
in unix_release_sock() to just before the
	unix_state_unlock(sk);
currently sitting one line above it?  Then anyone seeing SOCK_DEAD on a
(locked) peer would be guaranteed that all forwarders are gone...
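IOW, something like this (sketch only; the surrounding lines are from
memory):

	unix_state_lock(sk);
	sock_orphan(sk);	/* sets SOCK_DEAD */
	sk->sk_shutdown = SHUTDOWN_MASK;
	sk->sk_state = TCP_CLOSE;
	wake_up_interruptible_all(&u->peer_wait);	/* moved under the lock */
	unix_state_unlock(sk);

with the wakeup done before the lock is dropped, so nobody can observe
SOCK_DEAD on the locked peer while forwarders are still hanging off
peer_wait.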
Another fun question about the same dgram sendmsg:
	if (unix_peer(sk) == other) {
		unix_peer(sk) = NULL;
		unix_dgram_peer_wake_disconnect_wakeup(sk, other);
		unix_state_unlock(sk);
		unix_dgram_disconnected(sk, other);
... and we are not holding any locks at the last line.  What happens
if we have thread A doing
	decide which address to talk to
	connect(fd, that address)
	send request over fd (with send(2) or write(2))
	read reply from fd (recv(2) or read(2))
in a loop, with thread B doing explicit sendto(2) over the same
socket?
Suppose B happens to send to the last server thread A was talking
to and finds it just closed (e.g. because the last request from
A had been "shut down", which the server has honoured).  B gets
ECONNREFUSED, as it ought to, but it can also end up disrupting the
next exchange of A.
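To make that concrete, a minimal userspace sketch of the pattern
(pick_server() and the socket paths are made up for illustration):

	#include <string.h>
	#include <sys/socket.h>
	#include <sys/un.h>
	#include <unistd.h>

	extern const char *pick_server(void);	/* hypothetical */
	static int fd;		/* shared AF_UNIX/SOCK_DGRAM descriptor */

	static void *thread_a(void *unused)
	{
		struct sockaddr_un srv = { .sun_family = AF_UNIX };
		char reply[128];

		for (;;) {
			strcpy(srv.sun_path, pick_server());
			connect(fd, (struct sockaddr *)&srv, sizeof(srv));
			send(fd, "request", 7, 0);
			/* the reply we wait for here may get purged */
			recv(fd, reply, sizeof(reply), 0);
		}
		return NULL;
	}

	static void *thread_b(void *unused)
	{
		struct sockaddr_un dst = { .sun_family = AF_UNIX };

		strcpy(dst.sun_path, "/tmp/previous-server");	/* made up */
		/* hits the server A just abandoned: gets ECONNREFUSED,
		 * and the unlocked unix_dgram_disconnected() may flush
		 * sk_receive_queue out from under A's recv() above */
		sendto(fd, "x", 1, 0, (struct sockaddr *)&dst, sizeof(dst));
		return NULL;
	}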
Shouldn't we rather extract the skbs from that queue *before*
dropping sk->lock? E.g. move them to a temporary queue, and flush
that queue after we'd unlocked sk...
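Something like this (untested sketch, just to show the shape):

	struct sk_buff_head hitlist;

	__skb_queue_head_init(&hitlist);
	...
	if (unix_peer(sk) == other) {
		unix_peer(sk) = NULL;
		unix_dgram_peer_wake_disconnect_wakeup(sk, other);
		/* steal the stale skbs while sk is still locked */
		spin_lock(&sk->sk_receive_queue.lock);
		skb_queue_splice_init(&sk->sk_receive_queue, &hitlist);
		spin_unlock(&sk->sk_receive_queue.lock);
		unix_state_unlock(sk);
		__skb_queue_purge(&hitlist);	/* no socket locks held */
		...
	}

with anything unix_dgram_disconnected() does beyond the purge (wakeup,
error signalling) kept after the unlock, as it is now.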