Message-ID: <555F8A9E.3050809@android.com>
Date:	Fri, 22 May 2015 12:59:26 -0700
From:	Mark Salyzyn <salyzyn@...roid.com>
To:	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	Hannes Frederic Sowa <hannes@...hat.com>
CC:	linux-kernel@...r.kernel.org,
	"David S. Miller" <davem@...emloft.net>,
	Al Viro <viro@...iv.linux.org.uk>,
	David Howells <dhowells@...hat.com>,
	Ying Xue <ying.xue@...driver.com>,
	Christoph Hellwig <hch@....de>, netdev@...r.kernel.org
Subject: Re: net/unix: sk_socket can disappear when state is unlocked

On 05/22/2015 11:16 AM, Hannes Frederic Sowa wrote:
> On Fri, May 22, 2015, at 18:24, Mark Salyzyn wrote:
>> On 05/22/2015 08:35 AM, Hannes Frederic Sowa wrote:
>>> I still wonder whether we need to actually recheck the condition rather
>>> than simply break out of unix_stream_data_wait:
>>>
>>> We return to the unix_stream_recvmsg loop and recheck
>>> sk_receive_queue. At this point sk_receive_queue is not really protected
>>> by unix_state_lock against concurrent modification from unix_release,
>>> so we could end up concurrently dequeueing packets if the socket is
>>> DEAD.
>> sock destroy (sic) is called before sock_orphan, which sets SOCK_DEAD, so
>> the receive queue has already been drained.
> I am still afraid that there is a race:
>
> When we break out of unix_stream_data_wait we most of the time hit the
> continue statement in unix_stream_recvmsg. Even though we have reacquired
> the state lock, we can end up in a situation where sk_receive_queue is
> not completely drained. We would then miss the recheck of the sk_shutdown
> mask, because it is possible we dequeue a non-NULL skb from the receive
> queue. This is because unix_release_sock acquires the state lock and sets
> the appropriate flags, but the draining of the receive queue happens
> without locks; the state lock is unlocked before that. So theoretically
> unix_release_sock and recvmsg could both dequeue skbs concurrently, with
> nondeterministic behavior.
>
> The fix would be to recheck SOCK_DEAD, or even better sk_shutdown, right
> after we reacquire the state lock, and break out of the loop altogether,
> maybe with -ECONNRESET.
>
> Thanks,
> Hannes
I am trying to figure out _how_ to appease your worries.
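
To make sure I am reading you right, the interleaving would be roughly this
(paraphrased from my reading of af_unix.c, not the literal code):

    unix_release_sock()                     unix_stream_recvmsg()
    -------------------                     ---------------------
    unix_state_lock(sk);
    sock_orphan(sk);        /* sets SOCK_DEAD */
    sk->sk_shutdown = SHUTDOWN_MASK;
    unix_state_unlock(sk);
                                            unix_state_lock(sk);
                                            skb = skb_peek(&sk->sk_receive_queue);
                                            /* skb may be non-NULL here, so the
                                               sk_shutdown recheck is skipped */
    while ((skb = skb_dequeue(&sk->sk_receive_queue)) != NULL)
            kfree_skb(skb);
    /* queue drained with no lock held, racing the peek on the right */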

Keep in mind that what I hit was already rare, and it resulted in a panic.
Nondeterministic packet delivery during shutdown is a given, but if I
buy that one can receive another frame after the packet flush and
RCV_SHUTDOWN, with SOCK_DEAD set under the lock before control returns to
the thread in the wait, would you be more comfortable with:

         do {
                 int chunk;
                 struct sk_buff *skb, *last;

                 unix_state_lock(sk);
                 last = skb = skb_peek(&sk->sk_receive_queue);
  again:
-               if (skb == NULL) {
+               if (!skb || sock_flag(sk, SOCK_DEAD)) {
                         unix_sk(sk)->recursion_level = 0;
                         if (copied >= target)
                                 goto unlock;

- or -

-               last = skb = skb_peek(&sk->sk_receive_queue);
+               skb = NULL;
+               /* check after the wait, but not in the again: loop? */
+               if (!sock_flag(sk, SOCK_DEAD))
+                       skb = skb_peek(&sk->sk_receive_queue);
+               last = skb;
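
Spelled out, the second alternative would make the top of the receive loop
in unix_stream_recvmsg read roughly as follows, with the rest of the branch
unchanged (an untested sketch of the idea, not a tree-applied patch):

        do {
                int chunk;
                struct sk_buff *skb, *last;

                unix_state_lock(sk);
                skb = NULL;
                if (!sock_flag(sk, SOCK_DEAD))
                        skb = skb_peek(&sk->sk_receive_queue);
                last = skb;
again:
                if (skb == NULL) {
                        unix_sk(sk)->recursion_level = 0;
                        if (copied >= target)
                                goto unlock;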

I know this does not give you -ECONNRESET (but we will get the
sock_error(sk) disposition; another check of the sock flag when err == 0
could fix that).
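
Concretely, in the no-data branch where we already do err = sock_error(sk),
something like this (again just a sketch, untested):

                        err = sock_error(sk);
                        if (!err && sock_flag(sk, SOCK_DEAD))
                                err = -ECONNRESET;      /* peer is gone for good */
                        if (err)
                                goto unlock;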

Is this going too far to deal with nondeterministic packet flow? Getting one
last packet through, or not, hardly seems worth the extra CPU cycles.

Sincerely -- Mark Salyzyn
