Date:	Wed, 11 Nov 2015 20:14:19 +0100
From:	Hannes Frederic Sowa <hannes@...essinduktion.org>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	netdev@...r.kernel.org, Dmitry Vyukov <dvyukov@...gle.com>
Subject: Re: [PATCH net v3] af-unix: fix use-after-free with concurrent readers
 while splicing

On Wed, Nov 11, 2015, at 19:58, Eric Dumazet wrote:
> On Tue, 2015-11-10 at 16:23 +0100, Hannes Frederic Sowa wrote:
> > While splicing an af-unix socket to a pipe we have to drop all
> > af-unix socket locks. While doing so we allow another reader to enter
> > unix_stream_read_generic(), which can read, copy and finally free
> > another skb. If that skb is exactly the one currently being spliced,
> > we get a use-after-free report from KASAN.
> > 
> > First, we must make sure the skb cannot be freed while it is being
> > used for the splice operation. We do that by simply incrementing its
> > use counter before unlocking the reader lock.
> > 
> > Stream sockets have the nice characteristic that we don't care about
> > zero-length writes and that such writes never reach the peer socket's
> > queue. Because of that, we can use the UNIXCB.consumed field as an
> > indicator of whether the skb has already been freed from the socket's
> > receive queue. If the skb turns out to be fully consumed after we have
> > locked the reader side again, we know it has been dropped by a second
> > reader. We then indicate a short read to user space and abort the
> > current splice operation.
> > 
> > This bug was found with syzkaller
> > (http://github.com/google/syzkaller) by Dmitry Vyukov.
> > 
> > Fixes: 2b514574f7e8 ("net: af_unix: implement splice for stream af_unix sockets")
> > Reported-by: Dmitry Vyukov <dvyukov@...gle.com>
> > Cc: Dmitry Vyukov <dvyukov@...gle.com>
> > Cc: Eric Dumazet <eric.dumazet@...il.com>
> > Acked-by: Eric Dumazet <edumazet@...gle.com>
> > Signed-off-by: Hannes Frederic Sowa <hannes@...essinduktion.org>
> > ---
> > v2: add missing consume_skb in error path of recv_actor
> > v3: move skb_get to separate line as proposed by Eric Dumazet (thanks!)
> > 
> 
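For reference, the fix described in the changelog boils down to roughly
the following pattern (a simplified sketch of the read loop in
unix_stream_read_generic() in net/unix/af_unix.c; locking and error
handling are elided and the exact placement in the merged patch may
differ):

        /* Keep the skb alive across the recv actor, which drops the
         * reader lock for the splice and lets other readers run.
         */
        skb_get(skb);
        chunk = state->recv_actor(skb, skip, chunk, state);
        /* If another reader fully consumed the skb while we were
         * unlocked, it has also been unlinked from the receive queue
         * and freed there; our extra reference is the last one.
         */
        drop_skb = !unix_skb_len(skb);
        consume_skb(skb);

        if (drop_skb) {
                /* Report a short read instead of touching the
                 * now-unlinked skb again.
                 */
                err = 0;
                break;
        }
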
> I believe there is another bug in unix_stream_sendpage()
> 
> skb = skb_peek_tail(&other->sk_receive_queue);   
> if (tail && tail == skb) {
> 
> This is clearly not safe?

Can you elaborate?

I use tail as a cookie to check whether we already tried to append to
the same tail skb with skb_append_pagefrags(). If a new skb arrives
during the allocation, which we do outside of the locks, we take that
one and try to append again (and free the old skb), so that we don't
introduce any reordering into the data stream.

You think that tail could be reused in the meanwhile?

Bye,
Hannes
