lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-Id: <1448491950.1848115.450243417.726E2DCB@webmail.messagingengine.com>
Date:	Wed, 25 Nov 2015 23:52:30 +0100
From:	Hannes Frederic Sowa <hannes@...essinduktion.org>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	Rainer Weikusat <rweikusat@...ileactivedefense.com>,
	Eric Dumazet <edumazet@...gle.com>,
	Dmitry Vyukov <dvyukov@...gle.com>,
	Benjamin LaHaise <bcrl@...ck.org>,
	"David S. Miller" <davem@...emloft.net>,
	Al Viro <viro@...iv.linux.org.uk>,
	David Howells <dhowells@...hat.com>,
	Ying Xue <ying.xue@...driver.com>,
	"Eric W. Biederman" <ebiederm@...ssion.com>,
	netdev <netdev@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	syzkaller <syzkaller@...glegroups.com>,
	Kostya Serebryany <kcc@...gle.com>,
	Alexander Potapenko <glider@...gle.com>,
	Sasha Levin <sasha.levin@...cle.com>
Subject: Re: use-after-free in sock_wake_async

On Wed, Nov 25, 2015, at 23:43, Eric Dumazet wrote:
> On Wed, 2015-11-25 at 23:32 +0100, Hannes Frederic Sowa wrote:
> > On Wed, Nov 25, 2015, at 23:09, Eric Dumazet wrote:
> > > On Wed, 2015-11-25 at 20:57 +0000, Rainer Weikusat wrote:
> > > 
> > > > I do agree that keeping the ->sk_data_ready outside of the lock will
> > > > very likely have performance advantages. That's just something I
> > > > wouldn't have undertaken because I'd be reluctant to make a fairly
> > > > complicated change to a lot of code.
> > > 
> > > All I am saying is that we can keep current performance.
> > > 
> > > We already have the core infrastructure, we only need to properly use
> > > it.
> > > 
> > > I will split my changes in two parts.
> > > 
> > > One part doing a very boring change of
> > > 
> > > rename SOCK_ASYNC_NOSPACE and SOCK_ASYNC_WAITDATA
> > > for X in SOCK_ASYNC_NOSPACE SOCK_ASYNC_WAITDATA 
> > > 
> > >  set_bit(X, &sk->sk_socket->flags) -> sk_set_bit(X, sk)
> > >  clear_bit(X, &sk->sk_socket->flags) -> sk_clear_bit(X, sk)
> > 
> > sk_set_bit and sk_clear_bit will forward the set_bit and clear_bit into
> > the socket_wq like you explained above?
> 
> In the first patch (no functional change), the helpers will look like
> 
> static void inline sk_set_bit(int nr, struct sock *sk)
> {
> 	set_bit(nr, &sk->sk_socket->flags);
> }
> 
> 
> Then the second patch will change the helper to :
> 
> static void inline sk_set_bit(int nr, struct sock *sk)
> {
> 	set_bit(nr, &sk->sk_wq_raw->flags);
> }

Yep, that looks sensible.

> > > The rename will help backports to catch code that might have been
> > > removed in recent kernels.
> > > 
> > > Then the second patch will do the actual changes, and they will look
> > > very sensible for people wanting to review them, and or familiar with
> > > the stack, do not worry ;)
> > 
> > Do you see a chance to inline socket_wq into struct socket and discard
> > struct socket_alloc in one go by rcu in socket_destroy_inode?
> 
> ???
> 
> I guess you missed the whole point to have socket_wq allocated outside
> of the inode :(

Yep, sure, so inode could be torn down while wq is freed by rcu
callback.

> inodes are not rcu protected (yet). I certainly don't want to mess with
> VFS, we have enough problems in net/ directory already.

I have seen filesystems already doing so in .destroy_inode, that's why I
am asking. The allocation happens the same way as we do with sock_alloc,
e.g. shmem. I actually thought that struct inode already provides an
rcu_head for exactly that reason.

Bye,
Hannes
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ