Date:	Sun, 22 Nov 2015 21:43:58 +0000
From:	Rainer Weikusat <rweikusat@...ileactivedefense.com>
To:	Rainer Weikusat <rweikusat@...ileactivedefense.com>
Cc:	Jason Baron <jbaron@...mai.com>,
	Dmitry Vyukov <dvyukov@...gle.com>,
	syzkaller <syzkaller@...glegroups.com>,
	Michal Kubecek <mkubecek@...e.cz>,
	Al Viro <viro@...iv.linux.org.uk>,
	"linux-fsdevel\@vger.kernel.org" <linux-fsdevel@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	David Miller <davem@...emloft.net>,
	Hannes Frederic Sowa <hannes@...essinduktion.org>,
	David Howells <dhowells@...hat.com>,
	Paul Moore <paul@...l-moore.com>, salyzyn@...roid.com,
	sds@...ho.nsa.gov, ying.xue@...driver.com,
	netdev <netdev@...r.kernel.org>,
	Kostya Serebryany <kcc@...gle.com>,
	Alexander Potapenko <glider@...gle.com>,
	Andrey Konovalov <andreyknvl@...gle.com>,
	Sasha Levin <sasha.levin@...cle.com>,
	Julien Tinnes <jln@...gle.com>,
	Kees Cook <keescook@...gle.com>,
	Mathias Krause <minipli@...glemail.com>
Subject: alternate queueing mechanism (was: [PATCH] unix: avoid use-after-free in ep_remove_wait_queue)

Rainer Weikusat <rweikusat@...ileactivedefense.com> writes:

[AF_UNIX SOCK_DGRAM throughput]

> It may be possible to improve this by tuning/changing the flow
> control mechanism. Off the top of my head, I'd suggest making the
> queue longer (the default value is 10) and delaying wakeups until the
> server has actually caught up, IOW, until the receive queue is empty
> or almost empty. But this ought to be done in a separate patch.

Because I was curious about the effects, I implemented this using a
design slightly different from the one I originally suggested, in
order to account for the different uses of the 'is the receive queue
full' check. The code uses a datagram-specific checking function:

static int unix_dgram_recvq_full(struct sock const *sk)
{
	struct unix_sock *u;

	u = unix_sk(sk);

	/*
	 * Once the queue has overflowed, keep reporting it as full
	 * until the receiving side clears the bit after draining it.
	 */
	if (test_bit(UNIX_DG_FULL, &u->flags))
		return 1;

	if (!unix_recvq_full(sk))
		return 0;

	/* The queue just crossed its limit: latch the overflow state. */
	__set_bit(UNIX_DG_FULL, &u->flags);
	return 1;
}
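
For completeness, the supporting declarations would look roughly as
follows (the name of the flags member and the bit value are just a
sketch; unix_recvq_full is the stock 4.3 helper the new function
wraps):

struct unix_sock {
	struct sock	sk;

	/* ... existing members ... */

	unsigned long	flags;
#define UNIX_DG_FULL	0	/* receive queue overflowed at some point */
};

static inline int unix_recvq_full(struct sock const *sk)
{
	return skb_queue_len(&sk->sk_receive_queue) > sk->sk_max_ack_backlog;
}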

unix_dgram_recvq_full gets called in place of unix_recvq_full for the
n:1 datagram checks (the call sites are sketched below), and the
following block

/*
 * Wake blocked writers only after the overflowed queue has drained
 * completely, not on every dequeued datagram.
 */
if (test_bit(UNIX_DG_FULL, &u->flags) &&
    !skb_queue_len(&sk->sk_receive_queue)) {
	__clear_bit(UNIX_DG_FULL, &u->flags);
	wake_up_interruptible_sync_poll(&u->peer_wait,
					POLLOUT | POLLWRNORM |
					POLLWRBAND);
}

is executed by unix_dgram_recvmsg (replacing the wakeup which was
previously done for every received datagram) in order to delay
wakeups, in case the queue overflowed earlier, until all queued
datagrams have been consumed. This has the nice additional side effect
that wakeups are never done for 1:1 connected datagram sockets (both
SOCK_DGRAM and SOCK_SEQPACKET), where they're of no use anyway.
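
The n:1 checks mentioned above are (going by the 4.3 code, so regard
the exact surrounding context as a sketch) the flow-control test in
unix_dgram_sendmsg and the writability test in unix_dgram_poll, both
now using the datagram-specific function:

/*
 * unix_dgram_sendmsg: decide whether to block (or return -EAGAIN)
 * before queueing onto an n:1 peer; the surrounding restart logic
 * remains unchanged.
 */
if (unix_peer(other) != sk && unix_dgram_recvq_full(other)) {
	/* block in unix_wait_for_peer or fail with -EAGAIN */
}

/*
 * unix_dgram_poll: don't report the socket as writable while the
 * peer's receive queue counts as full.
 */
if (unix_dgram_recvq_full(other))
	writable = 0;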

Compared to a stock 4.3 running the test program I posted (intended to
make the overhead noticeable by sending lots of small messages), the
average number of bytes sent per second increased by about 782,961.79
(ca 764.61K), i.e., by about 5.32% of the 4.3 figure (14,714,579.91),
for a fairly simple code change.
