Message-Id: <20071114.031022.183117678.davem@davemloft.net>
Date:	Wed, 14 Nov 2007 03:10:22 -0800 (PST)
From:	David Miller <davem@...emloft.net>
To:	nickpiggin@...oo.com.au
Cc:	clameter@....com, netdev@...r.kernel.org,
	herbert@...dor.apana.org.au, linux-kernel@...r.kernel.org
Subject: Re: 2.6.24-rc2: Network commit causes SLUB performance regression
 with tbench

From: Nick Piggin <nickpiggin@...oo.com.au>
Date: Wed, 14 Nov 2007 09:27:39 +1100

> OK, in vanilla kernels, the page allocator definitely shows higher
> in the results (than with Herbert's patch reverted).
 ...
> I can't see that these numbers show much useful, unfortunately.

Thanks for all of this data Nick.

So the thing that's being affected here in TCP is
net/ipv4/tcp.c:select_size(), specifically the else branch:

	int tmp = tp->mss_cache;
 ...
		else {
			int pgbreak = SKB_MAX_HEAD(MAX_TCP_HEADER);

			if (tmp >= pgbreak &&
			    tmp <= pgbreak + (MAX_SKB_FRAGS - 1) * PAGE_SIZE)
				tmp = pgbreak;
		}

This is deciding, in 'tmp', how much linear sk_buff space to
allocate.  'tmp' is initially set to the path MSS, which for
loopback is 16K minus the space necessary for packet headers.
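
To make the clamp concrete, here's a tiny userspace sketch of that
branch.  The constants (page size, frag count, and the pgbreak result)
are stand-in values for illustration, not what a real 2.6.24 build
would compute:

#include <stdio.h>

/* Stand-in constants for illustration only; the real values depend on
 * the kernel config and on sizeof(struct skb_shared_info). */
#define ILL_PAGE_SIZE		4096
#define ILL_MAX_SKB_FRAGS	18
#define ILL_PGBREAK		3584	/* pretend SKB_MAX_HEAD(MAX_TCP_HEADER) */

/* Mirrors the clamp in select_size(): if the MSS fits in pgbreak bytes
 * of linear data plus at most MAX_SKB_FRAGS page fragments, cap the
 * linear area at pgbreak and let the rest go into frags. */
static int select_size_sketch(int mss)
{
	int tmp = mss;

	if (tmp >= ILL_PGBREAK &&
	    tmp <= ILL_PGBREAK + (ILL_MAX_SKB_FRAGS - 1) * ILL_PAGE_SIZE)
		tmp = ILL_PGBREAK;

	return tmp;
}

int main(void)
{
	/* Loopback MSS is roughly 16K, so it gets clamped down to pgbreak. */
	printf("linear size for mss=16000: %d\n", select_size_sketch(16000));
	return 0;
}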

The SKB_MAX_HEAD() value has changed as a result of Herbert's
bug fix.   I suspect this 'if' test is passing both with and
without the patch.

But pgbreak is now smaller, and thus the skb->data linear
data area size we choose to use is smaller as well.
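
For reference, here's a rough userspace comparison of the two
calculations.  It assumes Herbert's fix replaced the cache-line
round-down with a subtraction of
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) (that's my reading of
the change), and the sizes below are stand-ins rather than the real
2.6.24 values; with these numbers the new pgbreak comes out smaller,
matching the behaviour described above:

#include <stdio.h>

/* Stand-in values for illustration; not the real 2.6.24 numbers. */
#define PAGE_SIZE	4096
#define SMP_CACHE_BYTES	64
#define SHINFO_SIZE	200	/* pretend sizeof(struct skb_shared_info) */
#define MAX_TCP_HEADER	224	/* pretend header reservation */

#define SKB_DATA_ALIGN(X) \
	(((X) + (SMP_CACHE_BYTES - 1)) & ~(SMP_CACHE_BYTES - 1))

/* Pre-fix: subtract skb_shared_info, then round down to a cache line. */
#define OLD_SKB_MAX_HEAD(X) \
	(((PAGE_SIZE - (X)) - SHINFO_SIZE) & ~(SMP_CACHE_BYTES - 1))

/* Post-fix (assumed form): subtract the cache-aligned skb_shared_info
 * footprint instead. */
#define NEW_SKB_MAX_HEAD(X) \
	((PAGE_SIZE - (X)) - SKB_DATA_ALIGN(SHINFO_SIZE))

int main(void)
{
	printf("old pgbreak = %d\n", OLD_SKB_MAX_HEAD(MAX_TCP_HEADER));
	printf("new pgbreak = %d\n", NEW_SKB_MAX_HEAD(MAX_TCP_HEADER));
	return 0;
}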

You can test whether this is precisely what is causing the performance
regression by using the old calculation only here in select_size().

Add something like this local to net/ipv4/tcp.c:

#define OLD_SKB_WITH_OVERHEAD(X)	\
	(((X) - sizeof(struct skb_shared_info)) & \
	 ~(SMP_CACHE_BYTES - 1))
#define OLD_SKB_MAX_ORDER(X, ORDER) \
	OLD_SKB_WITH_OVERHEAD((PAGE_SIZE << (ORDER)) - (X))
#define OLD_SKB_MAX_HEAD(X)		(OLD_SKB_MAX_ORDER((X), 0))

And then use OLD_SKB_MAX_HEAD() in select_size().
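
That is, the else branch quoted above would end up looking something
like this (an untested sketch of the idea):

		else {
			int pgbreak = OLD_SKB_MAX_HEAD(MAX_TCP_HEADER);

			if (tmp >= pgbreak &&
			    tmp <= pgbreak + (MAX_SKB_FRAGS - 1) * PAGE_SIZE)
				tmp = pgbreak;
		}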
-
