Message-ID: <1348085937.2636.54.camel@bwh-desktop.uk.solarflarecom.com>
Date: Wed, 19 Sep 2012 21:18:57 +0100
From: Ben Hutchings <bhutchings@...arflare.com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: David Miller <davem@...emloft.net>, <netdev@...r.kernel.org>
Subject: Re: [RFC] tcp: use order-3 pages in tcp_sendmsg()
On Wed, 2012-09-19 at 17:14 +0200, Eric Dumazet wrote:
> On Mon, 2012-09-17 at 13:07 -0400, David Miller wrote:
> > From: Eric Dumazet <eric.dumazet@...il.com>
> > Date: Mon, 17 Sep 2012 19:04:53 +0200
> >
> > > On Mon, 2012-09-17 at 19:02 +0200, Eric Dumazet wrote:
> > >
> > >> A driver already exports a dev->gso_max_size, dev->gso_max_segs, I guess
> > >> it could export a dev->max_seg_order (default to 0)
> > >
> > > Oh well, if we use a per-thread order-3 page, a driver won't define an
> > > order, but the max size of a segment (dev->max_seg_size).
> >
> > Since you said that your audit showed that most can handle arbitrary
> > segment sizes, it's better to default to infinity or similar.
> >
> > Otherwise we'll have to annotate almost every single driver with a
> > non-zero value, that's not an efficient way to handle this and
> > deploy the higher performance quickly.
>
> I did some tests and have hit no problems so far, even using splice() [ this
> one was tricky because it only deals with order-0 pages at the moment ]
>
> NICs tested: ixgbe, igb, bnx2x, tg3, Mellanox mlx4

I think sfc would also be fine with this; we split at 4K boundaries
regardless of the host page size.
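
Roughly like this, to illustrate (a sketch with made-up names, not our
actual TSO path):

	/* Sketch only, not sfc source: walk a fragment in pieces of at
	 * most 4K, so descriptor limits hold even if the fragment sits
	 * in an order-3 page.
	 */
	static void efx_map_in_4k_pieces(struct page *page,
					 unsigned int off, unsigned int len)
	{
		while (len) {
			/* Distance to the next 4K boundary, capped at
			 * the remaining length.
			 */
			unsigned int piece =
				min_t(unsigned int, len,
				      4096 - (off & 4095));

			/* ...dma_map_page() and descriptor setup for
			 * each piece would go here...
			 */
			off += piece;
			len -= piece;
		}
	}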
My only concern is fragmentation on busy machines making high-order
allocations more prone to failure (though this change might well slow
that fragmentation). The larger allocation size should at least be made
dependent on (sk->sk_allocation & GFP_KERNEL) == GFP_KERNEL. (Even
then, I've seen some stress test failures where ring reallocation
(similar size, GFP_KERNEL) fails. But those were done with an older
kernel version and the current mm should do better.)
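
To illustrate the guard I mean (a sketch only; the helper name and the
fallback policy are my own suggestion, not Eric's patch):

	/* Sketch: only attempt the order-3 allocation when the socket's
	 * allocation context is (at least) GFP_KERNEL, i.e. the caller
	 * may sleep and reclaim; otherwise stay with order-0 so a
	 * fragmented machine never sees a high-order atomic failure.
	 */
	static struct page *sk_alloc_frag_page(struct sock *sk, int *order)
	{
		gfp_t gfp = sk->sk_allocation;
		struct page *page;

		if ((gfp & GFP_KERNEL) == GFP_KERNEL) {
			page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN, 3);
			if (page) {
				*order = 3;
				return page;
			}
		}
		*order = 0;
		return alloc_page(gfp);
	}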
Ben.
> On loopback, performance of netperf goes from 31900 Mb/s to 38500 Mb/s,
> that's a 20% increase.
--
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.