Message-ID: <20090204085432.GA21638@1wt.eu>
Date:	Wed, 4 Feb 2009 09:54:32 +0100
From:	Willy Tarreau <w@....eu>
To:	Evgeniy Polyakov <zbr@...emap.net>
Cc:	David Miller <davem@...emloft.net>, herbert@...dor.apana.org.au,
	jarkao2@...il.com, dada1@...mosbay.com, ben@...s.com,
	mingo@...e.hu, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org, jens.axboe@...cle.com
Subject: Re: [PATCH v2] tcp: splice as many packets as possible at once

On Wed, Feb 04, 2009 at 11:12:01AM +0300, Evgeniy Polyakov wrote:
> On Wed, Feb 04, 2009 at 07:19:47AM +0100, Willy Tarreau (w@....eu) wrote:
> > Yes, myri10ge for the optimal 4080, but with e1000 too (though I don't
> > remember the exact optimal value; I think it was slightly lower).
> 
> Very likely it is related to the allocator: the same allocation
> overhead to get a page, but a 2.5 times bigger frame.
> 
> > For the myri10ge, could this be caused by the cache footprint then?
> > I can also retry with various values between 4 and 9k, including
> > values close to 8k. Maybe the fact that 4k is better than 9k is
> > because we get better filling of all pages?
> > 
> > I also remember having used a 7 kB MTU on e1000 and dl2k in the past.
> > BTW, a 7k MTU on my NFS server, which uses e1000, definitely stopped
> > the allocation failures that were polluting the logs, so it has been
> > running with that setting for years now.
> 
> Recent e1000 (e1000e) uses fragments, so it does not suffer from the
> high-order allocation failures.
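
As a rough illustration of the allocation pressure being described (my
sketch, not from the thread): assuming 4 KiB pages and a hypothetical
256 bytes of per-frame headroom, the buddy-allocator order needed for a
physically contiguous receive buffer grows with the MTU, while a
fragment-based receive path can build any frame out of order-0 pages:

/* alloc_order.c -- illustrative only: the page-allocation order a
 * driver would need for a physically contiguous receive buffer at a
 * given MTU. PAGE_SIZE and the 256-byte headroom are assumptions,
 * not values from any particular driver. */
#include <stdio.h>

#define PAGE_SIZE 4096u

/* Smallest order k such that (PAGE_SIZE << k) >= size. */
static int order_for(unsigned int size)
{
	int order = 0;
	while ((PAGE_SIZE << order) < size)
		order++;
	return order;
}

int main(void)
{
	const unsigned int mtus[] = { 1500, 4080, 7000, 9000 };

	for (unsigned int i = 0; i < sizeof(mtus) / sizeof(mtus[0]); i++) {
		unsigned int buf = mtus[i] + 256; /* hypothetical headroom */
		int order = order_for(buf);

		printf("MTU %4u -> %5u B contiguous -> order-%d (%u KiB block)\n",
		       mtus[i], buf, order, (PAGE_SIZE << order) / 1024);
	}
	return 0;
}

Order-2 (16 KiB) contiguous blocks are exactly what becomes scarce on a
fragmented machine, which matches the allocation failures mentioned
above; a fragment-based ring like e1000e's avoids them entirely.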

My server is running 2.4 :-), but I observed the same issues with older
2.6 as well. I can certainly imagine that things have changed a lot since,
but the initial point remains: jumbo frames are expensive to deal with,
and with recent NICs and drivers we might get close performance at
little additional cost. After all, the initial justification for jumbo
frames was the devastating interrupt rate, and all NICs coalesce
interrupts now.
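
As a back-of-the-envelope check (my arithmetic, with an assumed
10 Gbit/s link and a hypothetical 64-frames-per-interrupt coalescing
setting), the interrupt-rate argument works out like this:

/* irq_rate.c -- illustrative arithmetic only: frame and interrupt
 * rates at line rate for 1500-byte vs 9000-byte frames. The link
 * speed and coalescing factor are assumptions. */
#include <stdio.h>

int main(void)
{
	const double link_bps = 10e9;           /* assumed 10 Gbit/s link */
	const unsigned int frames_per_irq = 64; /* hypothetical coalescing */
	const unsigned int mtus[] = { 1500, 9000 };

	for (int i = 0; i < 2; i++) {
		/* 38 bytes of per-frame wire overhead: preamble+SFD (8),
		 * Ethernet header (14), FCS (4), inter-frame gap (12). */
		double fps = link_bps / ((mtus[i] + 38) * 8.0);

		printf("MTU %4u: %8.0f frames/s -> %8.0f irq/s at 1 frame/irq, "
		       "%6.0f irq/s at %u frames/irq\n",
		       mtus[i], fps, fps, fps / frames_per_irq, frames_per_irq);
	}
	return 0;
}

Uncoalesced, that is roughly 800k interrupts per second for 1500-byte
frames versus about 140k for 9k jumbos; with the assumed 64-frame
coalescing both fall to roughly 13k and 2k, so the MTU no longer
dominates the interrupt load.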

So if we can optimize all the infrastructure for extremely fast
processing of standard frames (1500 bytes) and still support jumbo
frames in a suboptimal mode, I think it could be a very good trade-off.
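
Since the patch in the Subject exercises splice(), here is a minimal
userspace sketch of the receive path under discussion; this is my
illustration rather than code from the patch, and the 64 KiB chunk size
and output file are arbitrary:

/* tcp_splice.c -- minimal sketch of socket->pipe->file splicing,
 * the kind of path the patch under discussion optimizes. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Drain a connected socket into out_fd via a pipe with splice(2),
 * so payload pages move through the kernel without a userspace copy. */
static int splice_sock_to_file(int sock, int out_fd)
{
	int pfd[2];
	ssize_t n, m;

	if (pipe(pfd) < 0)
		return -1;

	for (;;) {
		/* socket -> pipe */
		n = splice(sock, NULL, pfd[1], NULL, 65536,
			   SPLICE_F_MOVE | SPLICE_F_MORE);
		if (n <= 0)
			break;		/* 0 = EOF, <0 = error */
		/* pipe -> file */
		while (n > 0) {
			m = splice(pfd[0], NULL, out_fd, NULL, n,
				   SPLICE_F_MOVE);
			if (m <= 0) {
				n = -1;
				break;
			}
			n -= m;
		}
		if (n < 0)
			break;
	}
	close(pfd[0]);
	close(pfd[1]);
	return n < 0 ? -1 : 0;
}

int main(void)
{
	/* Assumes fd 0 is a connected TCP socket (e.g. run under inetd). */
	int fd = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return 1;
	return splice_sock_to_file(0, fd) ? 1 : 0;
}

The socket-to-pipe step is where the driver's receive buffers change
hands, which is why the buffer layout discussed above (fragments vs.
contiguous jumbo buffers) is visible in splice() throughput.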

Regards,
willy

