Date:	Thu, 07 Nov 2013 20:21:57 -0800
From:	Eric Dumazet <eric.dumazet@...il.com>
To:	Herbert Xu <herbert@...dor.apana.org.au>
Cc:	David Miller <davem@...emloft.net>, bhutchings@...arflare.com,
	christoph.paasch@...ouvain.be, netdev@...r.kernel.org,
	hkchu@...gle.com, mwdalton@...gle.com
Subject: Re: [PATCH v4 net-next] net: introduce dev_set_forwarding()

On Fri, 2013-11-08 at 11:23 +0800, Herbert Xu wrote:
> On Thu, Nov 07, 2013 at 06:51:53PM -0800, Eric Dumazet wrote:
> > On Thu, 2013-11-07 at 18:42 -0800, Eric Dumazet wrote:
> > 
> > > A normal TSO packet with 16 MSS sets up ~17 DMA descriptors,
> > > while GSO requires 2 DMA descriptors per MSS, plus a lot of overhead
> > > in sk_buff allocation/deallocation.
> > 
> > Not to mention the fact that a 64KB packet adds latency, since
> > high-prio packets have to wait until the whole preceding 64KB packet
> > has left the host.
> 
> That would be a bug in the GRO code since a high prio packet
> shouldn't have been merged in the first place and therefore
> the usual priority mechanism should allow it to preempt the
> 64KB packet.

Some users install a Qdisc (AQM) on their router to decide what is
high priority and what is not. Their iptables or qdisc filters can be
quite complex.

It might all be TCP, for example.

The GRO stack cannot make this decision.
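
As a rough sketch (eth0 and the ssh port are just placeholders here),
such a user setup could look like:

  # prio qdisc, with a u32 filter steering ssh traffic to the first band
  tc qdisc add dev eth0 root handle 1: prio bands 3
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
      match ip dport 22 0xffff flowid 1:1

Only these filters know that dport 22 is "high prio" on this box; at
GRO time we only see TCP segments.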

So let's say we receive on ingress a mix of high-prio packets and
low-prio TCP packets. If the GRO stack is able to build a super-big GRO
packet, then this super-big GRO packet becomes a head-of-line blocker.

At 1 Gbps, a 16-MSS packet holds the line for about 190 us.

At 45 MSS, you basically triple this latency.
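
Rough numbers, assuming a 1448 byte MSS:

  16 MSS: 16 * 1448 * 8 = ~185 Kbits -> ~185 us on a 1 Gbps link
  45 MSS: 45 * 1448 * 8 = ~521 Kbits -> ~521 us, i.e. roughly 3x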

What we probably want is a way to tune this latency, not to ignore the
problem by making even bigger GRO packets.

The only current choice for the user is to enable or disable GRO per
ingress port.
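
That is, something like the following, with eth0 as a placeholder:

  ethtool -K eth0 gro off   # no more HOL blocking, but no GRO gains either
  ethtool -K eth0 gro on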

That's a trivial patch, but net-next is closed at the moment.


