Message-ID: <1195170691.5745.10.camel@dell>
Date:	Thu, 15 Nov 2007 15:51:31 -0800
From:	"Michael Chan" <mchan@...adcom.com>
To:	"David Miller" <davem@...emloft.net>
cc:	mcarlson@...adcom.com, "netdev" <netdev@...r.kernel.org>,
	andy@...yhouse.net
Subject: Re: [PATCH 10/13] tg3: Increase the PCI MRRS

On Thu, 2007-11-15 at 14:41 -0800, David Miller wrote:
> From: "Matt Carlson" <mcarlson@...adcom.com>
> Date: Thu, 15 Nov 2007 14:20:10 -0800
> >
> > Keeping the MRRS at 512 introduces DMA latencies that effectively
> > prevent us from achieving line rate.  With a packet size of ~1.5K and
> > the MRRS at 512 bytes, each packet DMA will be broken into at least 3
> > DMA reads.  Each DMA read takes ~1 usec to initiate.  It is this
> > overhead that starts to cut into total throughput.
> 
> Ok, but wouldn't every networking device on PCI need to do this then?

No, it depends on the design.  For example, a bigger maximum payload
size alleviates the problem (tg3 hardware uses a 128-byte maximum
payload size).  Multiple DMA read channels that pipeline the
outstanding read requests also help.

We don't need to increase the MRRS on bnx2 hardware to get line rate,
for example.
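
To make Matt's arithmetic concrete, below is a minimal userspace sketch
of the per-packet read-request overhead as a function of MRRS.  The
~1 usec per-read initiation cost is his figure from above; the 1 Gb/s
link speed and the 1514-byte frame are illustrative assumptions, not
measured values.

#include <stdio.h>

int main(void)
{
	const int pkt_bytes = 1514;	/* full-size Ethernet frame (assumed) */
	const double init_us = 1.0;	/* ~1 usec per DMA read setup, per Matt */
	/* Time one frame occupies an assumed 1 Gb/s wire, in usec. */
	const double wire_us = pkt_bytes * 8.0 / 1000.0;
	int mrrs;

	for (mrrs = 128; mrrs <= 4096; mrrs *= 2) {
		int reads = (pkt_bytes + mrrs - 1) / mrrs;

		printf("MRRS %4d: %2d reads, %4.1f usec setup vs %.1f usec on wire\n",
		       mrrs, reads, reads * init_us, wire_us);
	}
	return 0;
}

At MRRS 512 the three reads cost ~3 usec of setup against ~12 usec of
wire time, about a quarter of the frame budget if the reads are not
pipelined, which is exactly where multiple DMA read channels or a
larger maximum payload size change the picture.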

> 
> I want to hear what you think about this wrt. what I mentioned about
> fairness above.  What's the point of PCI specifying a limit to comply
> with if nobody complies with the limit for localized performance
> reasons?

Fairness is harder to assess because it depends on chipset behavior.  A
PCIe link is point-to-point rather than shared, so there's no fairness
issue on the device's local link.  Any fairness issue will be at the
bridge, where traffic from multiple devices converges.

I don't know the exact rationale for the 512-byte default, but I will
try to find out.
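
For reference, the MRRS lives in bits 14:12 of the PCIe Device Control
register and is encoded as 128 << value, so 512 bytes corresponds to an
encoding of 2.  Below is a minimal sketch of how a driver could query
and raise it using the kernel's pcie_get_readrq()/pcie_set_readrq()
helpers; the helper names are kernel API, while whether a driver should
override the default at all is the policy question being debated in
this thread.

#include <linux/pci.h>

/*
 * Illustrative only: raise the Max Read Request Size to the spec
 * maximum of 4096 bytes.  pcie_set_readrq() takes the size in bytes
 * and handles the Device Control register encoding itself.
 */
static void example_bump_readrq(struct pci_dev *pdev)
{
	if (pcie_get_readrq(pdev) < 4096)
		pcie_set_readrq(pdev, 4096);
}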

> 
> I think this is an important issue.  Someone down the road is going to
> see bad disk throughput when doing lots of network transfers and
> wonder why that is.  It will be hard to debug, but it won't be
> difficult for us to do something proactive about this right now to
> prevent that problem from happening in the first place.
> 
> Thanks.
> 
