Message-ID: <54417044.5090602@gmail.com>
Date:	Fri, 17 Oct 2014 12:38:44 -0700
From:	Alexander Duyck <alexander.duyck@...il.com>
To:	Eric Dumazet <eric.dumazet@...il.com>,
	Alexander Duyck <alexander.h.duyck@...hat.com>
CC:	David Laight <David.Laight@...LAB.COM>,
	"Jiafei.Pan@...escale.com" <Jiafei.Pan@...escale.com>,
	David Miller <davem@...emloft.net>,
	"jkosina@...e.cz" <jkosina@...e.cz>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"LeoLi@...escale.com" <LeoLi@...escale.com>,
	"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>
Subject: Re: [PATCH] net: use hardware buffer pool to allocate skb

On 10/17/2014 12:02 PM, Eric Dumazet wrote:
> On Fri, 2014-10-17 at 11:28 -0700, Alexander Duyck wrote:
>
>> So long as it doesn't impact performance significantly, I am fine with 
>> it.  My concern is that you are bringing up issues that none of the 
>> customers were bringing up when I was at Intel, and the fixes you are 
>> proposing are likely to result in customers seeing things they will 
>> report as issues.
> We regularly hit these issues at Google.
>
> We have memory containers, and we detect quite easily if some layer is
> lying, because we can't afford having 20% of headroom on our servers.

Data is useful here.  If you can give enough detail about the setup to
reproduce it, then we can actually start looking at fixing it.  Otherwise
it is just your anecdotal data versus mine.
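
To be concrete about what "lying" means here: truesize is what the
stack charges against a socket's receive budget, so an understated
truesize quietly defeats limits like tcp_rmem.  A rough sketch of the
accounting, abridged from the skb_set_owner_r() pattern in net/core
(comments mine):

    static inline void skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
    {
            skb->sk = sk;
            skb->destructor = sock_rfree;
            /* if truesize understates the memory actually pinned,
             * this charge understates it too, and the receive
             * buffer limits stop protecting us */
            atomic_add(skb->truesize, &sk->sk_rmem_alloc);
            sk_mem_charge(sk, skb->truesize);
    }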

> I am not claiming IGB is the only offender.
>
> I am sorry if you believe it was an attack on IGB, or any network
> driver.

My concern is that igb is guilty of being off by at most a factor of 2.
What about the drivers and implementations that are off by much larger
factors?  I'd be much more interested in this if there were data to
back up your position.  Right now it is mostly just conjecture, and my
concern is that this type of change may cause more harm than good.
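
For anyone keeping score, the factor of 2 comes from the half-page
receive scheme: the driver hands the stack a 2K fragment and reports
2K of truesize, while the full 4K page may stay pinned.  A simplified
sketch of the pattern (illustrative names, not the actual igb code):

    /* attach one 2K half of a 4K page to the skb */
    skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
                    rx_buffer->page_offset, len, PAGE_SIZE / 2);
    /* if the other half is never recycled, the whole page stays
     * allocated while only half is accounted for: off by 2x */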

> truesize should really be the thing that protects us from OOM,
> and apparently driver authors hitting TCP performance problems
> think it is better to simply let TCP do no garbage collection.

One key point to keep in mind is that igb performance should take a
pretty hard hit if pages aren't being freed.  The overhead difference
between the page reuse path and the non-reuse path is pretty significant.
If this is a scenario you are actually seeing, it would be of interest.
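
Roughly, the reuse decision looks like the sketch below (field names
simplified; the NUMA check and DMA sync details are elided):

    /* reusable only when the stack has dropped its reference and
     * the driver's is the sole remaining one */
    if (page_count(rx_buffer->page) == 1) {
            /* flip to the other 2K half; no alloc, no DMA map */
            rx_buffer->page_offset ^= PAGE_SIZE / 2;
    } else {
            /* stack still holds the page: take the slow path */
            page = alloc_page(GFP_ATOMIC);
            dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE,
                               DMA_FROM_DEVICE);
    }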

> It seems that nobody cares or even understands what I am saying,
> so I should probably not care and just suggest Google buy petabytes
> of memory, right?

That's not what I am saying, but there is a trade-off we always have to
take into account.  Cutting memory overhead will likely have an impact
on performance.  I would like to make the best-informed trade-off in
that regard rather than always assuming the worst case for the driver.

Thanks,

Alex
