Message-ID: <54415FEA.5000705@redhat.com>
Date:	Fri, 17 Oct 2014 11:28:58 -0700
From:	Alexander Duyck <alexander.h.duyck@...hat.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	David Laight <David.Laight@...LAB.COM>,
	"Jiafei.Pan@...escale.com" <Jiafei.Pan@...escale.com>,
	David Miller <davem@...emloft.net>,
	"jkosina@...e.cz" <jkosina@...e.cz>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"LeoLi@...escale.com" <LeoLi@...escale.com>,
	"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>
Subject: Re: [PATCH] net: use hardware buffer pool to allocate skb


On 10/17/2014 09:55 AM, Eric Dumazet wrote:
> On Fri, 2014-10-17 at 07:40 -0700, Alexander Duyck wrote:
>> On 10/17/2014 02:11 AM, David Laight wrote:
>>> From: Alexander Duyck
>>> ...
>>>> Actually, it doesn't seem that anything holds onto the 4K page for
>>>> very long, at least from the driver's perspective.  It is one of the
>>>> reasons why I went for the page reuse approach rather than just
>>>> partitioning a single large page.  It allows us to avoid calling
>>>> IOMMU map/unmap for the pages, since the entire page is usually back
>>>> in the driver's ownership before we need to reuse the portion given
>>>> to the stack.
>>> That is almost certainly true for most benchmarks, since benchmark
>>> processes consume receive data.
>>> But what about real-life situations?
>>>
>>> There must be some 'normal' workloads where receive data doesn't get consumed.
>>>
>>> 	David
>>>
>> Yes, but for workloads where receive data doesn't get consumed, it is
>> very unlikely that much receive data is generated.
> This is very optimistic.
>
> Any kind of flood can generate 5 or 6 Million packets per second.

That is fine.  The first 256 buffers (the default descriptor ring size) 
might use 4K while reporting a truesize of 2K; after that, each page is 
guaranteed to be split in half, so we get at least two uses per page.
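
For reference, here is a minimal sketch of the half-page reuse pattern I am 
describing; the structure and helper names are illustrative, not the actual 
igb receive path:

/*
 * Illustrative sketch of splitting a 4K receive page in half and reusing
 * it.  Names are hypothetical, not taken from the igb driver.
 */
#include <linux/mm.h>
#include <linux/types.h>

struct rx_page_buffer {
	struct page *page;
	unsigned int page_offset;	/* 0 or 2048 within the 4K page */
	dma_addr_t dma;
};

/*
 * Reuse is only safe when the driver holds the sole reference, i.e. the
 * half previously handed to the stack has already been freed.
 */
static bool rx_page_is_reusable(const struct rx_page_buffer *buf)
{
	return page_count(buf->page) == 1;
}

/*
 * Flip to the other half for the next descriptor and take a reference for
 * the half now owned by the stack, so each page gets at least two uses
 * before a new allocation (and a new IOMMU mapping) is needed.
 */
static void rx_page_flip(struct rx_page_buffer *buf)
{
	buf->page_offset ^= 2048;
	get_page(buf->page);
}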

> So under stress we may consume twice as much RAM as allotted in tcp_mem.
> (About 3 GBytes per second, think about it.)

I see what you are trying to get at, but I don't see how my scenario is 
worse than the setups that use a large page and partition it.

> This is fine, if admins are aware of that and can adjust tcp_mem
> accordingly to this.

I can say I have never had a single report of us feeding too much memory 
to the sockets; if anything, the complaints I have seen have always been 
that the socket is being starved because too much memory is being used to 
move small packets.  That is one of the reasons I decided we had to have a 
copy-break built in for packets 256B and smaller.  It doesn't make much 
sense to allocate 2K + ~1K (skb + skb->head) for 256B or less of payload 
data.
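
Roughly, the copy-break looks like this; a hedged sketch, with the helper 
name and threshold handling mine rather than from any particular driver:

/*
 * Copy-break on receive: payloads of 256B or less are copied into a small
 * freshly allocated skb so the 2K half-page can be reused immediately and
 * the socket is only charged a small truesize.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/string.h>

#define RX_COPYBREAK	256

static struct sk_buff *rx_try_copybreak(struct net_device *dev,
					const void *data, unsigned int len)
{
	struct sk_buff *skb;

	if (len > RX_COPYBREAK)
		return NULL;	/* caller attaches the page fragment instead */

	skb = netdev_alloc_skb_ip_align(dev, len);
	if (!skb)
		return NULL;

	memcpy(skb_put(skb, len), data, len);
	return skb;
}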

> Apparently none of your customers suffered from this, maybe they had
> enough headroom to absorb the over commit or they trusted us and could not
> find culprit if they had issues.

Correct.  I've never received complaints about memory overcommit.  Like I 
have said, in most cases we get the page back anyway, so we usually get a 
good return on the page recycling.

> Open 50,000 TCP sockets and do not read data on 50% of them (pretend you
> are busy with disk access or doing CPU-intensive work).  As traffic is
> interleaved (between consumed and non-consumed data), you'll have the side
> effect of consuming more RAM than advertised.

Yes, but in the case we are talking about it is only off by a factor of 
2.  How do you account for setups such as the skb allocation code that 
spreads a 32K page over multiple frames?  In your scenario I suspect it 
wouldn't be uncommon to end up with multiple spots where only a few 
sockets are holding an entire 32K page for some period of time.  So does 
that mean we should charge anybody who uses netdev_alloc_skb with 32K of 
overhead, since there are scenarios where that can happen?
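
For illustration, the frag-cache behaviour I am referring to works roughly 
like this; the names are hypothetical, not the actual netdev_alloc_frag 
implementation:

/*
 * One high-order (32K) page is carved into consecutive buffers, so a
 * single long-lived skb can keep the whole page pinned.
 */
#include <linux/gfp.h>
#include <linux/mm.h>

#define FRAG_PAGE_ORDER	3	/* 8 * 4K = 32K */

struct frag_cache {
	struct page *page;
	unsigned int offset;
};

/* Assumes size <= 32K. */
static void *frag_alloc(struct frag_cache *fc, unsigned int size)
{
	if (!fc->page || fc->offset + size > (PAGE_SIZE << FRAG_PAGE_ORDER)) {
		if (fc->page)
			put_page(fc->page);	/* drop the cache's own reference */
		fc->page = alloc_pages(GFP_ATOMIC | __GFP_COMP, FRAG_PAGE_ORDER);
		if (!fc->page)
			return NULL;
		fc->offset = 0;
	}

	/*
	 * Every buffer handed out holds a page reference; the 32K page is
	 * only freed once the last consumer releases its piece.
	 */
	get_page(fc->page);
	fc->offset += size;
	return page_address(fc->page) + fc->offset - size;
}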

> Compare /proc/net/protocols (grep TCP /proc/net/protocols) and output of
> 'free', and you'll see that we are not good citizens.

I'm assuming there is some sort of test I should be running while I do 
this?  Otherwise the current dump of those is not very interesting, 
because my system is idle.

> I will work on TCP stack, to go beyond what I did in commit
> b49960a05e3212 ("tcp: change tcp_adv_win_scale and tcp_rmem[2]")
>
> So that TCP should not care if a driver chose to potentially use 4K per
> MSS.

So long as it doesn't impact performance significantly, I am fine with 
it.  My concern is that you are bringing up issues that none of the 
customers raised when I was at Intel, and the fixes you are proposing are 
likely to result in customers seeing things they will report as issues.

> Right now, it seems we can drop a few packets and get a slight reduction in
> throughput (TCP is very sensitive to losses, even if we drop 0.1% of packets).

Yes, I am well aware of that, and it is my concern.  If you increase 
truesize, it will cut the amount of queueing in half, and then igb will 
start seeing drops when it has to deal with bursty traffic, and people 
will start to complain about a performance regression.  That is what I 
want to avoid.
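
To make the arithmetic concrete: the receive queue is bounded by truesize 
bytes, not payload bytes, so a check along these lines (a sketch mirroring 
the kind of test done in the TCP receive path, not the actual code) admits 
half as many packets once the per-packet truesize doubles:

#include <linux/skbuff.h>
#include <net/sock.h>

/*
 * Each queued skb is charged skb->truesize against sk_rcvbuf, so e.g. a
 * 6MB receive buffer holds ~3072 packets at 2K truesize but only ~1536 at
 * 4K truesize, regardless of how small the payload is.
 */
static bool rcvbuf_would_overflow(const struct sock *sk,
				  const struct sk_buff *skb)
{
	return atomic_read(&sk->sk_rmem_alloc) + skb->truesize >
	       (unsigned int)sk->sk_rcvbuf;
}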

Thanks,

Alex

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
