Message-ID: <57E15390.9030306@akamai.com>
Date:   Tue, 20 Sep 2016 11:19:44 -0400
From:   Jason Baron <jbaron@...mai.com>
To:     "Mintz, Yuval" <Yuval.Mintz@...ium.com>,
        "davem@...emloft.net" <davem@...emloft.net>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "Ariel.Elior@...gic.com" <Ariel.Elior@...gic.com>
Subject: Re: [PATCH net-next 2/2] bnx2x: allocate mac filtering pending list
 in PAGE_SIZE increments



On 09/20/2016 11:00 AM, Mintz, Yuval wrote:
>>> The question I raised was whether it actually makes a difference under
>>> such circumstances whether the device would actually filter those
>>> multicast addresses or be completely multicast promiscuous.
>>> e.g., whether it's significant to be filtering out multicast ingress
>>> traffic when you're already allowing 1/2 of all random multicast
>>> packets to be classified for the interface.
>>>
>>
>> Agreed, I think this is the more interesting question here. I thought that we
>> would want to make sure we are using most of the bins before falling back to
>> multicast ingress. The reason being that even if it's more expensive for the NIC to
>> do the filtering than the multicast mode, it would be more than made up for by
>> not having to drop the traffic higher up the stack. So I think if we can determine the
>> percent of the bins that we want to use, we can then back into the average
>> number of filters required to get there. As I said, I thought we would want to
>> make sure we filled basically all the bins (with a high probability, that is) before
>> falling back to multicast, and so I threw out 2,048.
>
> AFAIK, configuring multiple filters doesn't incur any performance penalty
> on the adapter side.
> And I agree that from an 'offloading' perspective it's probably better to
> filter in HW even if the gain is negligible.
> So for the upper limit - there's not much of a reason for it; the only gain
> would be to prevent the driver from temporarily allocating lots and lots
> of memory for an unnecessary configuration.
>
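To put a rough number on the bin-filling point quoted above: by the
coupon-collector argument, occupying every one of n uniformly hashed bins
takes about n * H_n random filters on average. A quick standalone estimate
(the 256-bin count is an assumption for illustration, not taken from the
bnx2x code):

/* Coupon-collector estimate: expected number of uniformly hashed
 * addresses needed before every bin of an n-bin hash is occupied,
 * E = n * H_n. */
#include <stdio.h>

int main(void)
{
        const int nbins = 256;          /* assumed bin count */
        double expected = 0.0;
        int i;

        /* H_n = 1/1 + 1/2 + ... + 1/n */
        for (i = 1; i <= nbins; i++)
                expected += (double)nbins / i;

        printf("bins=%d, expected filters to fill all bins ~ %.0f\n",
               nbins, expected);
        return 0;
}

For 256 bins this comes out to roughly 1,570 filters on average, so a cap
like the 2,048 mentioned above would leave some headroom.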

Ok. We already have an upper limit, to an extent, with
/proc/sys/net/ipv4/igmp_max_memberships. And as posted, I didn't include
one because of the higher-level limits already in place.
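On the allocation concern, the idea in this series is to grow the pending
list one page at a time rather than in a single large contiguous buffer.
A minimal userspace sketch of that approach (struct and function names
here are hypothetical, not the actual driver code):

/* Grow a pending filter list in PAGE_SIZE chunks instead of one
 * large allocation. */
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct mac_entry {
        unsigned char mac[6];
};

struct pending_chunk {
        struct pending_chunk *next;     /* singly linked page-sized chunks */
        size_t used;                    /* entries used in this chunk */
        struct mac_entry entries[];     /* flexible array filling the page */
};

#define ENTRIES_PER_CHUNK \
        ((PAGE_SIZE - sizeof(struct pending_chunk)) / sizeof(struct mac_entry))

/* Append one MAC to the pending list, allocating one page at a time. */
static int pending_add(struct pending_chunk **head, const unsigned char *mac)
{
        struct pending_chunk *c = *head;

        if (!c || c->used == ENTRIES_PER_CHUNK) {
                c = malloc(PAGE_SIZE);  /* exactly one page per chunk */
                if (!c)
                        return -1;
                c->next = *head;
                c->used = 0;
                *head = c;
        }
        memcpy(c->entries[c->used++].mac, mac, 6);
        return 0;
}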

Thanks,

-Jason
