Date:   Tue, 20 Sep 2016 07:41:43 +0000
From:   "Mintz, Yuval" <Yuval.Mintz@...ium.com>
To:     Jason Baron <jbaron@...mai.com>,
        "davem@...emloft.net" <davem@...emloft.net>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "Ariel.Elior@...gic.com" <Ariel.Elior@...gic.com>
Subject: RE: [PATCH net-next 2/2] bnx2x: allocate mac filtering pending list
 in PAGE_SIZE increments

> >> Currently, we can have high order page allocations that specify
> >> GFP_ATOMIC when configuring multicast MAC address filters.
> >>
> >> For example, we have seen order 2 page allocation failures with
> >> ~500 multicast addresses configured.
> >>
> >> Convert the allocation for the pending list to be done in PAGE_SIZE
> >> increments.
> >>
> >> Signed-off-by: Jason Baron <jbaron@...mai.com>
> >
> > While I appreciate the effort, I wonder whether it's worth it:
> >
> > - The hardware [even in its newer generation] provides an
> > approximation-based classification [i.e., hashed] with 256 bins.
> > When configuring 500 multicast addresses, one can argue the difference
> > between multicast-promisc mode and actual configuration is
> > insignificant.
> 
> With 256 bins, I think it takes close to 256*lg(256), or 2,048, multicast
> addresses before we'd expect every bin to contain at least one hash,
> assuming a uniform distribution of the hashes.
> 
> > Perhaps the easier-to-maintain alternative would simply be to
> > determine the maximal number of multicast addresses that can be
> > configured using a single PAGE, and if in need of more than that
> > simply move into multicast-promisc.
> >
> 
> sizeof(struct bnx2x_mcast_list_elem) = 24, so there are 170 per page on x86.
> If we want to fit 2,048 elements, we need about 12 pages.

That's not exactly what I meant - let's assume you'd have problems
allocating more than a single PAGE. According to your calculation,
that means you're already using more than 170 multicast addresses.
I didn't bother solving the combinatorics question of how many bins
you'd occupy on average with 170 filters spread over only 256 bins,
but it would be a significant portion.
The question I raised was whether, under such circumstances, it
actually makes a difference whether the device filters those
multicast addresses or is completely multicast-promiscuous -
i.e., whether it's meaningful to keep filtering multicast ingress
traffic when you're already letting roughly 1/2 of all random
multicast packets be classified to the interface.
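
To put a rough number on that - just a quick userspace sketch, not
driver code, and it assumes the hash spreads the addresses uniformly
over the 256 bins:

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* Expected fraction of the 256 bins left non-empty after N
	 * multicast addresses hash uniformly into them: 1 - (255/256)^N.
	 */
	int n = 170;
	double frac = 1.0 - pow(255.0 / 256.0, n);

	printf("%d filters -> ~%.0f%% of the bins occupied\n",
	       n, frac * 100.0);
	return 0;
}

That prints roughly 49%, which is where the "1/2" above comes from.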

But again, given that you've actually taken the trouble of solving
this, I guess this question is mostly theoretical. We HAVE a better
solution now [a.k.a., yours ;-) ]

> I think it would be easy to add a check to bnx2x_set_rx_mode_inner() to enforce
> some maximum number of elements (perhaps 2,048 based on the above math)
> for the !CHIP_IS_E1() case on top of what I already posted.

The benefit would have been that we could have dropped your
solution entirely by limiting the driver to use at most the number
of filters that fit in a single page.
I don't think it would serve any purpose to take your change and,
in addition, choose some combinatorics-based upper limit.
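
To make that alternative concrete - again a standalone sketch rather
than actual driver code; the struct below only mimics the 24-byte
element size you quoted, and the names are made up:

#include <stdio.h>

/* Stand-in for struct bnx2x_mcast_list_elem: a 16-byte list_head plus
 * an 8-byte MAC pointer on x86_64, i.e. 24 bytes. */
struct mcast_elem_model {
	void *next, *prev;
	unsigned char *mac;
};

int main(void)
{
	unsigned long page_size = 4096;
	unsigned long per_page = page_size / sizeof(struct mcast_elem_model);
	unsigned long mc_count = 500;	/* the count from your patch description */

	printf("exact filters that fit in one page: %lu\n", per_page);

	if (mc_count > per_page)
		printf("%lu addresses -> fall back to multicast-promisc\n",
		       mc_count);
	else
		printf("%lu addresses -> configure exact filters\n",
		       mc_count);
	return 0;
}

I.e., anything past ~170 addresses would simply flip the device into
multicast-promiscuous mode instead of growing the pending list.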


