Message-ID: <1767970154.119773.1432823923389.JavaMail.open-xchange@oxuslxltgw11.lxa.perfora.net>
Date:	Thu, 28 May 2015 10:38:43 -0400 (EDT)
From:	"jsullivan@...nsourcedevel.com" <jsullivan@...nsourcedevel.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
Cc:	netdev@...r.kernel.org
Subject: Re: Drops in qdisc on ifb interface


> On May 25, 2015 at 6:31 PM Eric Dumazet <eric.dumazet@...il.com> wrote:
>
>
> On Mon, 2015-05-25 at 16:05 -0400, John A. Sullivan III wrote:
> > Hello, all. On one of our connections we are doing intensive traffic
> > shaping with tc. We are using ifb interfaces for shaping ingress
> > traffic and we also use ifb interfaces for egress so that we can apply
> > the same set of rules to multiple interfaces (e.g., tun and eth
> > interfaces operating on the same physical interface).
> >
> > These are running on very powerful gateways; I have watched them
> > handling 16 Gbps with CPU utilization at a handful of percent. Yet, I
> > am seeing drops on the ifb interfaces when I do a tc -s qdisc show.
> >
> > Why would this be? I would expect that, if there were some kind of problem,
> > it would manifest as drops on the physical interfaces and not the IFB
> > interface. We have played with queue lengths in both directions. We
> > are using HFSC with SFQ leaves so I would imagine this overrides the
> > very short qlen on the IFB interfaces (32). These are drops and not
> > overlimits.
>
> IFB is single threaded and a serious bottleneck.
>
> Don't use this on egress; it destroys multiqueue capability.
>
> And SFQ is pretty limited (127 packets)
>
> You might try to change your NIC to have a single queue for RX,
> so that you have a single cpu feeding your IFB queue.
>
> (ethtool -L eth0 rx 1)
>
This has been an interesting exercise - thank you for your help along the way,
Eric.  IFB did not seem to be a bottleneck in our initial testing, but that test
really had only one flow of traffic, at around 1 Gbps.  However, on a non-test
system with many different flows, IFB does seem to be a serious bottleneck - I
assume this is a consequence of being single-threaded.
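
For reference, the ingress side described above is the usual mirred redirect
into an IFB device with an HFSC root and SFQ leaves.  Roughly like this, with
interface names, rates, and class IDs simplified for illustration rather than
our exact production rules:

    # create and bring up the IFB device
    ip link add ifb0 type ifb
    ip link set dev ifb0 up

    # redirect all ingress traffic from the physical NIC to ifb0
    tc qdisc add dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
        action mirred egress redirect dev ifb0

    # shape on ifb0: HFSC root with an SFQ leaf on the default class
    tc qdisc add dev ifb0 root handle 1: hfsc default 10
    tc class add dev ifb0 parent 1: classid 1:1 hfsc sc rate 1gbit ul rate 1gbit
    tc class add dev ifb0 parent 1:1 classid 1:10 hfsc sc rate 500mbit ul rate 1gbit
    tc qdisc add dev ifb0 parent 1:10 handle 10: sfq perturb 10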

Single queue did not seem to help.
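
What I tried and what I am watching, roughly (device names illustrative):

    # check how many RX queues the NIC exposes, then drop to one as suggested
    ethtool -l eth0
    ethtool -L eth0 rx 1

    # watch the drop and overlimit counters on the IFB qdiscs
    tc -s qdisc show dev ifb0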

Am I correct to assume that IFB would be as much of a bottleneck on the ingress
side as it is on the egress side? If so, is there any way to do high-performance
ingress traffic shaping on Linux - a multi-threaded version of IFB or a
different approach? Thanks - John
