Message-ID: <521945576.128820.1432830523270.JavaMail.open-xchange@oxuslxltgw11.lxa.perfora.net>
Date: Thu, 28 May 2015 12:28:43 -0400 (EDT)
From: "jsullivan@...nsourcedevel.com" <jsullivan@...nsourcedevel.com>
To: John Fastabend <john.fastabend@...il.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Subject: Re: Drops in qdisc on ifb interface
> On May 28, 2015 at 11:45 AM John Fastabend <john.fastabend@...il.com> wrote:
>
>
> On 05/28/2015 08:30 AM, jsullivan@...nsourcedevel.com wrote:
> >
> >> On May 28, 2015 at 11:14 AM Eric Dumazet <eric.dumazet@...il.com> wrote:
> >>
> >>
> >> On Thu, 2015-05-28 at 10:38 -0400, jsullivan@...nsourcedevel.com wrote:
> >>
> > <snip>
> >> IFB still has a long way to go before it is efficient.
> >>
> >> In the meantime, you could play with the following patch and
> >> set /sys/class/net/eth0/gro_timeout to 20000.
> >>
> >> This way, GRO aggregation will work even at 1Gbps, and your IFB will
> >> get big GRO packets instead of single-MSS segments.
> >>
> >> Both IFB and the IP/TCP stack will have less work to do,
> >> and the receiver will send fewer ACK packets as well.
> >>
> >> diff --git a/drivers/net/ethernet/intel/igb/igb_main.c
> >> b/drivers/net/ethernet/intel/igb/igb_main.c
> >> index
> >> f287186192bb655ba2dc1a205fb251351d593e98..c37f6657c047d3eb9bd72b647572edd53b1881ac
> >> 100644
> >> --- a/drivers/net/ethernet/intel/igb/igb_main.c
> >> +++ b/drivers/net/ethernet/intel/igb/igb_main.c
> >> @@ -151,7 +151,7 @@ static void igb_setup_dca(struct igb_adapter *);
> >> #endif /* CONFIG_IGB_DCA */
> > <snip>
> >
> > Interesting, but this is destined to become a critical production system
> > for a high-profile, internationally recognized product, so I am hesitant to
> > patch. I doubt I can convince my company to do it, but is improving IFB the
> > sort of development effort that could be sponsored and then executed in a
> > moderately short period of time? Thanks - John
> > --
>
> If you're experimenting, one thing you could do is create many ifb devices
> and load balance across them from tc. I'm not sure whether this would be
> practical in your setup, but it might be worth trying.
>
> One thing I've been debating adding is the ability to match on the current
> cpu_id in tc, which would allow you to load balance by cpu. I could send you
> a patch if you wanted to test it. I would expect this to help somewhat with
> the 'single queue' issue, but sorry, I haven't had time yet to test it out
> myself.
>
> .John
>
> --
> John Fastabend Intel Corporation

In the meantime, I've noticed something strange. When testing traffic between
the two primary gateways, and thus with identical traffic flows, I see the
bottleneck on the gateway that uses two bonded GbE igb interfaces but not on
the one that uses two bonded 10 GbE ixgbe interfaces. The ethtool -k settings
(gso, gro, lro, etc.) are identical, and the kernels are identical. The ring
buffers are larger on the ixgbe cards, but I would not expect that to matter
here. The gateway hardware is identical and is not working hard at all: no CPU
or RAM pressure. Any idea why one bottlenecks and the other does not?
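
For reference, this is how I have been comparing the two gateways. The device
names below are just the ones from the igb box; the ixgbe box gets the same
commands with its own interface names, and ifb0 stands in for whichever IFB
the ingress traffic is redirected to:

# offload settings and ring sizes on the physical interface
ethtool -k eth0
ethtool -g eth0

# driver-level drop counters
ethtool -S eth0 | grep -i drop

# qdisc-level statistics (including drops) on the IFB
tc -s qdisc show dev ifb0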

Returning to your idea, John, how would I load balance? I assume I would need
to attach several filters to the physical interface, each one redirecting a
share of the ingress traffic to a different IFB device.
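
Something like the following is what I am picturing, purely as a sketch. It is
untested, and eth0, the pair of IFB devices, and the split on the low bit of
the IPv4 source address are just placeholders for whatever interfaces and
split key would actually make sense here:

# load the ifb module (it creates ifb0 and ifb1 by default) and bring them up
modprobe ifb
ip link set ifb0 up
ip link set ifb1 up

# ingress qdisc on the physical interface
tc qdisc add dev eth0 handle ffff: ingress

# sources with the low address bit set go to ifb0 ...
tc filter add dev eth0 parent ffff: protocol ip prio 1 \
    u32 match u32 0x00000001 0x00000001 at 12 \
    action mirred egress redirect dev ifb0

# ... and everything else goes to ifb1
tc filter add dev eth0 parent ffff: protocol ip prio 2 \
    u32 match u32 0x00000000 0x00000000 at 12 \
    action mirred egress redirect dev ifb1

Each IFB would then carry its own copy of the shaping configuration.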
However, couldn't this kind of split work against the traffic shaping? Let's
take an extreme example: all the time-sensitive ingress packets find their way
onto ifb0 and all the bulk ingress packets find their way onto ifb1. As these
packets are merged back to the physical interface, won't they simply be
treated in pfifo_fast (or whatever other qdisc the physical interface has)
order? Thanks - John