Message-ID: <CAF=yD-+1ukXZoBq4L__8yA9f-OAJGU4=1WrJd9nH+uvkOy5ZLw@mail.gmail.com>
Date: Thu, 1 Dec 2016 14:24:25 -0500
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Tom Herbert <tom@...bertland.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Willem de Bruijn <willemb@...gle.com>,
Rick Jones <rick.jones2@....com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>,
Saeed Mahameed <saeedm@...lanox.com>,
Tariq Toukan <tariqt@...lanox.com>,
Achiad Shochat <achiad@...lanox.com>
Subject: Re: [WIP] net+mlx4: auto doorbell

>>> > So we end up with one cpu doing the ndo_start_xmit() and TX completions,
>>> > and RX work.

This problem is somewhat tangential to the doorbell avoidance discussion.

>>> >
>>> > This problem is magnified when XPS is used, if one mono-threaded application deals with
>>> > thousands of TCP sockets.
>>> >
>> We have thousands of applications, and some of them 'kind of multicast'
>> events to a broad number of TCP sockets.
>>
>> Very often, application writers use a single thread for doing this,
>> when all they need is to send small packets to 10,000 sockets, and they
>> do not really care about doing this very fast. They also do not want to
>> hurt other applications sharing the NIC.
>>
>> Very often, the process scheduler will also run this single thread on a
>> single cpu, i.e. avoiding expensive migrations if they are not needed.
>>
>> Problem is this behavior locks one TX queue for the duration of the
>> multicast, since XPS will force all the TX packets to go to one TX
>> queue.
>>
> The fact that XPS is forcing TX packets to go over one CPU is a result
> of how you've configured XPS. There are other potential
> configurations that present different tradeoffs.

Right, XPS supports multiple txqueue mappings, using skb_tx_hash
to decide among them. Unfortunately, handling tx and completion on
different cpus is more expensive than tx + completion on the same core,
so this is suboptimal for the common case where there is no excessive
load imbalance.
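
To make the tradeoff concrete, here is a rough, illustrative sketch of
the idea (not the in-tree get_xps_queue() code; the struct and helper
names are made up): when XPS maps several tx queues to the sending cpu,
one is picked by flow hash, so a flow sticks to a single queue while
different flows from the same cpu can spread across the mapped set.

/* Illustrative only: pick one of the tx queues that XPS maps to the
 * sending cpu, keyed by flow hash so a given flow sticks to one queue.
 */
struct cpu_txq_map {
        unsigned int len;       /* number of tx queues mapped to this cpu */
        unsigned int queues[];  /* queue indices */
};

static unsigned int pick_mapped_txq(const struct cpu_txq_map *map,
                                    u32 flow_hash)
{
        if (map->len == 1)
                return map->queues[0];

        /* scale the 32-bit hash into [0, len), reciprocal_scale() style */
        return map->queues[((u64)flow_hash * map->len) >> 32];
}

With more than one queue mapped per cpu, the 10,000-socket sender above
would spread by flow hash, but at the cost of the cross-cpu completion
overhead mentioned.
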
> For instance, TX
> queues are plentiful nowadays to the point that we could map a number
> of queues to each CPU while respecting NUMA locality between the
> sending CPU and where TX completions occur. If the set is sufficiently
> large this would also help to address device lock contention. The
> other thing I'm wondering is if Willem's concept of distributing DOS
> attacks in RPS might be applicable in XPS. If we could detect that a
> TX queue is "under attack" maybe we can automatically back off to
> distributing the load to more queues in XPS somehow.

If only targeting states of imbalance, that indeed could work. For the
10,000 socket case, instead of load balancing qdisc servicing, we
could perhaps modify tx queue selection in __netdev_pick_tx to
choose another queue if the initial choice is paused or otherwise
backlogged.
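
Roughly along these (untested, illustrative) lines, reusing
netif_xmit_stopped() as the "paused" signal and falling back to plain
skb_tx_hash(); the helper name and the fallback policy are made up here:

/* Untested sketch: keep the normal XPS choice unless that queue is
 * stopped (ring full / BQL limit), in which case spread by flow hash
 * so one busy queue does not stall a single-threaded sender.
 */
static u16 pick_txq_with_fallback(struct net_device *dev,
                                  struct sk_buff *skb, u16 xps_choice)
{
        struct netdev_queue *txq = netdev_get_tx_queue(dev, xps_choice);

        if (likely(!netif_xmit_stopped(txq)))
                return xps_choice;

        /* initial choice is backlogged: pick another queue by flow hash */
        return skb_tx_hash(dev, skb);
}

Detecting a backlogged qdisc (rather than a stopped ring) would need an
extra check, and flows get reordered whenever the fallback triggers, so
this only shows where such a hook would sit in __netdev_pick_tx.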