Message-ID: <CAKgT0UeMCpjq60ULZ7H8UEw+-hkB2nqBnroDQc8GGu56CY-83Q@mail.gmail.com>
Date: Thu, 25 Jan 2018 07:07:27 -0800
From: Alexander Duyck <alexander.duyck@...il.com>
To: Peter Manev <petermanev@...il.com>
Cc: John Fastabend <john.fastabend@...il.com>,
David Miller <davem@...emloft.net>,
Eric Leblond <eric@...it.org>, Netdev <netdev@...r.kernel.org>,
xdp-newbies@...r.kernel.org,
Emil Tantilov <emil.s.tantilov@...el.com>
Subject: Re: ixgbe tuning reset when XDP is setup
On Thu, Jan 25, 2018 at 5:09 AM, Peter Manev <petermanev@...il.com> wrote:
> On Fri, Dec 15, 2017 at 5:56 PM, Peter Manev <petermanev@...il.com> wrote:
>>
>>> On 15 Dec 2017, at 17:51, Alexander Duyck <alexander.duyck@...il.com> wrote:
>>>
>>> On Fri, Dec 15, 2017 at 8:03 AM, John Fastabend
>>> <john.fastabend@...il.com> wrote:
>>>> On 12/15/2017 07:53 AM, David Miller wrote:
>>>>> From: Eric Leblond <eric@...it.org>
>>>>> Date: Fri, 15 Dec 2017 11:24:46 +0100
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> When using an ixgbe card with Suricata we are using the following
>>>>>> commands to get a symmetric hash on RSS load balancing:
>>>>>>
>>>>>> ./set_irq_affinity 0-15 eth3
>>>>>> ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
>>>>>> ethtool -x eth3
>>>>>> ethtool -n eth3
>>>>>>
>>>>>> Then we start Suricata.
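>>>>>>
>>>>>> As a quick sanity check (nothing Suricata-specific, just the
>>>>>> standard procfs interface), the per-queue interrupt counters show
>>>>>> whether the spreading took effect:
>>>>>>
>>>>>> grep eth3 /proc/interrupts
>>>>>> # one line per queue vector; under traffic the counts should
>>>>>> # grow on different CPUs rather than piling up on the first one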
>>>>>>
>>>>>> In my current experiment with XDP, Suricata injects the eBPF
>>>>>> program when it starts. The consequence of that, when using an
>>>>>> ixgbe card, is that the load balancing gets reset and all
>>>>>> interrupts end up on the first core.
>>>>>
>>>>> This definitely should _not_ be a side effect of enabling XDP on a device.
>>>>>
>>>>
>>>> Agreed. CCing Emil and Alex: we should restore these settings
>>>> after the reconfiguration that is done to support a queue per core.
>>>>
>>>> .John
>>>
>>> So the interrupt configuration has to get reset since we have to
>>> assign two Tx queues for every Rx queue instead of the 1:1 mapping
>>> that was previously there. That is a natural consequence of
>>> rearranging the queues as currently happens. The issue is that the
>>> q_vectors themselves have to be reallocated. The only way to avoid
>>> that would be to always pre-allocate the Tx queues for XDP.
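>>>
>>> A rough way to observe this from userspace (illustrative commands,
>>> not driver code; substitute a real vector's IRQ number for <N>):
>>>
>>> grep eth3 /proc/interrupts          # note the IRQ numbers in use
>>> cat /proc/irq/<N>/smp_affinity_list # affinity set by the user
>>> # attach the XDP program, then repeat: the q_vectors are freed and
>>> # re-requested, so the affinity configured earlier is gone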
>>>
>>> Also, just to be clear, we are talking about the interrupts being
>>> reset, not the RSS key, right? I just want to make sure that is
>>> what we are discussing.
>>>
>>
>> Yes.
>> From the tests we did, I only observed the IRQs all being reset to
>> the first CPU after Suricata started.
>>
>>
>>
>>> Thanks.
>>>
>>> - Alex
>
> Hi,
>
> We were wondering if there is any follow-up or potential solution
> for this. If there is anything we could help test in that regard,
> please let us know.
>
> Thank you
>
> --
> Regards,
> Peter Manev
We don't have a solution available for this yet. Basically what it
comes down to is that we have to change the driver code so that it
assumes it is always going to need to allocate Tx rings for XDP, and
then disable the XDP feature if it can't. The current logic is to
advertise the XDP feature and then allocate the rings when XDP is
actually used; if that allocation fails, loading the XDP program
fails. Unfortunately I don't have an ETA for when we can get to
that. It may be a while; however, patches are always welcome.
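
In the meantime, a possible workaround (untested here, and assuming
the eth3 tuning from Eric's original mail) is to re-apply the
settings after the XDP program has been attached, since the reset
happens at setup time:

# after Suricata has loaded its XDP program:
./set_irq_affinity 0-15 eth3
# verify the RSS table, and re-apply the hkey/equal settings from
# the original tuning if it was also reset:
ethtool -x eth3
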
Thanks.
- Alex