Message-ID: <58FE5368.1000201@gmail.com>
Date: Mon, 24 Apr 2017 12:35:04 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: jeffrey.t.kirsher@...el.com, netdev@...r.kernel.org
Subject: Re: [PATCH 2/2] ixgbe: add support for XDP_TX action
On 17-04-24 12:29 PM, Jesper Dangaard Brouer wrote:
> On Sun, 23 Apr 2017 18:31:36 -0700
> John Fastabend <john.fastabend@...il.com> wrote:
>
>> +static int ixgbe_xmit_xdp_ring(struct ixgbe_adapter *adapter,
>> + struct xdp_buff *xdp)
>> +{
>> + struct ixgbe_ring *ring = adapter->xdp_ring[smp_processor_id()];
>
> I was about to question whether it is always true that the array size
> can match the number of CPUs in the system, but I can see later in
> ixgbe_xdp_setup() that you reject the XDP program if the system has
> more CPUs than MAX_XDP_QUEUES.
Yep.
[...]
>> +
>> + tx_buffer->next_to_watch = tx_desc;
>> + ring->next_to_use = i;
>> +
>> + writel(i, ring->tail);
>
> A tailptr write for every XDP_TX packet is not going to be fast, but
> you already mentioned that this is not optimal yet, so I guess you are
> aware.
>
There is another patch on Jeff's tree to only kick the tail ptr once per
receive path invocation.
https://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue.git/commit/?h=dev-queue&id=24043a662d11e048de903e12bf86059844c207e2
That patch brings packet rates up to near line rate @ 64 bytes.
Thanks,
John