Message-ID: <1489337398.28631.58.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Sun, 12 Mar 2017 09:49:58 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Saeed Mahameed <saeedm@....mellanox.co.il>
Cc: Alexander Duyck <alexander.duyck@...il.com>,
Eric Dumazet <edumazet@...gle.com>,
Alexander Duyck <alexander.h.duyck@...el.com>,
"David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Tariq Toukan <tariqt@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>,
Willem de Bruijn <willemb@...gle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Brenden Blanco <bblanco@...mgrid.com>,
Alexei Starovoitov <ast@...nel.org>
Subject: Re: [PATCH v3 net-next 08/14] mlx4: use order-0 pages for RX
On Sun, 2017-03-12 at 17:49 +0200, Saeed Mahameed wrote:
> On Sun, Mar 12, 2017 at 5:29 PM, Eric Dumazet <eric.dumazet@...il.com> wrote:
> > On Sun, 2017-03-12 at 07:57 -0700, Eric Dumazet wrote:
> >
> >> The problem is XDP TX:
> >>
> >> I do not see any guarantee that mlx4_en_recycle_tx_desc() runs while
> >> the RX NAPI is owned by the current cpu.
> >>
> >> Since TX completion uses a different NAPI, I really do not believe we
> >> can avoid an atomic operation, like a spinlock, to protect the list
> >> of pages (ring->page_cache).
> >
> > A quick fix for net-next would be :
> >
>
> Hi Eric, good catch.
>
> I don't think we need to complicate this with an expensive spinlock.
> We can simply fix it by not enabling interrupts on the XDP TX CQ (not
> arming this CQ at all), and instead handling XDP TX CQ completions
> from the RX NAPI context, in a serial (atomic) manner, before handling
> the RX completions themselves. This way no locking is required, since
> all page cache handling is done from the same context (RX NAPI).
>
> This is how we do it in mlx5, and it is the best approach
> (performance wise), since we delay handling of XDP TX CQ completions
> until we really need the space they hold (on new RX packets).
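To illustrate the kind of quick fix I was thinking of, here is a rough,
completely untested sketch of a spinlock around the page cache. The
structure and function names below are simplified stand-ins, not the
driver's exact ones, and spin_lock_init() would go in the ring setup
path:

#include <linux/spinlock.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>

#define MLX4_EN_CACHE_SIZE 128	/* stand-in for the driver's constant */

struct mlx4_en_page_cache {
	spinlock_t lock;	/* new: protects index and buf[] */
	u32 index;
	struct {
		struct page *page;
		dma_addr_t dma;
	} buf[MLX4_EN_CACHE_SIZE];
};

/* Producer: TX completion path (may run from another cpu's NAPI). */
static bool mlx4_en_rx_recycle(struct mlx4_en_page_cache *cache,
			       struct page *page, dma_addr_t dma)
{
	bool recycled = false;

	spin_lock(&cache->lock);
	if (cache->index < MLX4_EN_CACHE_SIZE) {
		cache->buf[cache->index].page = page;
		cache->buf[cache->index].dma = dma;
		cache->index++;
		recycled = true;
	}
	spin_unlock(&cache->lock);
	return recycled;
}

/* Consumer: RX refill path (RX NAPI). */
static bool mlx4_en_page_cache_get(struct mlx4_en_page_cache *cache,
				   struct page **page, dma_addr_t *dma)
{
	bool hit = false;

	spin_lock(&cache->lock);
	if (cache->index > 0) {
		cache->index--;
		*page = cache->buf[cache->index].page;
		*dma = cache->buf[cache->index].dma;
		hit = true;
	}
	spin_unlock(&cache->lock);
	return hit;
}

Even uncontended, this adds an atomic operation per recycled page on the
hot path, so avoiding the lock entirely, as you describe, is nicer.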
SGTM, can you provide the patch for mlx4?
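Just to check that I am reading this right: you mean something like the
following at the top of the RX NAPI poll? Completely untested sketch,
modeled on mlx4_en_poll_rx_cq(); mlx4_en_poll_xdp_tx_cq() is a made-up
name for the drain of the never-armed XDP TX CQ:

/* Sketch only.  The XDP TX CQ is created with interrupts never armed;
 * instead it is drained here, in RX NAPI context, before RX completions
 * are processed.  The drain and the RX refill then run back to back on
 * the same cpu, so ring->page_cache needs no lock.
 */
int mlx4_en_poll_rx_cq(struct napi_struct *napi, int budget)
{
	struct mlx4_en_cq *cq = container_of(napi, struct mlx4_en_cq, napi);
	struct net_device *dev = cq->dev;
	struct mlx4_en_priv *priv = netdev_priv(dev);
	int done;

	/* Recycle pages from completed XDP TX descriptors first, so the
	 * RX refill below can reuse them.  mlx4_en_poll_xdp_tx_cq() is a
	 * placeholder for the unarmed XDP TX CQ drain.
	 */
	if (priv->tx_ring_num[TX_XDP])
		mlx4_en_poll_xdp_tx_cq(priv, cq->ring);

	done = mlx4_en_process_rx_cq(dev, cq, budget);

	/* ... existing napi_complete_done()/re-arm logic unchanged ... */
	return done;
}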
Thanks!