Message-ID: <CAJ8uoz3kKqxReJxsT_asecnF==QwRfbVZ81mEoMDgyxSFHO8Kg@mail.gmail.com>
Date: Tue, 17 Nov 2020 20:36:54 +0100
From: Magnus Karlsson <magnus.karlsson@...il.com>
To: John Fastabend <john.fastabend@...il.com>
Cc: "Karlsson, Magnus" <magnus.karlsson@...el.com>,
Björn Töpel <bjorn.topel@...el.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Network Development <netdev@...r.kernel.org>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Jakub Kicinski <kuba@...nel.org>, bpf <bpf@...r.kernel.org>,
Jeff Kirsher <jeffrey.t.kirsher@...el.com>,
anthony.l.nguyen@...el.com,
"Fijalkowski, Maciej" <maciej.fijalkowski@...el.com>,
Maciej Fijalkowski <maciejromanfijalkowski@...il.com>,
intel-wired-lan <intel-wired-lan@...ts.osuosl.org>
Subject: Re: [PATCH bpf-next v3 4/5] xsk: introduce batched Tx descriptor interfaces
On Tue, Nov 17, 2020 at 8:07 PM John Fastabend <john.fastabend@...il.com> wrote:
>
> Magnus Karlsson wrote:
> > From: Magnus Karlsson <magnus.karlsson@...el.com>
> >
> > Introduce batched descriptor interfaces in the xsk core code for the
> > Tx path so that drivers can implement a higher-performance Tx code
> > path. This interface will be used by the i40e driver in the next
> > patch, though other drivers would likely benefit from it too.
> >
> > Note that batching is only implemented for the common case when
> > there is only one socket bound to the same device and queue id. When
> > this is not the case, we fall back to the old non-batched version of
> > the function.
> >
> > Signed-off-by: Magnus Karlsson <magnus.karlsson@...el.com>
> > ---
> > include/net/xdp_sock_drv.h | 7 ++++
> > net/xdp/xsk.c | 57 +++++++++++++++++++++++++++++
> > net/xdp/xsk_queue.h | 89 +++++++++++++++++++++++++++++++++++++++-------
> > 3 files changed, 140 insertions(+), 13 deletions(-)
> >
>
> Acked-by: John Fastabend <john.fastabend@...il.com>
>
> > +
> > +u32 xsk_tx_peek_release_desc_batch(struct xsk_buff_pool *pool, struct xdp_desc *descs,
> > + u32 max_entries)
> > +{
> > + struct xdp_sock *xs;
> > + u32 nb_pkts;
> > +
> > + rcu_read_lock();
> > + if (!list_is_singular(&pool->xsk_tx_list)) {
> > + /* Fallback to the non-batched version */
>
> I'm going to ask even though I believe it's correct.
>
> If we fall back here and then an entry is added to the list while we
> are in the fallback logic, everything should still be OK, correct?
Yes, the fallback function can handle all cases.
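
To spell out why (simplified sketch, not the exact code in the patch):
the fallback peeks one descriptor at a time, and the per-descriptor
helper walks pool->xsk_tx_list under the RCU read lock, so a socket
added while we are in the fallback is either seen by that walk or by
the next one. Both outcomes are fine.

static u32 xsk_tx_peek_release_fallback(struct xsk_buff_pool *pool,
					struct xdp_desc *descs,
					u32 max_entries)
{
	u32 nb_pkts = 0;

	/* xsk_tx_peek_desc() round-robins over the sockets on
	 * pool->xsk_tx_list with list_for_each_entry_rcu() and does
	 * its own per-descriptor cq reservation for backpressure.
	 */
	while (nb_pkts < max_entries &&
	       xsk_tx_peek_desc(pool, &descs[nb_pkts]))
		nb_pkts++;

	xsk_tx_release(pool);
	return nb_pkts;
}
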
> > + rcu_read_unlock();
> > + return xsk_tx_peek_release_fallback(pool, descs, max_entries);
> > + }
> > +
> > + xs = list_first_or_null_rcu(&pool->xsk_tx_list, struct xdp_sock, tx_list);
> > + if (!xs) {
> > + nb_pkts = 0;
> > + goto out;
> > + }
> > +
> > + nb_pkts = xskq_cons_peek_desc_batch(xs->tx, descs, pool, max_entries);
> > + if (!nb_pkts) {
> > + xs->tx->queue_empty_descs++;
> > + goto out;
> > + }
> > +
> > + /* This is the backpressure mechanism for the Tx path. Try to
> > + * reserve space in the completion queue for all packets, but
> > + * if there are fewer slots available, just process that many
> > + * packets. This avoids having to implement any buffering in
> > + * the Tx path.
> > + */
> > + nb_pkts = xskq_prod_reserve_addr_batch(pool->cq, descs, nb_pkts);
> > + if (!nb_pkts)
> > + goto out;
> > +
> > + xskq_cons_release_n(xs->tx, nb_pkts);
> > + __xskq_cons_release(xs->tx);
> > + xs->sk.sk_write_space(&xs->sk);
> > +
> > +out:
> > + rcu_read_unlock();
> > + return nb_pkts;
> > +}
> > +EXPORT_SYMBOL(xsk_tx_peek_release_desc_batch);
> > +
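
For reference, a driver-side zero-copy Tx path would consume the new
interface roughly as below. This is only a sketch of what the i40e
patch later in the series does; example_xmit_zc(), struct example_ring
and its xsk_descs array are made-up names, and the HW descriptor ring
handling is left out.

static void example_xmit_zc(struct example_ring *ring, u32 budget)
{
	struct xsk_buff_pool *pool = ring->xsk_pool;
	struct xdp_desc *descs = ring->xsk_descs;	/* budget entries */
	u32 nb_pkts, i;

	nb_pkts = xsk_tx_peek_release_desc_batch(pool, descs, budget);

	for (i = 0; i < nb_pkts; i++) {
		dma_addr_t dma = xsk_buff_raw_get_dma(pool, descs[i].addr);

		xsk_buff_raw_dma_sync_for_device(pool, dma, descs[i].len);
		/* write dma/descs[i].len into the HW Tx descriptor ring */
	}

	/* Bump the HW tail here; on Tx completion the driver calls
	 * xsk_tx_completed(pool, done) to fill the cq entries that
	 * xsk_tx_peek_release_desc_batch() reserved above.
	 */
}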