Message-ID: <CAJ0CqmV8OJoERhYktLNP7gYDwURs97JAmbsXq2jqKHhMoHk-pg@mail.gmail.com>
Date:   Fri, 25 Sep 2020 13:29:00 +0200
From:   Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Lorenzo Bianconi <lorenzo@...nel.org>,
        Network Development <netdev@...r.kernel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Eelco Chaudron <echaudro@...hat.com>,
        thomas.petazzoni@...tlin.com
Subject: Re: [PATCH net-next] net: mvneta: try to use in-irq pp cache in mvneta_txq_bufs_free

>
> On Fri, 25 Sep 2020 12:01:32 +0200
> Lorenzo Bianconi <lorenzo@...nel.org> wrote:
>
> > Try to recycle the xdp tx buffer into the in-irq page_pool cache if
> > mvneta_txq_bufs_free is executed in the NAPI context.
>
> NACK - I don't think this is safe.  That is also why I gave the
> function the rx_napi postfix.  The page pool->alloc.cache is associated
> with the driver's RX-queue.  The xdp_frames that get freed could be
> coming from a remote driver that uses page_pool.  This remote driver's
> RX-queue processing can run concurrently on a different CPU than this
> driver's TXQ cleanup.

Ack, right. What about doing it only for the XDP_TX use case? Something like:

if (napi && buf->type == MVNETA_TYPE_XDP_TX)
   xdp_return_frame_rx_napi(buf->xdpf);
else
   xdp_return_frame(buf->xdpf);

This way we are sure the frame is coming from the local page_pool.
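
For context, the hunk in mvneta_txq_bufs_free() would then look roughly
like this (untested, just to make the idea concrete):

		} else if (buf->type == MVNETA_TYPE_XDP_TX ||
			   buf->type == MVNETA_TYPE_XDP_NDO) {
			/* Only XDP_TX frames come from this port's own
			 * page_pool; XDP_NDO (redirected) frames may belong
			 * to a remote driver's pool, so always return those
			 * through the generic path.
			 */
			if (napi && buf->type == MVNETA_TYPE_XDP_TX)
				xdp_return_frame_rx_napi(buf->xdpf);
			else
				xdp_return_frame(buf->xdpf);
		}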

>
> If you want to speedup this, I instead suggest that you add a
> xdp_return_frame_bulk API.

I will look into it.
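
Something along these lines maybe? Just a rough sketch to check I got the
idea right - all names are made up and the real grouping by page_pool /
mem model still needs to be worked out:

#define XDP_FRAME_BULK_SIZE	16

struct xdp_frame_bulk {
	int count;
	struct xdp_frame *q[XDP_FRAME_BULK_SIZE];
};

static inline void xdp_frame_bulk_init(struct xdp_frame_bulk *bq)
{
	bq->count = 0;
}

/* Release the whole batch in one go. This sketch just falls back to the
 * existing per-frame helper; a real implementation could hand the array
 * to the page_pool directly to amortise the per-frame cost.
 */
static void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq)
{
	int i;

	for (i = 0; i < bq->count; i++)
		xdp_return_frame(bq->q[i]);
	bq->count = 0;
}

static void xdp_return_frame_bulk(struct xdp_frame *xdpf,
				  struct xdp_frame_bulk *bq)
{
	if (bq->count == XDP_FRAME_BULK_SIZE)
		xdp_flush_frame_bulk(bq);
	bq->q[bq->count++] = xdpf;
}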

Regards,
Lorenzo


>
>
> > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 11 +++++++----
> >  1 file changed, 7 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index 14df3aec285d..646fbf4ed638 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -1831,7 +1831,7 @@ static struct mvneta_tx_queue *mvneta_tx_done_policy(struct mvneta_port *pp,
> >  /* Free tx queue skbuffs */
> >  static void mvneta_txq_bufs_free(struct mvneta_port *pp,
> >                                struct mvneta_tx_queue *txq, int num,
> > -                              struct netdev_queue *nq)
> > +                              struct netdev_queue *nq, bool napi)
> >  {
> >       unsigned int bytes_compl = 0, pkts_compl = 0;
> >       int i;
> > @@ -1854,7 +1854,10 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
> >                       dev_kfree_skb_any(buf->skb);
> >               } else if (buf->type == MVNETA_TYPE_XDP_TX ||
> >                          buf->type == MVNETA_TYPE_XDP_NDO) {
> > -                     xdp_return_frame(buf->xdpf);
> > +                     if (napi)
> > +                             xdp_return_frame_rx_napi(buf->xdpf);
> > +                     else
> > +                             xdp_return_frame(buf->xdpf);
> >               }
> >       }
> >
> > @@ -1872,7 +1875,7 @@ static void mvneta_txq_done(struct mvneta_port *pp,
> >       if (!tx_done)
> >               return;
> >
> > -     mvneta_txq_bufs_free(pp, txq, tx_done, nq);
> > +     mvneta_txq_bufs_free(pp, txq, tx_done, nq, true);
> >
> >       txq->count -= tx_done;
> >
> > @@ -2859,7 +2862,7 @@ static void mvneta_txq_done_force(struct mvneta_port *pp,
> >       struct netdev_queue *nq = netdev_get_tx_queue(pp->dev, txq->id);
> >       int tx_done = txq->count;
> >
> > -     mvneta_txq_bufs_free(pp, txq, tx_done, nq);
> > +     mvneta_txq_bufs_free(pp, txq, tx_done, nq, false);
> >
> >       /* reset txq */
> >       txq->count = 0;
>
>
>
> --
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
>
