Date:   Tue, 29 Nov 2016 10:59:25 +0100
From:   Marcin Wojtas <mw@...ihalf.com>
To:     Gregory CLEMENT <gregory.clement@...e-electrons.com>
Cc:     "David S. Miller" <davem@...emloft.net>,
        linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
        Jisheng Zhang <jszhang@...vell.com>,
        Arnd Bergmann <arnd@...db.de>,
        Jason Cooper <jason@...edaemon.net>,
        Andrew Lunn <andrew@...n.ch>,
        Sebastian Hesselbarth <sebastian.hesselbarth@...il.com>,
        Thomas Petazzoni <thomas.petazzoni@...e-electrons.com>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        Nadav Haklai <nadavh@...vell.com>,
        Dmitri Epshtein <dima@...vell.com>,
        Yelena Krivosheev <yelena@...vell.com>
Subject: Re: [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to
 store the rx buffer virtual address

Hi Gregory,

Another remark below, sorry for the noise.

2016-11-29 10:37 GMT+01:00 Gregory CLEMENT <gregory.clement@...e-electrons.com>:
> Until now, the virtual address of the received buffer was stored in
> the cookie field of the rx descriptor. However, this field is only
> 32 bits wide, which prevents the driver from being used on 64-bit
> architectures.
>
> With this patch the virtual address is stored in an array that is not
> shared with the hardware (so there is no need to use the DMA API for
> it). Thanks to this, the accesses can be cached, unlike accesses to
> the rx descriptor fields.
>
> The change is done in the SWBM path only, because HWBM uses the
> cookie field; this also means that HWBM is currently not usable on
> 64-bit.
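
Just to illustrate the idea for the archives: below is a minimal
stand-alone sketch (made-up structures, not the real mvneta ones) of
why a 32-bit cookie cannot carry a 64-bit pointer and how a per-queue
CPU-side array avoids it.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for the driver structures -- names and layout are
 * illustrative only, not the real mvneta definitions. */
struct toy_rx_desc {
	uint32_t buf_phys_addr;
	uint32_t buf_cookie;		/* 32-bit field shared with the HW */
};

struct toy_rx_queue {
	struct toy_rx_desc *descs;
	void **buf_virt_addr;		/* driver-private, cacheable array */
	int size;
};

int main(void)
{
	struct toy_rx_queue rxq = { .size = 4 };
	void *buf = malloc(2048);
	struct toy_rx_desc *rx_desc;

	rxq.descs = calloc(rxq.size, sizeof(*rxq.descs));
	rxq.buf_virt_addr = calloc(rxq.size, sizeof(*rxq.buf_virt_addr));
	rx_desc = &rxq.descs[1];

	/* Old scheme: squeeze the pointer into the 32-bit cookie.  On a
	 * 64-bit machine the upper half of the address is lost. */
	rx_desc->buf_cookie = (uint32_t)(uintptr_t)buf;

	/* New scheme: derive the ring index from the descriptor pointer
	 * and keep the full pointer in a CPU-side array, as the patch
	 * does with rxq->buf_virt_addr[rx_desc - rxq->descs]. */
	rxq.buf_virt_addr[rx_desc - rxq.descs] = buf;

	printf("original: %p\n", buf);
	printf("cookie:   %p (possibly truncated)\n",
	       (void *)(uintptr_t)rx_desc->buf_cookie);
	printf("array:    %p\n", rxq.buf_virt_addr[rx_desc - rxq.descs]);

	free(buf);
	free(rxq.descs);
	free(rxq.buf_virt_addr);
	return 0;
}
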
>
> Signed-off-by: Gregory CLEMENT <gregory.clement@...e-electrons.com>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 93 ++++++++++++++++++++++++----
>  1 file changed, 81 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 1b84f746d748..32b142d0e44e 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -561,6 +561,9 @@ struct mvneta_rx_queue {
>         u32 pkts_coal;
>         u32 time_coal;
>
> +       /* Virtual address of the RX buffer */
> +       void  **buf_virt_addr;
> +
>         /* Virtual address of the RX DMA descriptors array */
>         struct mvneta_rx_desc *descs;
>
> @@ -1573,10 +1576,14 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp,
>
>  /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
>  static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
> -                               u32 phys_addr, u32 cookie)
> +                               u32 phys_addr, void *virt_addr,
> +                               struct mvneta_rx_queue *rxq)
>  {
> -       rx_desc->buf_cookie = cookie;
> +       int i;
> +
>         rx_desc->buf_phys_addr = phys_addr;
> +       i = rx_desc - rxq->descs;
> +       rxq->buf_virt_addr[i] = virt_addr;
>  }
>
>  /* Decrement sent descriptors counter */
> @@ -1781,7 +1788,8 @@ EXPORT_SYMBOL_GPL(mvneta_frag_free);
>
>  /* Refill processing for SW buffer management */
>  static int mvneta_rx_refill(struct mvneta_port *pp,
> -                           struct mvneta_rx_desc *rx_desc)
> +                           struct mvneta_rx_desc *rx_desc,
> +                           struct mvneta_rx_queue *rxq)
>
>  {
>         dma_addr_t phys_addr;
> @@ -1799,7 +1807,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>                 return -ENOMEM;
>         }
>
> -       mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
> +       mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
>         return 0;
>  }
>
> @@ -1861,7 +1869,12 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
>
>         for (i = 0; i < rxq->size; i++) {
>                 struct mvneta_rx_desc *rx_desc = rxq->descs + i;
> -               void *data = (void *)rx_desc->buf_cookie;
> +               void *data;
> +
> +               if (!pp->bm_priv)
> +                       data = rxq->buf_virt_addr[i];
> +               else
> +                       data = (void *)(uintptr_t)rx_desc->buf_cookie;

Dropping packets for HWBM (in fact, returning the dropped buffers to
the pool) is done a couple of lines above. This point will never be
reached with HWBM enabled (and it is also incorrect for the HWBM case).
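
To make the control flow explicit, here is a stand-alone toy model --
not the driver code, just the shape of it as I read the patch: with
HWBM the function hands the buffers back to the pool and returns
early, so the per-descriptor loop only runs for SWBM and could read
the array unconditionally.

#include <stdbool.h>
#include <stdlib.h>

/* Toy model of the drop path; structure and names are illustrative
 * only, not taken from the real mvneta_rxq_drop_pkts(). */
struct toy_rxq {
	void **buf_virt_addr;
	int size;
};

static void toy_drop_pkts(struct toy_rxq *rxq, bool hwbm)
{
	if (hwbm) {
		/* HWBM: dropped buffers are handed back to the hardware
		 * pool here and we return -- the loop below is never
		 * reached for this case. */
		return;
	}

	/* SWBM only: the virtual address always comes from the
	 * driver-side array, so no buf_cookie fallback is needed. */
	for (int i = 0; i < rxq->size; i++) {
		free(rxq->buf_virt_addr[i]);
		rxq->buf_virt_addr[i] = NULL;
	}
}

int main(void)
{
	struct toy_rxq rxq = { .size = 2 };

	rxq.buf_virt_addr = calloc(rxq.size, sizeof(void *));
	for (int i = 0; i < rxq.size; i++)
		rxq.buf_virt_addr[i] = malloc(2048);

	toy_drop_pkts(&rxq, false);	/* SWBM path frees the buffers */
	free(rxq.buf_virt_addr);
	return 0;
}
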

Best regards,
Marcin
