Date: Wed, 17 Jan 2024 10:26:25 +0100
From: Antoine Tenart <atenart@...nel.org>
To: Jenishkumar Maheshbhai Patel <jpatel2@...vell.com>, davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org, linux-kernel@...r.kernel.org, linux@...linux.org.uk, marcin.s.wojtas@...il.com, netdev@...r.kernel.org, pabeni@...hat.com
Cc: Jenishkumar Maheshbhai Patel <jpatel2@...vell.com>
Subject: Re: [net v2 PATCH 1/1] net: mvpp2: clear BM pool before initialization

Hello,

Quoting Jenishkumar Maheshbhai Patel (2024-01-17 07:23:10)
> +/* Cleanup pool before actual initialization in the OS */
> +static void mvpp2_bm_pool_cleanup(struct mvpp2 *priv, int pool_id)
> +{
> +       u32 val;
> +       int i;

Please add an empty line here. (You might as well add some below to
improve readability).

> +       /* Drain the BM from all possible residues left by firmware */
> +       for (i = 0; i < MVPP2_BM_POOL_SIZE_MAX; i++)
> +               mvpp2_read(priv, MVPP2_BM_PHY_ALLOC_REG(pool_id));

Not sure about the above, but I don't have the datasheet. It looks
like MVPP2_BM_PHY_ALLOC_REG contains the buffer DMA address and is
read multiple times in a loop. Also, the driver's comments say:

"""
- global registers that must be accessed through a specific thread
  window, because they are related to an access to a per-thread
  register

  MVPP2_BM_PHY_ALLOC_REG    (related to MVPP2_BM_VIRT_ALLOC_REG)
"""

If that's intended, maybe add a comment about what this does and why
mvpp2_thread_read isn't used?
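For illustration only (I don't have the datasheet, and the thread to
use here is an assumption), the drain loop going through the
thread-window accessor could look something like:

```c
/* Hypothetical sketch: drain leftover buffers via the thread window,
 * as the driver comment above MVPP2_BM_PHY_ALLOC_REG suggests.
 * Using thread 0 is an assumption on my side.
 */
for (i = 0; i < MVPP2_BM_POOL_SIZE_MAX; i++)
	mvpp2_thread_read(priv, 0, MVPP2_BM_PHY_ALLOC_REG(pool_id));
```

If the plain mvpp2_read is correct here, a comment explaining why the
per-thread window doesn't apply in this path would help future readers.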

> +       /* Stop the BM pool */
> +       val = mvpp2_read(priv, MVPP2_BM_POOL_CTRL_REG(pool_id));
> +       val |= MVPP2_BM_STOP_MASK;
> +       mvpp2_write(priv, MVPP2_BM_POOL_CTRL_REG(pool_id), val);
> +       /* Mask BM all interrupts */
> +       mvpp2_write(priv, MVPP2_BM_INTR_MASK_REG(pool_id), 0);
> +       /* Clear BM cause register */
> +       mvpp2_write(priv, MVPP2_BM_INTR_CAUSE_REG(pool_id), 0);
> +}
> +
>  static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
>  {
>         enum dma_data_direction dma_dir = DMA_FROM_DEVICE;
>         int i, err, poolnum = MVPP2_BM_POOLS_NUM;
>         struct mvpp2_port *port;
>  
> +       if (priv->percpu_pools)
> +               poolnum = mvpp2_get_nrxqs(priv) * 2;

Since poolnum is now set here, you can remove the one below in the same
function (not shown in the context).

> +
> +       /* Clean up the pool state in case it contains stale state */
> +       for (i = 0; i < poolnum; i++)
> +               mvpp2_bm_pool_cleanup(priv, i);
> +
>         if (priv->percpu_pools) {
>                 for (i = 0; i < priv->port_count; i++) {
>                         port = priv->port_list[i];

Thanks.
