Date: Thu, 18 Jan 2024 08:12:31 +0000
From: "Jenishkumar Patel [C]" <jpatel2@...vell.com>
To: 'Antoine Tenart' <atenart@...nel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "edumazet@...gle.com" <edumazet@...gle.com>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux@...linux.org.uk" <linux@...linux.org.uk>,
        "marcin.s.wojtas@...il.com" <marcin.s.wojtas@...il.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "pabeni@...hat.com" <pabeni@...hat.com>
Subject: RE: [EXT] Re: [net v2 PATCH 1/1] net: mvpp2: clear BM pool before initialization



-----Original Message-----
From: Antoine Tenart <atenart@...nel.org> 
Sent: Wednesday, January 17, 2024 2:56 PM
To: Jenishkumar Patel [C] <jpatel2@...vell.com>; davem@...emloft.net; edumazet@...gle.com; kuba@...nel.org; linux-kernel@...r.kernel.org; linux@...linux.org.uk; marcin.s.wojtas@...il.com; netdev@...r.kernel.org; pabeni@...hat.com
Cc: Jenishkumar Patel [C] <jpatel2@...vell.com>
Subject: [EXT] Re: [net v2 PATCH 1/1] net: mvpp2: clear BM pool before initialization

Hello,

Quoting Jenishkumar Maheshbhai Patel (2024-01-17 07:23:10)
> +/* Cleanup pool before actual initialization in the OS */
> +static void mvpp2_bm_pool_cleanup(struct mvpp2 *priv, int pool_id)
> +{
> +       u32 val;
> +       int i;

Please add an empty line here. (You might as well add some below to improve readability).

I will address the comments in v3
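
For reference, the v3 layout will separate the declarations from the first statement (and add similar spacing below), roughly:

        u32 val;
        int i;

        /* Drain the BM from all possible residues left by firmware */
        for (i = 0; i < MVPP2_BM_POOL_SIZE_MAX; i++)
                mvpp2_read(priv, MVPP2_BM_PHY_ALLOC_REG(pool_id));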

> +       /* Drain the BM from all possible residues left by firmware */
> +       for (i = 0; i < MVPP2_BM_POOL_SIZE_MAX; i++)
> +               mvpp2_read(priv, MVPP2_BM_PHY_ALLOC_REG(pool_id));

Not sure about the above, but I don't have the datasheet. It looks like MVPP2_BM_PHY_ALLOC_REG contains the buffer DMA address and is read multiple times in a loop. Also, the driver's comments say:

"""
- global registers that must be accessed through a specific thread
  window, because they are related to an access to a per-thread
  register

  MVPP2_BM_PHY_ALLOC_REG    (related to MVPP2_BM_VIRT_ALLOC_REG)
"""

If that's intended, maybe add a comment about what this does and why mvpp2_thread_read isn't used?

I will address the comments in v3 and correct the API accordingly
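
Most likely the drain will go through the per-thread window, mirroring what mvpp2_bm_bufs_get_addrs() already does for this register pair. An untested sketch (whether the paired MVPP2_BM_VIRT_ALLOC_REG read is needed during cleanup still has to be checked against the datasheet):

        unsigned int thread = mvpp2_cpu_to_thread(priv, get_cpu());

        /* Drain the BM from all possible residues left by firmware,
         * using the same per-thread window the driver uses elsewhere
         * for MVPP2_BM_PHY_ALLOC_REG / MVPP2_BM_VIRT_ALLOC_REG.
         */
        for (i = 0; i < MVPP2_BM_POOL_SIZE_MAX; i++) {
                mvpp2_thread_read(priv, thread,
                                  MVPP2_BM_PHY_ALLOC_REG(pool_id));
                mvpp2_thread_read(priv, thread, MVPP2_BM_VIRT_ALLOC_REG);
        }

        put_cpu();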

> +       /* Stop the BM pool */
> +       val = mvpp2_read(priv, MVPP2_BM_POOL_CTRL_REG(pool_id));
> +       val |= MVPP2_BM_STOP_MASK;
> +       mvpp2_write(priv, MVPP2_BM_POOL_CTRL_REG(pool_id), val);
> +       /* Mask BM all interrupts */
> +       mvpp2_write(priv, MVPP2_BM_INTR_MASK_REG(pool_id), 0);
> +       /* Clear BM cause register */
> +       mvpp2_write(priv, MVPP2_BM_INTR_CAUSE_REG(pool_id), 0);
> +}
> +
>  static int mvpp2_bm_init(struct device *dev, struct mvpp2 *priv)
>  {
>         enum dma_data_direction dma_dir = DMA_FROM_DEVICE;
>         int i, err, poolnum = MVPP2_BM_POOLS_NUM;
>         struct mvpp2_port *port;
>  
> +       if (priv->percpu_pools)
> +               poolnum = mvpp2_get_nrxqs(priv) * 2;

Since poolnum is now set here, you can remove the later assignment in the same function (not shown in the context).

I will address the comments in v3
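
With poolnum computed once at the top, the later percpu assignment becomes redundant, so v3 will drop it. A rough sketch of the resulting flow (not the actual v3 diff):

        int i, err, poolnum = MVPP2_BM_POOLS_NUM;
        struct mvpp2_port *port;

        if (priv->percpu_pools)
                poolnum = mvpp2_get_nrxqs(priv) * 2;

        /* Clean up the pool state in case it contains stale state */
        for (i = 0; i < poolnum; i++)
                mvpp2_bm_pool_cleanup(priv, i);

with the duplicate assignment inside the if (priv->percpu_pools) branch below removed.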
> +
> +       /* Clean up the pool state in case it contains stale state */
> +       for (i = 0; i < poolnum; i++)
> +               mvpp2_bm_pool_cleanup(priv, i);
> +
>         if (priv->percpu_pools) {
>                 for (i = 0; i < priv->port_count; i++) {
>                         port = priv->port_list[i];

Thanks.
