Message-ID: <ZsUYRRaKLmM5S5K9@apollo.purestorage.com>
Date: Tue, 20 Aug 2024 15:27:17 -0700
From: Mohamed Khalfella <mkhalfella@...estorage.com>
To: Przemek Kitszel <przemyslaw.kitszel@...el.com>
Cc: Yuanyuan Zhong <yzhong@...estorage.com>,
Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>, Tariq Toukan <tariqt@...dia.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Shay Drori <shayd@...dia.com>, Moshe Shemesh <moshe@...dia.com>,
netdev@...r.kernel.org, linux-rdma@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] net/mlx5: Added cond_resched() to crdump collection

On 2024-08-20 12:09:37 +0200, Przemek Kitszel wrote:
> On 8/19/24 23:42, Mohamed Khalfella wrote:
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> > index d0b595ba6110..377cc39643b4 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/pci_vsc.c
> > @@ -191,6 +191,7 @@ static int mlx5_vsc_wait_on_flag(struct mlx5_core_dev *dev, u8 expected_val)
> > if ((retries & 0xf) == 0)
> > usleep_range(1000, 2000);
> >
> > + cond_resched();
>
> the sleeping logic above (including what is out of the git diff context)
> is a bit weird (a tight loop with a sleep after every 16 attempts, with
> an upper bound of 2k attempts!)
>
> My understanding of usleep_range() is that it puts the process to sleep
> (and even leads to a schedule() call).
> So cond_resched() looks redundant here.
This matches my understanding too. usleep_range() should put the thread
to sleep, effectively releasing the CPU to do other work. The reason I
put cond_resched() here is that pci_read_config_dword() might take a long
time when the card sees fatal errors. I was not able to reproduce this,
so I am okay with removing this cond_resched().
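
Just to spell out the loop shape being discussed, roughly (paraphrased
from the diff context above; vsc_read_flag() and max_retries are
placeholder names, not the real mlx5 identifiers):

        /* paraphrased sketch of the retry loop, not the upstream code */
        retries = 0;
        do {
                if (++retries > max_retries)            /* ~2k attempts cap */
                        return -EBUSY;

                ret = vsc_read_flag(dev, &flag);        /* PCI config read */
                if (ret)
                        return ret;

                /* every 16th attempt sleep 1-2ms; this already schedules */
                if ((retries & 0xf) == 0)
                        usleep_range(1000, 2000);
        } while (flag != expected_val);

        return 0;

So the task already goes to sleep once every 16 attempts, and an
unconditional cond_resched() right after the usleep_range() buys very
little.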
>
> > } while (flag != expected_val);
> >
> > return 0;
> > @@ -280,6 +281,7 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
> > return read_addr;
> >
> > read_addr = next_read_addr;
> > + cond_resched();
>
> Would be great to see in the commit message how many registers there
> are and how long it takes to dump them.
> My guess is that a single mlx5_vsc_gw_read_fast() call is very short and
> there are many of them. With that, cond_resched() should rather be under some
I did some testing on ConnectX-5 Ex MCX516A-CDAT and here is what I saw:
- mlx5_vsc_gw_read_block_fast() was called with length = 1310716
- mlx5_vsc_gw_read_fast() reads 4 bytes at a time, but it did not read
  the full 1310716 bytes. Instead it was called only 53813 times; there
  are jumps in read_addr.
- On average mlx5_vsc_gw_read_fast() took 35284.4ns per call.
- In total mlx5_vsc_wait_on_flag() called vsc_read() 54707 times with an
  average runtime of 17548.3ns per call. In some instances vsc_read() was
  called more than once before mlx5_vsc_wait_on_flag() returned: mostly
  once, but I saw 5, 8, and in one instance 16 calls. As expected, the
  thread released the CPU after 16 iterations.
- Total time to read the dump was 35284.4ns * 53813 ~= 1.898s
> if (iterator % XXX == 0) condition.
Putting a cond_resched() every 16 register reads, similar to the cadence
mlx5_vsc_wait_on_flag() uses for its sleep, should be okay. With the
numbers above, this results in a cond_resched() roughly every 0.56ms,
which is okay IMO.
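
Roughly what I have in mind (untested sketch; read_next_register() and
end_addr are placeholders, not the real mlx5_vsc_gw_read_block_fast()
internals, only the counter/cond_resched() cadence is the point):

        /* untested sketch of the cadence, not the real mlx5 loop */
        unsigned int reads = 0;

        while (read_addr < end_addr) {
                read_addr = read_next_register(dev, data, read_addr); /* ~35us */

                /* yield once every 16 reads, i.e. roughly every 0.56ms */
                if ((++reads & 0xf) == 0)
                        cond_resched();
        }

Using a power-of-two mask keeps the check cheap in the hot loop, the same
trick the wait loop already uses for its usleep_range().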