Message-ID: <Z_d_8fyQzGuwzbIv@localhost.localdomain>
Date: Thu, 10 Apr 2025 10:23:13 +0200
From: Oscar Salvador <osalvador@...e.de>
To: Gavin Shan <gshan@...hat.com>
Cc: Aditya Gupta <adityag@...ux.ibm.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Danilo Krummrich <dakr@...nel.org>,
David Hildenbrand <david@...hat.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Mahesh J Salgaonkar <mahesh@...ux.ibm.com>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Sourabh Jain <sourabhjain@...ux.ibm.com>,
linux-kernel@...r.kernel.org
Subject: Re: [REPORT] Softlockups on PowerNV with upstream
On Thu, Apr 10, 2025 at 03:35:19PM +1000, Gavin Shan wrote:
> Thanks, Oscar. You're correct that the overhead is introduced by for_each_present_section_nr().
> I already had the fix, working on IBM's Power9 machine, where the issue can be
> reproduced. Please see the attached patch.
>
> I'm running most of the tests for the fix on an ARM64 machine.
Looks good to me.
But we need a comment explaining why block_id is set to ULONG_MAX
at the beginning, as this might not be obvious.
Also, do we need
if (block_id != ULONG_MAX && memory_block_id(nr) == block_id) ?
Can it not just be
if (memory_block_id(nr) == block_id) ?
AFAICS, the first time we go through the loop, 'memory_block_id(nr) == block_id'
is effectively 'memory_block_id(nr) == ULONG_MAX', which evaluates to false, and
we set block_id afterwards.
Either way looks fine to me.
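Just to illustrate what I mean, here is a minimal, self-contained userspace
sketch of the loop shape with the simplified condition. This is not the actual
patch: memory_block_id(), SECTIONS_PER_BLOCK and the list of present sections
are mocked up purely for illustration.

/*
 * Standalone sketch (not kernel code) of the ULONG_MAX sentinel pattern:
 * skip sections belonging to the memory block we already handled.
 */
#include <stdio.h>
#include <limits.h>

#define SECTIONS_PER_BLOCK 16UL	/* hypothetical value for this sketch */

/* Mock of memory_block_id(): maps a section number to its memory block. */
static unsigned long memory_block_id(unsigned long section_nr)
{
	return section_nr / SECTIONS_PER_BLOCK;
}

int main(void)
{
	/* Pretend these are the present section numbers. */
	unsigned long present[] = { 0, 1, 2, 17, 18, 40, 41, 42 };
	/*
	 * ULONG_MAX means "no block created yet".  memory_block_id() can
	 * never return ULONG_MAX here, so the comparison below is false on
	 * the first iteration and the extra "block_id != ULONG_MAX" check
	 * is not needed.
	 */
	unsigned long block_id = ULONG_MAX;

	for (size_t i = 0; i < sizeof(present) / sizeof(present[0]); i++) {
		unsigned long nr = present[i];

		if (memory_block_id(nr) == block_id)
			continue;	/* block already handled, skip */

		block_id = memory_block_id(nr);
		printf("create memory block %lu (first section %lu)\n",
		       block_id, nr);
	}

	return 0;
}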
Another way I guess would be:
--
Oscar Salvador
SUSE Labs