Message-ID: <3976e7a9-b6a2-450c-a891-483644ee88ba@linux.ibm.com>
Date: Wed, 17 Jul 2024 09:59:12 +0200
From: Christian Borntraeger <borntraeger@...ux.ibm.com>
To: Janosch Frank <frankja@...ux.ibm.com>, Yu Zhao <yuzhao@...gle.com>
Cc: oe-lkp@...ts.linux.dev, lkp@...el.com,
Linux Memory Management List <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Muchun Song <muchun.song@...ux.dev>,
David Hildenbrand <david@...hat.com>,
Frank van der Linden <fvdl@...gle.com>,
Matthew Wilcox
<willy@...radead.org>, Peter Xu <peterx@...hat.com>,
Yang Shi <yang@...amperecomputing.com>, linux-kernel@...r.kernel.org,
ying.huang@...el.com, feng.tang@...el.com, fengwei.yin@...el.com,
Claudio Imbrenda <imbrenda@...ux.ibm.com>,
Marc Hartmayer <mhartmay@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>
Subject: Re: [linux-next:master] [mm/hugetlb_vmemmap] 875fa64577:
vm-scalability.throughput -34.3% regression

Am 17.07.24 um 09:52 schrieb Janosch Frank:
> On 7/9/24 07:11, kernel test robot wrote:
>> Hello,
>>
>> kernel test robot noticed a -34.3% regression of vm-scalability.throughput on:
>>
>>
>> commit: 875fa64577da9bc8e9963ee14fef8433f20653e7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
>> https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
>>
>> [still regression on linux-next/master 0b58e108042b0ed28a71cd7edf5175999955b233]
>>
> This has hit s390 huge page backed KVM guests as well.
> Our simple start/stop test case went from ~5 to over 50 seconds of runtime.

Could this be caused by one of the synchronize_rcu() calls? The patch adds a lot of them, and on s390 with HZ=100 they are really expensive.
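
To put rough numbers on it: with HZ=100 a single synchronize_rcu() can easily take tens of milliseconds, so paying one grace period per page adds up quickly. Purely as an illustration (this is not the code touched by that commit; unpublish_page() and reuse_page() are made-up stand-ins for whatever the real per-page steps are), the difference between a per-page grace period and one batched grace period looks roughly like this:

#include <linux/list.h>
#include <linux/mm_types.h>
#include <linux/rcupdate.h>

/*
 * Costly pattern: one RCU grace period per page.  With HZ=100 each
 * synchronize_rcu() can take tens of milliseconds, so the total cost
 * scales linearly with the number of pages.
 */
static void remap_pages_per_page(struct list_head *pages)
{
	struct page *page;

	list_for_each_entry(page, pages, lru) {
		unpublish_page(page);	/* make the old mapping unreachable */
		synchronize_rcu();	/* wait out speculative PFN walkers */
		reuse_page(page);	/* now safe to repurpose the memory */
	}
}

/*
 * Amortized pattern: unpublish everything first, then wait for a single
 * grace period that covers the whole batch before reusing any page.
 */
static void remap_pages_batched(struct list_head *pages)
{
	struct page *page;

	list_for_each_entry(page, pages, lru)
		unpublish_page(page);

	synchronize_rcu();	/* one grace period for the whole batch */

	list_for_each_entry(page, pages, lru)
		reuse_page(page);
}

Whether something along those lines is feasible here depends on how the remap/restore paths are actually structured, of course; the sketch is only meant to show where the time could be going.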