Message-ID: <5cc06269-5a6e-4874-bf68-fa4790f22bc2@redhat.com>
Date: Tue, 28 Oct 2025 17:14:35 -0400
From: Luiz Capitulino <luizcap@...hat.com>
To: Heiko Carstens <hca@...ux.ibm.com>
Cc: Joao Martins <joao.m.martins@...cle.com>, osalvador@...e.de,
akpm@...ux-foundation.org, david@...hat.com, aneesh.kumar@...nel.org,
borntraeger@...ux.ibm.com, mike.kravetz@...cle.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-s390@...r.kernel.org, Vasily Gorbik <gor@...ux.ibm.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>
Subject: Re: [PATCH] mm: hugetlb: fix HVO crash on s390

On 2025-10-28 15:37, Heiko Carstens wrote:
> On Tue, Oct 28, 2025 at 01:15:57PM -0400, Luiz Capitulino wrote:
>>>> flush_tlb_all() however is the *closest* equivalent to this that's behind
>>>> an arch-generic API, i.e. flushing the kernel address space from all CPUs'
>>>> TLBs. IIUC, on x86 flush_tlb_kernel_range() switches to flush_tlb_all()
>>>> when given enough pages (these days, on modern AMDs it's even a single
>>>> instruction issued solely on the calling CPU).
>>>
>>> Note that flush_tlb_all() should be mapped to __tlb_flush_global()
>>> and not __tlb_flush_kernel() on s390.
>>
>> You're right.
>>
>>> However, if there is only a need to flush TLB entries for the complete(?)
>>> kernel address space, then I'd rather propose a new tlb_flush_kernel()
>>> instead of a big hammer. If I'm not mistaken, flush_tlb_kernel_range()
>>> exists precisely to avoid that. And if architectures can avoid a global
>>> flush of _all_ TLB entries, then that should be made possible.
>>
>> Should we do a v2 implementing your suggestion above for now and work on
>> the tlb_flush_kernel() idea as a follow-up improvement? At least we'd go
>> from crashing to merely flushing more than we should...
>
> That's of course fine. I guess for stable backports a small fix is the
> best way forward anyway.

Exactly. I'll also see if I can find time to explore your API improvement
suggestion (a couple of rough sketches below, for reference). I'll send v2
shortly.
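
For reference, the x86 behavior mentioned above looks roughly like this
(paraphrased from arch/x86/mm/tlb.c; names and details vary by kernel
version, so treat it as a sketch rather than verbatim source):

void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	/*
	 * Above a per-boot ceiling, a full flush on every CPU is
	 * considered cheaper than invalidating the range page by page.
	 */
	if (end == TLB_FLUSH_ALL ||
	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
		on_each_cpu(do_flush_tlb_all, NULL, 1);
	} else {
		/* ... otherwise flush only [start, end) on each CPU ... */
	}
}

So on x86 a large enough range already degenerates into a full flush anyway.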
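
And just to capture the follow-up idea: what I understood by tlb_flush_kernel()
is a generic helper that defaults to the big hammer unless an architecture can
do better. This is purely a hypothetical sketch; nothing like it exists today
and the placement and names are made up:

/* hypothetical generic fallback, e.g. somewhere in asm-generic */
#ifndef tlb_flush_kernel
static inline void tlb_flush_kernel(void)
{
	/* big hammer: flush all TLB entries on all CPUs */
	flush_tlb_all();
}
#endif

/* hypothetical s390 override */
#define tlb_flush_kernel tlb_flush_kernel
static inline void tlb_flush_kernel(void)
{
	/* flush only kernel address space TLB entries */
	__tlb_flush_kernel();
}

Then HVO could call tlb_flush_kernel() after remapping the vmemmap and each
architecture would get the cheapest kernel-wide flush it can offer. But that's
for later; v2 will stick to the small fix.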