Message-ID: <f2f2a6bd-5cb4-46c9-a0f8-3240670094b5@amazon.com>
Date: Thu, 22 Jan 2026 18:04:51 +0000
From: Nikita Kalyazin <kalyazin@...zon.com>
To: Ackerley Tng <ackerleytng@...gle.com>, "Kalyazin, Nikita"
<kalyazin@...zon.co.uk>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, "kvmarm@...ts.linux.dev"
<kvmarm@...ts.linux.dev>, "linux-fsdevel@...r.kernel.org"
<linux-fsdevel@...r.kernel.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
"kernel@...0n.name" <kernel@...0n.name>, "linux-riscv@...ts.infradead.org"
<linux-riscv@...ts.infradead.org>, "linux-s390@...r.kernel.org"
<linux-s390@...r.kernel.org>, "loongarch@...ts.linux.dev"
<loongarch@...ts.linux.dev>
CC: "pbonzini@...hat.com" <pbonzini@...hat.com>, "corbet@....net"
<corbet@....net>, "maz@...nel.org" <maz@...nel.org>, "oupton@...nel.org"
<oupton@...nel.org>, "joey.gouly@....com" <joey.gouly@....com>,
"suzuki.poulose@....com" <suzuki.poulose@....com>, "yuzenghui@...wei.com"
<yuzenghui@...wei.com>, "catalin.marinas@....com" <catalin.marinas@....com>,
"will@...nel.org" <will@...nel.org>, "seanjc@...gle.com" <seanjc@...gle.com>,
"tglx@...utronix.de" <tglx@...utronix.de>, "mingo@...hat.com"
<mingo@...hat.com>, "bp@...en8.de" <bp@...en8.de>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>, "x86@...nel.org"
<x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>, "luto@...nel.org"
<luto@...nel.org>, "peterz@...radead.org" <peterz@...radead.org>,
"willy@...radead.org" <willy@...radead.org>, "akpm@...ux-foundation.org"
<akpm@...ux-foundation.org>, "david@...nel.org" <david@...nel.org>,
"lorenzo.stoakes@...cle.com" <lorenzo.stoakes@...cle.com>,
"Liam.Howlett@...cle.com" <Liam.Howlett@...cle.com>, "vbabka@...e.cz"
<vbabka@...e.cz>, "rppt@...nel.org" <rppt@...nel.org>, "surenb@...gle.com"
<surenb@...gle.com>, "mhocko@...e.com" <mhocko@...e.com>, "ast@...nel.org"
<ast@...nel.org>, "daniel@...earbox.net" <daniel@...earbox.net>,
"andrii@...nel.org" <andrii@...nel.org>, "martin.lau@...ux.dev"
<martin.lau@...ux.dev>, "eddyz87@...il.com" <eddyz87@...il.com>,
"song@...nel.org" <song@...nel.org>, "yonghong.song@...ux.dev"
<yonghong.song@...ux.dev>, "john.fastabend@...il.com"
<john.fastabend@...il.com>, "kpsingh@...nel.org" <kpsingh@...nel.org>,
"sdf@...ichev.me" <sdf@...ichev.me>, "haoluo@...gle.com" <haoluo@...gle.com>,
"jolsa@...nel.org" <jolsa@...nel.org>, "jgg@...pe.ca" <jgg@...pe.ca>,
"jhubbard@...dia.com" <jhubbard@...dia.com>, "peterx@...hat.com"
<peterx@...hat.com>, "jannh@...gle.com" <jannh@...gle.com>,
"pfalcato@...e.de" <pfalcato@...e.de>, "shuah@...nel.org" <shuah@...nel.org>,
"riel@...riel.com" <riel@...riel.com>, "ryan.roberts@....com"
<ryan.roberts@....com>, "jgross@...e.com" <jgross@...e.com>,
"yu-cheng.yu@...el.com" <yu-cheng.yu@...el.com>, "kas@...nel.org"
<kas@...nel.org>, "coxu@...hat.com" <coxu@...hat.com>,
"kevin.brodsky@....com" <kevin.brodsky@....com>, "maobibo@...ngson.cn"
<maobibo@...ngson.cn>, "prsampat@....com" <prsampat@....com>,
"mlevitsk@...hat.com" <mlevitsk@...hat.com>, "jmattson@...gle.com"
<jmattson@...gle.com>, "jthoughton@...gle.com" <jthoughton@...gle.com>,
"agordeev@...ux.ibm.com" <agordeev@...ux.ibm.com>, "alex@...ti.fr"
<alex@...ti.fr>, "aou@...s.berkeley.edu" <aou@...s.berkeley.edu>,
"borntraeger@...ux.ibm.com" <borntraeger@...ux.ibm.com>,
"chenhuacai@...nel.org" <chenhuacai@...nel.org>, "dev.jain@....com"
<dev.jain@....com>, "gor@...ux.ibm.com" <gor@...ux.ibm.com>,
"hca@...ux.ibm.com" <hca@...ux.ibm.com>, "Jonathan.Cameron@...wei.com"
<Jonathan.Cameron@...wei.com>, "palmer@...belt.com" <palmer@...belt.com>,
"pjw@...nel.org" <pjw@...nel.org>, "shijie@...amperecomputing.com"
<shijie@...amperecomputing.com>, "svens@...ux.ibm.com" <svens@...ux.ibm.com>,
"thuth@...hat.com" <thuth@...hat.com>, "wyihan@...gle.com"
<wyihan@...gle.com>, "yang@...amperecomputing.com"
<yang@...amperecomputing.com>, "vannapurve@...gle.com"
<vannapurve@...gle.com>, "jackmanb@...gle.com" <jackmanb@...gle.com>,
"aneesh.kumar@...nel.org" <aneesh.kumar@...nel.org>, "patrick.roy@...ux.dev"
<patrick.roy@...ux.dev>, "Thomson, Jack" <jackabt@...zon.co.uk>, "Itazuri,
Takahiro" <itazur@...zon.co.uk>, "Manwaring, Derek" <derekmn@...zon.com>,
"Cali, Marco" <xmarcalx@...zon.co.uk>
Subject: Re: [PATCH v9 07/13] KVM: guest_memfd: Add flag to remove from direct
map
On 22/01/2026 16:34, Ackerley Tng wrote:
> Nikita Kalyazin <kalyazin@...zon.com> writes:
>
> Was preparing the reply but couldn't get to it before the
> meeting. Here's what was also discussed at the guest_memfd biweekly on
> 2026-01-22:
>
>>
>> [...snip...]
>>
>>>> @@ -423,6 +464,12 @@ static vm_fault_t kvm_gmem_fault_user_mapping(struct vm_fault *vmf)
>>>> kvm_gmem_mark_prepared(folio);
>>>> }
>>>>
>>>> + err = kvm_gmem_folio_zap_direct_map(folio);
>>>
>>> Perhaps the check for gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP should
>>> be done here before making the call to kvm_gmem_folio_zap_direct_map()
>>> to make it more obvious that zapping is conditional.
>>
>> Makes sense to me.
>>
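To make it concrete, this is roughly what I'm planning for the fault
path (sketch only; gmem_flags stands for however the flags end up
being looked up in kvm_gmem_fault_user_mapping()):

	/*
	 * Only zap when the file was created with
	 * GUEST_MEMFD_FLAG_NO_DIRECT_MAP, so it is obvious at the call
	 * site that zapping is conditional.
	 */
	if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
		err = kvm_gmem_folio_zap_direct_map(folio);
		if (err) {
			ret = vmf_error(err);
			goto out_folio;
		}
	}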
>>>
>>> Perhaps also add a check for kvm_arch_gmem_supports_no_direct_map() so
>>> this call can be completely removed by the compiler if it wasn't
>>> compiled in.
>>
>> But if it is compiled in, we will be paying the cost of the call on
>> every page fault? E.g. on arm64, it will call the following:
>>
>> bool can_set_direct_map(void)
>> {
>> 	...
>>
>> 	return rodata_full || debug_pagealloc_enabled() ||
>> 	       arm64_kfence_can_set_direct_map() || is_realm_world();
>> }
>>
>
> You're right that this could end up paying the cost on every page
> fault. Please ignore this request!
>
>>>
>>> The kvm_gmem_folio_no_direct_map() check should probably remain in
>>> kvm_gmem_folio_zap_direct_map() since that's a "if already zapped, don't
>>> zap again" check.
>>>
>>>> + if (err) {
>>>> + ret = vmf_error(err);
>>>> + goto out_folio;
>>>> + }
>>>> +
>>>> vmf->page = folio_file_page(folio, vmf->pgoff);
>>>>
>>>> out_folio:
>>>> @@ -533,6 +580,8 @@ static void kvm_gmem_free_folio(struct folio *folio)
>>>> kvm_pfn_t pfn = page_to_pfn(page);
>>>> int order = folio_order(folio);
>>>>
>>>> + kvm_gmem_folio_restore_direct_map(folio);
>>>> +
>>>
>>> I can't decide if the kvm_gmem_folio_no_direct_map(folio) should be in
>>> the caller or within kvm_gmem_folio_restore_direct_map(), since this
>>> time it's a folio-specific property being checked.
>>
>> I'm tempted to keep it similar to the kvm_gmem_folio_zap_direct_map()
>> case. How does the fact it's a folio-specific property change your
>> reasoning?
>>
>
> This is good too:
>
> if (kvm_gmem_folio_no_direct_map(folio))
> kvm_gmem_folio_restore_direct_map(folio)

It turns out we can't do that because folio->mapping is already gone by
the time filemap_free_folio() is called, so we can't inspect the flags
there. Are you OK with only having this check when zapping (but not
when restoring)? Do you think we should add a comment saying it's
conditional here?
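
If a comment helps, I was thinking of something along these lines in
kvm_gmem_free_folio() (sketch only, assuming
kvm_gmem_folio_restore_direct_map() itself is safe to call on a folio
that was never zapped):

	/*
	 * folio->mapping is already NULL by the time
	 * filemap_free_folio() gets here, so the NO_DIRECT_MAP flag
	 * cannot be checked; restore unconditionally and rely on
	 * kvm_gmem_folio_restore_direct_map() coping with folios that
	 * were never removed from the direct map.
	 */
	kvm_gmem_folio_restore_direct_map(folio);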
>
>>>
>>> Perhaps also add a check for kvm_arch_gmem_supports_no_direct_map() so
>>> this call can be completely removed by the compiler if it wasn't
>>> compiled in. IIUC whether the check is added in the caller or within
>>> kvm_gmem_folio_restore_direct_map() the call can still be elided.
>>
>> Same concern as above about kvm_gmem_folio_zap_direct_map(), i.e. the
>> per-fault cost in the case where kvm_arch_gmem_supports_no_direct_map() is
>> compiled in.
>>
>
> Please ignore this request!
>
>>>
>>>> kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order));
>>>> }
>>>>
>>>> @@ -596,6 +645,9 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>>>> /* Unmovable mappings are supposed to be marked unevictable as well. */
>>>> WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
>>>>
>>>> + if (flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
>>>> + mapping_set_no_direct_map(inode->i_mapping);
>>>> +
>>>> GMEM_I(inode)->flags = flags;
>>>>
>>>> file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR, &kvm_gmem_fops);
>>>> @@ -807,6 +859,8 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>>>> if (!is_prepared)
>>>> r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio);
>>>>
>>>> + kvm_gmem_folio_zap_direct_map(folio);
>>>> +
>>>
>>> Is there a reason why errors are not handled when faulting private memory?
>>
>> No, I can't see a reason. Will add a check, thanks.
>>
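Something like the below is what I had in mind for kvm_gmem_get_pfn()
(sketch only; whether zapping should be skipped when preparation fails
is still open):

	if (!is_prepared)
		r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio);

	/* Propagate a zap failure instead of silently ignoring it. */
	if (!r)
		r = kvm_gmem_folio_zap_direct_map(folio);

	folio_unlock(folio);

	if (!r)
		...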
>>>
>>>> folio_unlock(folio);
>>>>
>>>> if (!r)
>>>> --
>>>> 2.50.1