Message-ID: <d204f259-c965-466f-bd75-bb0f767ed8f1@amazon.com>
Date: Wed, 14 Jan 2026 13:55:43 +0000
From: Nikita Kalyazin <kalyazin@...zon.com>
To: Vlastimil Babka <vbabka@...e.cz>, "Kalyazin, Nikita"
<kalyazin@...zon.co.uk>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvmarm@...ts.linux.dev" <kvmarm@...ts.linux.dev>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, "bpf@...r.kernel.org"
<bpf@...r.kernel.org>, "linux-kselftest@...r.kernel.org"
<linux-kselftest@...r.kernel.org>
CC: "pbonzini@...hat.com" <pbonzini@...hat.com>, "corbet@....net"
<corbet@....net>, "maz@...nel.org" <maz@...nel.org>, "oupton@...nel.org"
<oupton@...nel.org>, "joey.gouly@....com" <joey.gouly@....com>,
"suzuki.poulose@....com" <suzuki.poulose@....com>, "yuzenghui@...wei.com"
<yuzenghui@...wei.com>, "catalin.marinas@....com" <catalin.marinas@....com>,
"will@...nel.org" <will@...nel.org>, "seanjc@...gle.com" <seanjc@...gle.com>,
"tglx@...utronix.de" <tglx@...utronix.de>, "mingo@...hat.com"
<mingo@...hat.com>, "bp@...en8.de" <bp@...en8.de>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>, "x86@...nel.org"
<x86@...nel.org>, "hpa@...or.com" <hpa@...or.com>, "luto@...nel.org"
<luto@...nel.org>, "peterz@...radead.org" <peterz@...radead.org>,
"willy@...radead.org" <willy@...radead.org>, "akpm@...ux-foundation.org"
<akpm@...ux-foundation.org>, "david@...nel.org" <david@...nel.org>,
"lorenzo.stoakes@...cle.com" <lorenzo.stoakes@...cle.com>,
"Liam.Howlett@...cle.com" <Liam.Howlett@...cle.com>, "rppt@...nel.org"
<rppt@...nel.org>, "surenb@...gle.com" <surenb@...gle.com>, "mhocko@...e.com"
<mhocko@...e.com>, "ast@...nel.org" <ast@...nel.org>, "daniel@...earbox.net"
<daniel@...earbox.net>, "andrii@...nel.org" <andrii@...nel.org>,
"martin.lau@...ux.dev" <martin.lau@...ux.dev>, "eddyz87@...il.com"
<eddyz87@...il.com>, "song@...nel.org" <song@...nel.org>,
"yonghong.song@...ux.dev" <yonghong.song@...ux.dev>,
"john.fastabend@...il.com" <john.fastabend@...il.com>, "kpsingh@...nel.org"
<kpsingh@...nel.org>, "sdf@...ichev.me" <sdf@...ichev.me>,
"haoluo@...gle.com" <haoluo@...gle.com>, "jolsa@...nel.org"
<jolsa@...nel.org>, "jgg@...pe.ca" <jgg@...pe.ca>, "jhubbard@...dia.com"
<jhubbard@...dia.com>, "peterx@...hat.com" <peterx@...hat.com>,
"jannh@...gle.com" <jannh@...gle.com>, "pfalcato@...e.de" <pfalcato@...e.de>,
"shuah@...nel.org" <shuah@...nel.org>, "riel@...riel.com" <riel@...riel.com>,
"baohua@...nel.org" <baohua@...nel.org>, "ryan.roberts@....com"
<ryan.roberts@....com>, "jgross@...e.com" <jgross@...e.com>,
"yu-cheng.yu@...el.com" <yu-cheng.yu@...el.com>, "kas@...nel.org"
<kas@...nel.org>, "coxu@...hat.com" <coxu@...hat.com>,
"kevin.brodsky@....com" <kevin.brodsky@....com>, "ackerleytng@...gle.com"
<ackerleytng@...gle.com>, "maobibo@...ngson.cn" <maobibo@...ngson.cn>,
"prsampat@....com" <prsampat@....com>, "mlevitsk@...hat.com"
<mlevitsk@...hat.com>, "isaku.yamahata@...el.com" <isaku.yamahata@...el.com>,
"jmattson@...gle.com" <jmattson@...gle.com>, "jthoughton@...gle.com"
<jthoughton@...gle.com>, "linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, "vannapurve@...gle.com"
<vannapurve@...gle.com>, "jackmanb@...gle.com" <jackmanb@...gle.com>,
"aneesh.kumar@...nel.org" <aneesh.kumar@...nel.org>, "patrick.roy@...ux.dev"
<patrick.roy@...ux.dev>, "Thomson, Jack" <jackabt@...zon.co.uk>, "Itazuri,
Takahiro" <itazur@...zon.co.uk>, "Manwaring, Derek" <derekmn@...zon.com>,
"Cali, Marco" <xmarcalx@...zon.co.uk>
Subject: Re: [PATCH v8 05/13] KVM: guest_memfd: Add flag to remove from direct
map

On 08/12/2025 08:43, Vlastimil Babka wrote:
> On 12/5/25 17:58, Kalyazin, Nikita wrote:
>> +static int kvm_gmem_folio_zap_direct_map(struct folio *folio)
>> +{
>> + int r = 0;
>> + unsigned long addr = (unsigned long) folio_address(folio);
>> + u64 gmem_flags = GMEM_I(folio_inode(folio))->flags;
>> +
>> + if (kvm_gmem_folio_no_direct_map(folio) || !(gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP))
>> + goto out;
>> +
>> + r = set_direct_map_valid_noflush(folio_page(folio, 0), folio_nr_pages(folio),
>> + false);
>> +
>> + if (r)
>> + goto out;
>> +
>> + folio->private = (void *) KVM_GMEM_FOLIO_NO_DIRECT_MAP;
>
> With Dave's suggestion on patch 1/13 to have folio_zap_direct_map(), setting
> this folio->private flag wouldn't be possible between the zap and tlb flush,
> but it's not an issue to set it before the zap, right?
I can't see an issue with that. Done that way in v9.
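
For illustration, roughly what that reordering looks like (a sketch only; it
assumes the folio_zap_direct_map() helper suggested in patch 1/13 takes the
folio, zaps the direct map entries and flushes the TLB itself, and returns an
errno; the actual v9 code may differ):

static int kvm_gmem_folio_zap_direct_map(struct folio *folio)
{
	u64 gmem_flags = GMEM_I(folio_inode(folio))->flags;
	int r;

	if (kvm_gmem_folio_no_direct_map(folio) ||
	    !(gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP))
		return 0;

	/*
	 * Mark the folio before the zap; folio_zap_direct_map() is assumed
	 * to do the TLB flush internally, so folio->private can no longer
	 * be set in between the zap and the flush.
	 */
	folio->private = (void *)KVM_GMEM_FOLIO_NO_DIRECT_MAP;

	r = folio_zap_direct_map(folio);
	if (r)
		folio->private = NULL;

	return r;
}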
>
>> + flush_tlb_kernel_range(addr, addr + folio_size(folio));
>> +
>> +out:
>> + return r;
>> +}
>> +
>> +static void kvm_gmem_folio_restore_direct_map(struct folio *folio)
>> +{
>> + /*
>> + * Direct map restoration cannot fail, as the only error condition
>> + * for direct map manipulation is failure to allocate page tables
>> + * when splitting huge pages, but this split would have already
>> + * happened in set_direct_map_invalid_noflush() in kvm_gmem_folio_zap_direct_map().
>> + * Thus set_direct_map_valid_noflush() here only updates prot bits.
>> + */
>> + if (kvm_gmem_folio_no_direct_map(folio))
>> + set_direct_map_valid_noflush(folio_page(folio, 0), folio_nr_pages(folio),
>> + true);
>
> I think you're missing here clearing KVM_GMEM_FOLIO_NO_DIRECT_MAP from
> folio->private, which means if there's another
> kvm_gmem_folio_zap_direct_map() call on it in the future, it will do nothing?
You're quite right, thanks. Fixed in v9.
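
Something along these lines, i.e. clearing folio->private once the mapping is
restored (again just a sketch, the exact v9 code may look slightly different):

static void kvm_gmem_folio_restore_direct_map(struct folio *folio)
{
	/*
	 * Restoration cannot fail: any huge page split already happened
	 * when the direct map entries were zapped, so this only flips
	 * the protection bits back.
	 */
	if (kvm_gmem_folio_no_direct_map(folio)) {
		set_direct_map_valid_noflush(folio_page(folio, 0),
					     folio_nr_pages(folio), true);
		/* Clear the flag so a later zap of this folio isn't skipped. */
		folio->private = NULL;
	}
}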
>
>> +}
>> +
>> static inline void kvm_gmem_mark_prepared(struct folio *folio)
>> {
>> folio_mark_uptodate(folio);
>> @@ -398,6 +444,7 @@ static vm_fault_t kvm_gmem_fault_user_mapping(struct vm_fault *vmf)
>> struct inode *inode = file_inode(vmf->vma->vm_file);
>> struct folio *folio;
>> vm_fault_t ret = VM_FAULT_LOCKED;
>> + int err;
>>
>> if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
>> return VM_FAULT_SIGBUS;
>> @@ -423,6 +470,12 @@ static vm_fault_t kvm_gmem_fault_user_mapping(struct vm_fault *vmf)
>> kvm_gmem_mark_prepared(folio);
>> }
>>
>> + err = kvm_gmem_folio_zap_direct_map(folio);
>> + if (err) {
>> + ret = vmf_error(err);
>> + goto out_folio;
>> + }
>> +
>> vmf->page = folio_file_page(folio, vmf->pgoff);
>>
>> out_folio:
>> @@ -533,6 +586,8 @@ static void kvm_gmem_free_folio(struct folio *folio)
>> kvm_pfn_t pfn = page_to_pfn(page);
>> int order = folio_order(folio);
>>
>> + kvm_gmem_folio_restore_direct_map(folio);
>> +
>> kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order));
>> }
>>
>> @@ -596,6 +651,9 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>> /* Unmovable mappings are supposed to be marked unevictable as well. */
>> WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
>>
>> + if (flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
>> + mapping_set_no_direct_map(inode->i_mapping);
>> +
>> GMEM_I(inode)->flags = flags;
>>
>> file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR, &kvm_gmem_fops);
>> @@ -807,6 +865,8 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>> if (!is_prepared)
>> r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio);
>>
>> + kvm_gmem_folio_zap_direct_map(folio);
>> +
>> folio_unlock(folio);
>>
>> if (!r)
>