Message-ID: <bc93c396-78b1-491c-8857-41114aa585d7@gmail.com>
Date: Mon, 15 Dec 2025 11:43:06 +0000
From: "Thomson, Jack" <jackabt.amazon@...il.com>
To: Marc Zyngier <maz@...nel.org>
Cc: oliver.upton@...ux.dev, pbonzini@...hat.com, joey.gouly@....com,
 suzuki.poulose@....com, yuzenghui@...wei.com, catalin.marinas@....com,
 will@...nel.org, shuah@...nel.org, linux-arm-kernel@...ts.infradead.org,
 kvmarm@...ts.linux.dev, linux-kernel@...r.kernel.org,
 linux-kselftest@...r.kernel.org, isaku.yamahata@...el.com,
 xmarcalx@...zon.co.uk, kalyazin@...zon.co.uk, jackabt@...zon.com
Subject: Re: [PATCH v3 1/3] KVM: arm64: Add pre_fault_memory implementation



On 24/11/2025 11:34 am, Marc Zyngier wrote:
> On Wed, 19 Nov 2025 15:49:08 +0000,
> Jack Thomson <jackabt.amazon@...il.com> wrote:
>>
>> From: Jack Thomson <jackabt@...zon.com>
>>
>> Add kvm_arch_vcpu_pre_fault_memory() for arm64. The implementation hands
>> off the stage-2 faulting logic to either gmem_abort() or
>> user_mem_abort().
>>
>> Add an optional page_size output parameter to user_mem_abort() to
>> return the VMA page size, which is needed when pre-faulting.
>>
>> Update the documentation to clarify x86 specific behaviour.
>>
>> Signed-off-by: Jack Thomson <jackabt@...zon.com>
>> ---
>>   Documentation/virt/kvm/api.rst |  3 +-
>>   arch/arm64/kvm/Kconfig         |  1 +
>>   arch/arm64/kvm/arm.c           |  1 +
>>   arch/arm64/kvm/mmu.c           | 73 ++++++++++++++++++++++++++++++++--
>>   4 files changed, 73 insertions(+), 5 deletions(-)
>>
>> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
>> index 57061fa29e6a..30872d080511 100644
>> --- a/Documentation/virt/kvm/api.rst
>> +++ b/Documentation/virt/kvm/api.rst
>> @@ -6493,7 +6493,8 @@ Errors:
>>   KVM_PRE_FAULT_MEMORY populates KVM's stage-2 page tables used to map memory
>>   for the current vCPU state.  KVM maps memory as if the vCPU generated a
>>   stage-2 read page fault, e.g. faults in memory as needed, but doesn't break
>> -CoW.  However, KVM does not mark any newly created stage-2 PTE as Accessed.
>> +CoW.  However, on x86, KVM does not mark any newly created stage-2 PTE as
>> +Accessed.
>>   
>>   In the case of confidential VM types where there is an initial set up of
>>   private guest memory before the guest is 'finalized'/measured, this ioctl
>> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
>> index 4f803fd1c99a..6872aaabe16c 100644
>> --- a/arch/arm64/kvm/Kconfig
>> +++ b/arch/arm64/kvm/Kconfig
>> @@ -25,6 +25,7 @@ menuconfig KVM
>>   	select HAVE_KVM_CPU_RELAX_INTERCEPT
>>   	select KVM_MMIO
>>   	select KVM_GENERIC_DIRTYLOG_READ_PROTECT
>> +	select KVM_GENERIC_PRE_FAULT_MEMORY
>>   	select VIRT_XFER_TO_GUEST_WORK
>>   	select KVM_VFIO
>>   	select HAVE_KVM_DIRTY_RING_ACQ_REL
>> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
>> index 870953b4a8a7..88c5dc2b4ee8 100644
>> --- a/arch/arm64/kvm/arm.c
>> +++ b/arch/arm64/kvm/arm.c
>> @@ -327,6 +327,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>   	case KVM_CAP_IRQFD_RESAMPLE:
>>   	case KVM_CAP_COUNTER_OFFSET:
>>   	case KVM_CAP_ARM_WRITABLE_IMP_ID_REGS:
>> +	case KVM_CAP_PRE_FAULT_MEMORY:
>>   		r = 1;
> 
> How does this work with pKVM, where the host is not in charge of
> dealing with stage-2?
For the pKVM case, would

     if (vcpu_is_protected(vcpu))
         return -EPERM;

be the appropriate way to handle this?
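i.e., something along these lines at the top of
kvm_arch_vcpu_pre_fault_memory() (untested sketch):

	long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
					    struct kvm_pre_fault_memory *range)
	{
		/*
		 * Protected VMs don't let the host manage stage-2, so
		 * refuse to pre-fault on the guest's behalf.
		 */
		if (vcpu_is_protected(vcpu))
			return -EPERM;
		...
	}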
>>   		break;
>>   	case KVM_CAP_SET_GUEST_DEBUG2:
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index 7cc964af8d30..cba09168fc6d 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1599,8 +1599,8 @@ static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>   
>>   static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>   			  struct kvm_s2_trans *nested,
>> -			  struct kvm_memory_slot *memslot, unsigned long hva,
>> -			  bool fault_is_perm)
>> +			  struct kvm_memory_slot *memslot, long *page_size,
> 
> Why is page_size a signed type? A page size is never negative.
> 
>> +			  unsigned long hva, bool fault_is_perm)
> 
> I really wish we'd stop adding parameters to this function, as it has
> long stopped being readable. It would make a lot more sense if we
> passed a descriptor for the fault, containing the ipa, hva, memslot
> and fault type.
I found a patch series that looks to address this [1]. Would you like
this fixed as part of this series?

[1] 
https://lore.kernel.org/linux-arm-kernel/20250821210042.3451147-1-seanjc@google.com/
>>   {
>>   	int ret = 0;
>>   	bool topup_memcache;
>> @@ -1878,6 +1878,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>   	kvm_release_faultin_page(kvm, page, !!ret, writable);
>>   	kvm_fault_unlock(kvm);
>>   
>> +	if (page_size)
>> +		*page_size = vma_pagesize;
>> +
>>   	/* Mark the page dirty only if the fault is handled successfully */
>>   	if (writable && !ret)
>>   		mark_page_dirty_in_slot(kvm, memslot, gfn);
>> @@ -2080,8 +2083,8 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>>   		ret = gmem_abort(vcpu, fault_ipa, nested, memslot,
>>   				 esr_fsc_is_permission_fault(esr));
>>   	else
>> -		ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
>> -				     esr_fsc_is_permission_fault(esr));
>> +		ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, NULL,
>> +				     hva, esr_fsc_is_permission_fault(esr));
>>   	if (ret == 0)
>>   		ret = 1;
>>   out:
>> @@ -2457,3 +2460,65 @@ void kvm_toggle_cache(struct kvm_vcpu *vcpu, bool was_enabled)
>>   
>>   	trace_kvm_toggle_cache(*vcpu_pc(vcpu), was_enabled, now_enabled);
>>   }
>> +
>> +long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
>> +				    struct kvm_pre_fault_memory *range)
>> +{
>> +	int ret, idx;
>> +	hva_t hva;
>> +	phys_addr_t end;
>> +	struct kvm_memory_slot *memslot;
>> +	struct kvm_vcpu_fault_info stored_fault, *fault_info;
>> +
>> +	long page_size = PAGE_SIZE;
>> +	phys_addr_t ipa = range->gpa;
>> +	gfn_t gfn = gpa_to_gfn(range->gpa);
> 
> nit: Please order this in a more readable way, preferably with long
> line first.
> 
>> +
>> +	idx = srcu_read_lock(&vcpu->kvm->srcu);
> 
> ??? Aren't we already guaranteed to be under the SRCU read lock?
> 
>> +
>> +	if (ipa >= kvm_phys_size(vcpu->arch.hw_mmu)) {
>> +		ret = -ENOENT;
>> +		goto out_unlock;
>> +	}
>> +
>> +	memslot = gfn_to_memslot(vcpu->kvm, gfn);
>> +	if (!memslot) {
>> +		ret = -ENOENT;
>> +		goto out_unlock;
>> +	}
>> +
>> +	fault_info = &vcpu->arch.fault;
>> +	stored_fault = *fault_info;
> 
> If this is a vcpu ioctl, can the fault information be actually valid
> while userspace is issuing an ioctl? Wouldn't that mean that we are
> exiting to userspace in the middle of handling an exception?
> 
>> +
>> +	/* Generate a synthetic abort for the pre-fault address */
>> +	fault_info->esr_el2 = ESR_ELx_EC_DABT_LOW;
>> +	fault_info->esr_el2 &= ~ESR_ELx_ISV;
> 
> You are constructing this from scratch. How can ISV be set?
> 
>> +	fault_info->esr_el2 |= ESR_ELx_FSC_FAULT_L(KVM_PGTABLE_LAST_LEVEL);
>> +
>> +	fault_info->hpfar_el2 = HPFAR_EL2_NS |
>> +		FIELD_PREP(HPFAR_EL2_FIPA, ipa >> 12);
>> +
>> +	if (kvm_slot_has_gmem(memslot)) {
>> +		ret = gmem_abort(vcpu, ipa, NULL, memslot, false);
>> +	} else {
>> +		hva = gfn_to_hva_memslot_prot(memslot, gfn, NULL);
>> +		if (kvm_is_error_hva(hva)) {
>> +			ret = -EFAULT;
>> +			goto out;
>> +		}
>> +		ret = user_mem_abort(vcpu, ipa, NULL, memslot, &page_size, hva,
>> +				     false);
>> +	}
>> +
>> +	if (ret < 0)
>> +		goto out;
>> +
>> +	end = (range->gpa & ~(page_size - 1)) + page_size;
> 
> This suspiciously looks like one of the __ALIGN_KERNEL* macros.
> 
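Ack. Assuming page_size is always a power of two here, I think this
could be written as:

	end = ALIGN_DOWN(range->gpa, page_size) + page_size;

Is that roughly what you had in mind? Happy to switch to whichever
macro you prefer.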
>> +	ret = min(range->size, end - range->gpa);
>> +
>> +out:
>> +	*fault_info = stored_fault;
>> +out_unlock:
>> +	srcu_read_unlock(&vcpu->kvm->srcu, idx);
>> +	return ret;
>> +}
> 
> Thanks,
> 
> 	M.
> 

-- 
Thanks,
Jack
