Message-ID: <ab2ac224-ec8f-423a-80ce-0d7b18a7a173@arm.com>
Date: Fri, 31 May 2024 17:03:54 +0100
From: Steven Price <steven.price@....com>
To: Fuad Tabba <tabba@...gle.com>
Cc: kvm@...r.kernel.org, kvmarm@...ts.linux.dev,
 Catalin Marinas <catalin.marinas@....com>, Marc Zyngier <maz@...nel.org>,
 Will Deacon <will@...nel.org>, James Morse <james.morse@....com>,
 Oliver Upton <oliver.upton@...ux.dev>,
 Suzuki K Poulose <suzuki.poulose@....com>, Zenghui Yu
 <yuzenghui@...wei.com>, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org, Joey Gouly <joey.gouly@....com>,
 Alexandru Elisei <alexandru.elisei@....com>,
 Christoffer Dall <christoffer.dall@....com>, linux-coco@...ts.linux.dev,
 Ganapatrao Kulkarni <gankulkarni@...amperecomputing.com>
Subject: Re: [PATCH v2 21/43] arm64: RME: Runtime faulting of memory

On 25/04/2024 11:43, Fuad Tabba wrote:
> Hi,

Hi,

Thanks for the review. Sorry I didn't respond earlier.

> On Fri, Apr 12, 2024 at 9:44 AM Steven Price <steven.price@....com> wrote:
>>
<snip>
>> +static int private_memslot_fault(struct kvm_vcpu *vcpu,
>> +                                phys_addr_t fault_ipa,
>> +                                struct kvm_memory_slot *memslot)
>> +{
>> +       struct kvm *kvm = vcpu->kvm;
>> +       gpa_t gpa_stolen_mask = kvm_gpa_stolen_bits(kvm);
>> +       gfn_t gfn = (fault_ipa & ~gpa_stolen_mask) >> PAGE_SHIFT;
>> +       bool is_priv_gfn = !((fault_ipa & gpa_stolen_mask) == gpa_stolen_mask);
>> +       bool priv_exists = kvm_mem_is_private(kvm, gfn);
>> +       struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
>> +       int order;
>> +       kvm_pfn_t pfn;
>> +       int ret;
>> +
>> +       if (priv_exists != is_priv_gfn) {
>> +               kvm_prepare_memory_fault_exit(vcpu,
>> +                                             fault_ipa & ~gpa_stolen_mask,
>> +                                             PAGE_SIZE,
>> +                                             kvm_is_write_fault(vcpu),
>> +                                             false, is_priv_gfn);
>> +
>> +               return 0;
>> +       }
>> +
>> +       if (!is_priv_gfn) {
>> +               /* Not a private mapping, handling normally */
>> +               return -EAGAIN;
>> +       }
>> +
>> +       if (kvm_gmem_get_pfn(kvm, memslot, gfn, &pfn, &order))
>> +               return 1; /* Retry */
> 
> You don't need to pass a variable to hold the order if you don't need
> it. You can pass NULL.

Ah, good point - that simplifies things.

> I am also confused about the return, why do you return 1 regardless of
> the reason kvm_gmem_get_pfn() fails?

Thinking about this, I don't think we actually expect kvm_gmem_get_pfn()
to fail, so it's more appropriate to just return the error value
directly rather than always returning 1.
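Combining both points, the call would end up roughly like this in the next version (a fragment in kernel context, shown for illustration only, not standalone code):

```c
	/* Order is unused here, so pass NULL; propagate any unexpected error. */
	ret = kvm_gmem_get_pfn(kvm, memslot, gfn, &pfn, NULL);
	if (ret)
		return ret;
```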

>> +       ret = kvm_mmu_topup_memory_cache(memcache,
>> +                                        kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu));
>> +       if (ret)
>> +               return ret;
> 
> If this fails you should release the page you got earlier (e.g.,
> kvm_release_pfn_clean()), or you could move it before
> kvm_gmem_get_pfn().

Good point, however...

>> +       /* FIXME: Should be able to use bigger than PAGE_SIZE mappings */
>> +       ret = realm_map_ipa(kvm, fault_ipa, pfn, PAGE_SIZE, KVM_PGTABLE_PROT_W,
>> +                            memcache);
>> +       if (!ret)
>> +               return 1; /* Handled */
> 
> Should also release the page if it fails. Speaking of which,
> where/when do you eventually release the page?

... I messed this up ;) It seems I'm managing to leak all guestmem
pages. I'm not sure what I was thinking, but I'd somehow got it into my
head that guestmem wasn't reference counting the pages. I'll fix this up
in the next version.
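For the record, the pattern the fix needs is the usual "release on every exit path after the get" unwind. Here is a minimal userspace sketch of that shape; the structs and functions are hypothetical stand-ins for kvm_gmem_get_pfn() / kvm_release_pfn_clean() and the two failure points discussed above, not the real KVM APIs:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a reference-counted guestmem page. */
struct page { int refcount; };

static struct page backing_page;

/* Models kvm_gmem_get_pfn(): hands back a page with a reference held. */
static int fake_gmem_get_page(struct page **out)
{
	backing_page.refcount++;
	*out = &backing_page;
	return 0;
}

/* Models kvm_release_pfn_clean(): drops the reference taken above. */
static void fake_release_page(struct page *p)
{
	p->refcount--;
}

/*
 * Models the fault handler: once the page reference is taken, every
 * exit path (the memory-cache topup failure, the map failure, and the
 * success path once the mapping holds its own accounting) must release.
 */
static int fake_fault(bool topup_fails, bool map_fails)
{
	struct page *p;
	int ret;

	ret = fake_gmem_get_page(&p);
	if (ret)
		return ret;

	if (topup_fails) {	/* models kvm_mmu_topup_memory_cache() failing */
		ret = -12;	/* -ENOMEM */
		goto out_release;
	}

	if (map_fails) {	/* models realm_map_ipa() failing */
		ret = -22;	/* -EINVAL */
		goto out_release;
	}

	ret = 1;		/* handled */
out_release:
	/* The fix under discussion: drop the reference on all paths. */
	fake_release_page(p);
	return ret;
}
```

With this shape the refcount returns to zero whichever branch is taken, which is exactly the property the current code is missing.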

Thanks,

Steve

