Message-ID: <457896b2-b462-639e-bb40-dee3716fcb9a@linux.vnet.ibm.com>
Date: Thu, 28 Oct 2021 16:48:28 +0200
From: Janis Schoetterl-Glausch <scgl@...ux.vnet.ibm.com>
To: David Hildenbrand <david@...hat.com>,
Janis Schoetterl-Glausch <scgl@...ux.ibm.com>,
Christian Borntraeger <borntraeger@...ibm.com>,
Janosch Frank <frankja@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>
Cc: Claudio Imbrenda <imbrenda@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
kvm@...r.kernel.org, linux-s390@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] KVM: s390: gaccess: Cleanup access to guest frames
On 10/28/21 16:25, David Hildenbrand wrote:
> On 28.10.21 15:55, Janis Schoetterl-Glausch wrote:
>> Introduce a helper function for guest frame access.
>
> "guest page access"
Ok.
>
> But I do wonder if you actually want to call it
>
> "access_guest_abs"
>
> and say "guest absolute access" instead here.
>
> Because we're dealing with absolute addresses, and the fact that we
> access them page-wise is just because we have to perform a page-wise
> translation in the callers (either virtual->absolute or real->absolute).
>
> Theoretically, if you know the access spans X pages that are contiguous
> in absolute address space, nothing would prevent using that function
> directly across those X pages with a single call.
There is currently no point to that, is there?
kvm_read/write_guest break the region up into pages anyway,
so there is no reason to try to identify larger contiguous chunks.
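
For reference, a simplified sketch (from memory, not the exact upstream
source) of how kvm_read_guest already splits a guest physical range into
per-page accesses; kvm_write_guest is structured the same way, so a
contiguous multi-page absolute range ends up as the same sequence of
per-page calls either way:

    /* Rough sketch of the loop in virt/kvm/kvm_main.c; the name is made up. */
    static int kvm_read_guest_sketch(struct kvm *kvm, gpa_t gpa, void *data,
                                     unsigned long len)
    {
            gfn_t gfn = gpa_to_gfn(gpa);
            unsigned long offset = offset_in_page(gpa);
            unsigned long seg;
            int ret;

            while (len) {
                    /* Never cross a page boundary in a single access. */
                    seg = min(len, PAGE_SIZE - offset);
                    ret = kvm_read_guest_page(kvm, gfn, data, offset, seg);
                    if (ret < 0)
                            return ret;
                    offset = 0;
                    len -= seg;
                    data += seg;
                    ++gfn;
            }
            return 0;
    }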
>
>>
>> Signed-off-by: Janis Schoetterl-Glausch <scgl@...ux.ibm.com>
>> ---
>> arch/s390/kvm/gaccess.c | 24 ++++++++++++++++--------
>> 1 file changed, 16 insertions(+), 8 deletions(-)
>>
>> diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
>> index f0848c37b003..9a633310b6fe 100644
>> --- a/arch/s390/kvm/gaccess.c
>> +++ b/arch/s390/kvm/gaccess.c
>> @@ -866,6 +866,20 @@ static int guest_range_to_gpas(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar,
>> return 0;
>> }
>>
>> +static int access_guest_page(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
>> + void *data, unsigned int len)
>> +{
>> + const unsigned int offset = offset_in_page(gpa);
>> + const gfn_t gfn = gpa_to_gfn(gpa);
>> + int rc;
>> +
>> + if (mode == GACC_STORE)
>> + rc = kvm_write_guest_page(kvm, gfn, data, offset, len);
>> + else
>> + rc = kvm_read_guest_page(kvm, gfn, data, offset, len);
>> + return rc;
>> +}
>> +
>> int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
>> unsigned long len, enum gacc_mode mode)
>> {
>> @@ -896,10 +910,7 @@ int access_guest(struct kvm_vcpu *vcpu, unsigned long ga, u8 ar, void *data,
>> rc = guest_range_to_gpas(vcpu, ga, ar, gpas, len, asce, mode);
>> for (idx = 0; idx < nr_pages && !rc; idx++) {
>> fragment_len = min(PAGE_SIZE - offset_in_page(gpas[idx]), len);
>> - if (mode == GACC_STORE)
>> - rc = kvm_write_guest(vcpu->kvm, gpas[idx], data, fragment_len);
>> - else
>> - rc = kvm_read_guest(vcpu->kvm, gpas[idx], data, fragment_len);
>> + rc = access_guest_page(vcpu->kvm, mode, gpas[idx], data, fragment_len);
>> len -= fragment_len;
>> data += fragment_len;
>> }
>> @@ -920,10 +931,7 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
>> while (len && !rc) {
>> gpa = kvm_s390_real_to_abs(vcpu, gra);
>> fragment_len = min(PAGE_SIZE - offset_in_page(gpa), len);
>> - if (mode)
>> - rc = write_guest_abs(vcpu, gpa, data, fragment_len);
>> - else
>> - rc = read_guest_abs(vcpu, gpa, data, fragment_len);
>> + rc = access_guest_page(vcpu->kvm, mode, gpa, data, fragment_len);
>> len -= fragment_len;
>> gra += fragment_len;
>> data += fragment_len;
>>
>
>