Message-ID: <bf15209d-2c50-9957-af24-c4f428f213b1@redhat.com>
Date: Thu, 14 Apr 2022 16:14:22 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>,
Peter Xu <peterx@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Ben Gardon <bgardon@...gle.com>,
David Matlack <dmatlack@...gle.com>,
Andrew Jones <drjones@...hat.com>
Subject: Re: [PATCH] kvm: selftests: Fix cut-off of addr_gva2gpa lookup

On 4/14/22 15:56, Sean Christopherson wrote:
>> - return (pte[index[0]].pfn * vm->page_size) + (gva & 0xfffu);
>> + return ((vm_paddr_t)pte[index[0]].pfn * vm->page_size) + (gva & 0xfffu);
> This is but one of many paths that can get burned by pfn being 40 bits. The
> most backport-friendly fix is probably to add a pfn=>gpa helper and use that to
> replace the myriad "pfn * vm->page_size" instances.
>
> For a true long term solution, my vote is to do away with the bit field struct
> and use #define'd masks and whatnot.
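
A minimal sketch of what such a helper could look like, purely for
illustration (the name vm_pfn_to_gpa and its exact placement are assumptions,
not taken from the selftests tree):

  /*
   * Taking the pfn as a uint64_t forces the 40-bit bit-field to be
   * widened before the multiply, so the gpa cannot be truncated.
   */
  static inline vm_paddr_t vm_pfn_to_gpa(struct kvm_vm *vm, uint64_t pfn)
  {
          return pfn * vm->page_size;
  }

  /* The lookup in the patch above would then read: */
  return vm_pfn_to_gpa(vm, pte[index[0]].pfn) + (gva & 0xfffu);
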
Yes, bitfields larger than 32 bits are a mess.
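
Roughly, a mask-based layout could look something like the sketch below
(macro and helper names are made up for the example; BIT_ULL/GENMASK_ULL as
provided by the kernel's tools headers):

  #define PTE_PRESENT_MASK   BIT_ULL(0)
  #define PTE_WRITABLE_MASK  BIT_ULL(1)
  #define PTE_PFN_MASK       GENMASK_ULL(51, 12)

  static inline uint64_t pte_pfn(uint64_t pte)
  {
          return (pte & PTE_PFN_MASK) >> 12;
  }

  static inline vm_paddr_t pte_gpa(uint64_t pte)
  {
          /* No bit-field involved, so no risk of a narrowing multiply. */
          return pte & PTE_PFN_MASK;
  }
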
Paolo