Message-ID: <CAMkAt6qh3Wbpe1-8cmm3kbGY5CSjG0Uccw5azyOMd349hajehQ@mail.gmail.com>
Date: Mon, 9 Jan 2023 14:19:26 -0700
From: Peter Gonda <pgonda@...gle.com>
To: Ackerley Tng <ackerleytng@...gle.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
marcorr@...gle.com, seanjc@...gle.com, michael.roth@....com,
thomas.lendacky@....com, joro@...tes.org, mizhang@...gle.com,
pbonzini@...hat.com, andrew.jones@...ux.dev, vannapurve@...gle.com
Subject: Re: [PATCH V5 5/7] KVM: selftests: add library for
creating/interacting with SEV guests

On Wed, Dec 21, 2022 at 2:13 PM Ackerley Tng <ackerleytng@...gle.com> wrote:
>
>
> > +static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
> > +{
> > +	const struct sparsebit *protected_phy_pages =
> > +		region->protected_phy_pages;
> > +	const uint64_t memory_size = region->region.memory_size;
> > +	const vm_paddr_t gpa_start = region->region.guest_phys_addr;
> > +	sparsebit_idx_t pg = 0;
> > +
> > +	sev_register_user_region(vm, region);
> > +
> > +	while (pg < (memory_size / vm->page_size)) {
> > +		sparsebit_idx_t nr_pages;
> > +
> > +		if (sparsebit_is_clear(protected_phy_pages, pg)) {
> > +			pg = sparsebit_next_set(protected_phy_pages, pg);
> > +			if (!pg)
> > +				break;
> > +		}
> > +
> > +		nr_pages = sparsebit_next_clear(protected_phy_pages, pg) - pg;
> > +		if (nr_pages <= 0)
> > +			nr_pages = 1;
>
> I think this may not be correct in the case where the sparsebit has the
> range [x, 2**64-1] (inclusive) set. In that case, sparsebit_next_clear()
> will return 0, but the number of pages could be more than 1.
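>
> For example, if pages [10, 2**64-1] are all set, there is no clear bit
> at or above index 10 for sparsebit_next_clear() to find, so it returns
> 0 (its "none found" value), and sparsebit_next_clear(...) - pg no
> longer gives the length of the set range starting at pg.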
>
> > +
> > +		sev_launch_update_data(vm, gpa_start + pg * vm->page_size,
>
> Computing the beginning of the gpa range with
>
> gpa_start + pg * vm->page_size
>
> only works if this memory region's gpa_start is 0.
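>
> For example, with vm->page_size = 0x1000 and gpa_start = 0x100000, the
> protected page at gpa 0x100000 has absolute page number pg = 0x100, so
> this expression computes 0x100000 + 0x100 * 0x1000 = 0x200000 instead
> of 0x100000.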
>
> > +				       nr_pages * vm->page_size);
> > +		pg += nr_pages;
> > +	}
> > +}
>
> Here's a suggestion (I'm using this on a TDX version of this patch)

Thanks for this catch and the code. I've pulled this into the V6 I am preparing.

>
>
> /**
>  * Iterate over set ranges within sparsebit @s. In each iteration,
>  * @range_begin and @range_end will take the beginning and end of the
>  * set range, which are of type sparsebit_idx_t.
>  *
>  * For example, if the range [3, 7] (inclusive) is set, within the
>  * iteration, @range_begin will take the value 3 and @range_end will
>  * take the value 7.
>  *
>  * Ensure that there is at least one bit set before using this macro
>  * with sparsebit_any_set(), because sparsebit_first_set() will abort
>  * if none are set.
>  */
> #define sparsebit_for_each_set_range(s, range_begin, range_end)	\
> 	for (range_begin = sparsebit_first_set(s),			\
> 	     range_end = sparsebit_next_clear(s, range_begin) - 1;	\
> 	     range_begin && range_end;					\
> 	     range_begin = sparsebit_next_set(s, range_end),		\
> 	     range_end = sparsebit_next_clear(s, range_begin) - 1)
>
> /*
>  * sparsebit_next_clear() can return 0 if [x, 2**64-1] are all set, and
>  * the -1 would then cause an underflow back to 2**64 - 1. This is
>  * expected and correct.
>  *
>  * If the last range in the sparsebit is [x, y] and we try to iterate,
>  * sparsebit_next_set() will return 0, and sparsebit_next_clear() will
>  * try and find the first range, but that's correct because the
>  * condition expression would cause us to quit the loop.
>  */
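>
> A quick usage sketch (hypothetical, not part of the patch; assumes
> <stdio.h> plus the selftests' sparsebit API): setting [3, 7] and
> [20, 25] should produce exactly two iterations:
>
> struct sparsebit *s = sparsebit_alloc();
> sparsebit_idx_t begin, end;
>
> sparsebit_set_num(s, 3, 5);	/* sets bits [3, 7] */
> sparsebit_set_num(s, 20, 6);	/* sets bits [20, 25] */
>
> if (sparsebit_any_set(s)) {	/* required before iterating */
> 	sparsebit_for_each_set_range(s, begin, end)
> 		printf("[%lu, %lu]\n",
> 		       (unsigned long)begin, (unsigned long)end);
> }
> sparsebit_free(&s);	/* takes &s; frees the sparsebit */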
>
>
> static void encrypt_region(struct kvm_vm *vm, struct userspace_mem_region *region)
> {
> 	const struct sparsebit *protected_phy_pages =
> 		region->protected_phy_pages;
> 	const vm_paddr_t gpa_base = region->region.guest_phys_addr;
> 	const sparsebit_idx_t lowest_page_in_region = gpa_base >> vm->page_shift;
>
> 	sparsebit_idx_t i;
> 	sparsebit_idx_t j;
>
> 	if (!sparsebit_any_set(protected_phy_pages))
> 		return;
>
> 	sev_register_user_region(vm, region);
>
> 	sparsebit_for_each_set_range(protected_phy_pages, i, j) {
> 		const uint64_t size_to_load = (j - i + 1) * vm->page_size;
> 		const uint64_t offset = (i - lowest_page_in_region) * vm->page_size;
> 		const uint64_t gpa = gpa_base + offset;
>
> 		sev_launch_update_data(vm, gpa, size_to_load);
> 	}
> }
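>
> Note the early return: if nothing in a region is protected, the region
> is never registered with sev_register_user_region() and no
> sev_launch_update_data() calls are made for it, whereas the original
> while-loop version registered every region unconditionally.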