Message-ID: <20230106041346.GA2288017@chaop.bj.intel.com>
Date: Fri, 6 Jan 2023 12:13:46 +0800
From: Chao Peng <chao.p.peng@...ux.intel.com>
To: Vishal Annapurve <vannapurve@...gle.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
linux-arch@...r.kernel.org, linux-api@...r.kernel.org,
linux-doc@...r.kernel.org, qemu-devel@...gnu.org,
Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Arnd Bergmann <arnd@...db.de>,
Naoya Horiguchi <naoya.horiguchi@....com>,
Miaohe Lin <linmiaohe@...wei.com>, x86@...nel.org,
"H . Peter Anvin" <hpa@...or.com>, Hugh Dickins <hughd@...gle.com>,
Jeff Layton <jlayton@...nel.org>,
"J . Bruce Fields" <bfields@...ldses.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Shuah Khan <shuah@...nel.org>, Mike Rapoport <rppt@...nel.org>,
Steven Price <steven.price@....com>,
"Maciej S . Szmigiero" <mail@...iej.szmigiero.name>,
Vlastimil Babka <vbabka@...e.cz>,
Yu Zhang <yu.c.zhang@...ux.intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
luto@...nel.org, jun.nakajima@...el.com, dave.hansen@...el.com,
ak@...ux.intel.com, david@...hat.com, aarcange@...hat.com,
ddutile@...hat.com, dhildenb@...hat.com,
Quentin Perret <qperret@...gle.com>, tabba@...gle.com,
Michael Roth <michael.roth@....com>, mhocko@...e.com,
wei.w.wang@...el.com
Subject: Re: [PATCH v10 9/9] KVM: Enable and expose KVM_MEM_PRIVATE
On Thu, Jan 05, 2023 at 12:38:30PM -0800, Vishal Annapurve wrote:
> On Thu, Dec 1, 2022 at 10:20 PM Chao Peng <chao.p.peng@...ux.intel.com> wrote:
> >
> > +#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM
> > +static bool restrictedmem_range_is_valid(struct kvm_memory_slot *slot,
> > +					 pgoff_t start, pgoff_t end,
> > +					 gfn_t *gfn_start, gfn_t *gfn_end)
> > +{
> > +	unsigned long base_pgoff = slot->restricted_offset >> PAGE_SHIFT;
> > +
> > +	if (start > base_pgoff)
> > +		*gfn_start = slot->base_gfn + start - base_pgoff;
>
> There should be a check here for the case where start is a very large
> value, otherwise "start - base_pgoff" can point past the slot. An
> additional check could look like:
> 	if (start >= base_pgoff + slot->npages)
> 		return false;
>
> > +	else
> > +		*gfn_start = slot->base_gfn;
> > +
> > +	if (end < base_pgoff + slot->npages)
> > +		*gfn_end = slot->base_gfn + end - base_pgoff;
>
> If "end" is smaller than base_pgoff, this can cause overflow and
> return the range as valid. There should be additional check:
> if (end < base_pgoff)
> return false;
Thanks! Both are good catches. The improved code:
static bool restrictedmem_range_is_valid(struct kvm_memory_slot *slot,
					 pgoff_t start, pgoff_t end,
					 gfn_t *gfn_start, gfn_t *gfn_end)
{
	unsigned long base_pgoff = slot->restricted_offset >> PAGE_SHIFT;

	if (start >= base_pgoff + slot->npages)
		return false;
	else if (start <= base_pgoff)
		*gfn_start = slot->base_gfn;
	else
		*gfn_start = start - base_pgoff + slot->base_gfn;

	if (end <= base_pgoff)
		return false;
	else if (end >= base_pgoff + slot->npages)
		*gfn_end = slot->base_gfn + slot->npages;
	else
		*gfn_end = end - base_pgoff + slot->base_gfn;

	if (*gfn_start >= *gfn_end)
		return false;

	return true;
}
Thanks,
Chao
>
>
> > +	else
> > +		*gfn_end = slot->base_gfn + slot->npages;
> > +
> > +	if (*gfn_start >= *gfn_end)
> > +		return false;
> > +
> > +	return true;
> > +}
> > +