Message-ID: <aAn5OlQiBKNw0rH8@yzhao56-desk.sh.intel.com>
Date: Thu, 24 Apr 2025 16:41:30 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
CC: <pbonzini@...hat.com>, <seanjc@...gle.com>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>, <x86@...nel.org>,
<rick.p.edgecombe@...el.com>, <dave.hansen@...el.com>,
<kirill.shutemov@...el.com>, <tabba@...gle.com>, <ackerleytng@...gle.com>,
<quic_eberman@...cinc.com>, <michael.roth@....com>, <david@...hat.com>,
<vannapurve@...gle.com>, <vbabka@...e.cz>, <jroedel@...e.de>,
<thomas.lendacky@....com>, <pgonda@...gle.com>, <zhiquan1.li@...el.com>,
<fan.du@...el.com>, <jun.miao@...el.com>, <ira.weiny@...el.com>,
<isaku.yamahata@...el.com>, <xiaoyao.li@...el.com>,
<binbin.wu@...ux.intel.com>, <chao.p.peng@...el.com>
Subject: Re: [RFC PATCH 02/21] x86/virt/tdx: Enhance tdh_mem_page_aug() to
support huge pages
On Thu, Apr 24, 2025 at 10:48:53AM +0300, Kirill A. Shutemov wrote:
> On Thu, Apr 24, 2025 at 11:04:28AM +0800, Yan Zhao wrote:
> > Enhance the SEAMCALL wrapper tdh_mem_page_aug() to support huge pages.
> >
> > Verify the validity of the level and ensure that the mapping range is fully
> > contained within the page folio.
> >
> > As a conservative measure, perform CLFLUSH on all pages to be mapped into
> > the TD before invoking the SEAMCALL TDH_MEM_PAGE_AUG. This ensures that no
> > dirty cache lines are written back later to clobber TD memory.
> >
> > Signed-off-by: Xiaoyao Li <xiaoyao.li@...el.com>
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
> > Signed-off-by: Yan Zhao <yan.y.zhao@...el.com>
> > ---
> > arch/x86/virt/vmx/tdx/tdx.c | 11 ++++++++++-
> > 1 file changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> > index f5e2a937c1e7..a66d501b5677 100644
> > --- a/arch/x86/virt/vmx/tdx/tdx.c
> > +++ b/arch/x86/virt/vmx/tdx/tdx.c
> > @@ -1595,9 +1595,18 @@ u64 tdh_mem_page_aug(struct tdx_td *td, u64 gpa, int level, struct page *page, u
> > .rdx = tdx_tdr_pa(td),
> > .r8 = page_to_phys(page),
> > };
> > + unsigned long nr_pages = 1 << (level * 9);
>
> PTE_SHIFT.
Yes. Thanks.
> > + struct folio *folio = page_folio(page);
> > + unsigned long idx = 0;
> > u64 ret;
> >
> > - tdx_clflush_page(page);
> > + if (!(level >= TDX_PS_4K && level < TDX_PS_NR) ||
>
> Do we even need this check?
Maybe not if tdh_mem_page_aug() trusts KVM :)
The consideration is to prevent a bogus level from making nr_pages huge and
triggering an excessive number of tdx_clflush_page() calls on any reckless
caller error.
> > + (folio_page_idx(folio, page) + nr_pages > folio_nr_pages(folio)))
> > + return -EINVAL;
> > +
> > + while (nr_pages--)
> > + tdx_clflush_page(nth_page(page, idx++));
> > +
> > ret = seamcall_ret(TDH_MEM_PAGE_AUG, &args);
> >
> > *ext_err1 = args.rcx;
> > --
> > 2.43.2
> >
>
> --
> Kiryl Shutsemau / Kirill A. Shutemov