Message-ID: <20210126180948.mytpoenruhx4g43u@liuwe-devbox-debian-v2>
Date: Tue, 26 Jan 2021 18:09:48 +0000
From: Wei Liu <wei.liu@...nel.org>
To: Michael Kelley <mikelley@...rosoft.com>
Cc: Wei Liu <wei.liu@...nel.org>,
Linux on Hyper-V List <linux-hyperv@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
Linux Kernel List <linux-kernel@...r.kernel.org>,
Vineeth Pillai <viremana@...ux.microsoft.com>,
Sunil Muthuswamy <sunilmut@...rosoft.com>,
Nuno Das Neves <nunodasneves@...ux.microsoft.com>,
"pasha.tatashin@...een.com" <pasha.tatashin@...een.com>,
Lillian Grassin-Drake <Lillian.GrassinDrake@...rosoft.com>,
KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH v5 06/16] x86/hyperv: allocate output arg pages if
required
On Tue, Jan 26, 2021 at 12:41:05AM +0000, Michael Kelley wrote:
> From: Wei Liu <wei.liu@...nel.org> Sent: Wednesday, January 20, 2021 4:01 AM
> >
> > When Linux runs as the root partition, it will need to make hypercalls
> > which return data from the hypervisor.
> >
> > Allocate pages for storing results when Linux runs as the root
> > partition.
> >
> > Signed-off-by: Lillian Grassin-Drake <ligrassi@...rosoft.com>
> > Co-Developed-by: Lillian Grassin-Drake <ligrassi@...rosoft.com>
> > Signed-off-by: Wei Liu <wei.liu@...nel.org>
> > ---
> > v3: Fix hv_cpu_die to use free_pages.
> > v2: Address Vitaly's comments
> > ---
> > arch/x86/hyperv/hv_init.c | 35 ++++++++++++++++++++++++++++-----
> > arch/x86/include/asm/mshyperv.h | 1 +
> > 2 files changed, 31 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
> > index e04d90af4c27..6f4cb40e53fe 100644
> > --- a/arch/x86/hyperv/hv_init.c
> > +++ b/arch/x86/hyperv/hv_init.c
> > @@ -41,6 +41,9 @@ EXPORT_SYMBOL_GPL(hv_vp_assist_page);
> > void __percpu **hyperv_pcpu_input_arg;
> > EXPORT_SYMBOL_GPL(hyperv_pcpu_input_arg);
> >
> > +void __percpu **hyperv_pcpu_output_arg;
> > +EXPORT_SYMBOL_GPL(hyperv_pcpu_output_arg);
> > +
> > u32 hv_max_vp_index;
> > EXPORT_SYMBOL_GPL(hv_max_vp_index);
> >
> > @@ -73,12 +76,19 @@ static int hv_cpu_init(unsigned int cpu)
> > void **input_arg;
> > struct page *pg;
> >
> > - input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
> > /* hv_cpu_init() can be called with IRQs disabled from hv_resume() */
> > - pg = alloc_page(irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL);
> > +	pg = alloc_pages(irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL, hv_root_partition ? 1 : 0);
> > if (unlikely(!pg))
> > return -ENOMEM;
> > +
> > + input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
> > *input_arg = page_address(pg);
> > + if (hv_root_partition) {
> > + void **output_arg;
> > +
> > + output_arg = (void **)this_cpu_ptr(hyperv_pcpu_output_arg);
> > + *output_arg = page_address(pg + 1);
> > + }
> >
> > hv_get_vp_index(msr_vp_index);
> >
> > @@ -205,14 +215,23 @@ static int hv_cpu_die(unsigned int cpu)
> > unsigned int new_cpu;
> > unsigned long flags;
> > void **input_arg;
> > - void *input_pg = NULL;
> > + void *pg;
> >
> > local_irq_save(flags);
> > input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
> > - input_pg = *input_arg;
> > + pg = *input_arg;
> > *input_arg = NULL;
> > +
> > + if (hv_root_partition) {
> > + void **output_arg;
> > +
> > + output_arg = (void **)this_cpu_ptr(hyperv_pcpu_output_arg);
> > + *output_arg = NULL;
> > + }
> > +
> > local_irq_restore(flags);
> > - free_page((unsigned long)input_pg);
> > +
> > + free_pages((unsigned long)pg, hv_root_partition ? 1 : 0);
> >
> > if (hv_vp_assist_page && hv_vp_assist_page[cpu])
> > wrmsrl(HV_X64_MSR_VP_ASSIST_PAGE, 0);
> > @@ -346,6 +365,12 @@ void __init hyperv_init(void)
> >
> > BUG_ON(hyperv_pcpu_input_arg == NULL);
> >
> > + /* Allocate the per-CPU state for output arg for root */
> > + if (hv_root_partition) {
> > + hyperv_pcpu_output_arg = alloc_percpu(void *);
> > + BUG_ON(hyperv_pcpu_output_arg == NULL);
> > + }
> > +
> > /* Allocate percpu VP index */
> > hv_vp_index = kmalloc_array(num_possible_cpus(), sizeof(*hv_vp_index),
> > GFP_KERNEL);
> > diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
> > index ac2b0d110f03..62d9390f1ddf 100644
> > --- a/arch/x86/include/asm/mshyperv.h
> > +++ b/arch/x86/include/asm/mshyperv.h
> > @@ -76,6 +76,7 @@ static inline void hv_disable_stimer0_percpu_irq(int irq) {}
> > #if IS_ENABLED(CONFIG_HYPERV)
> > extern void *hv_hypercall_pg;
> > extern void __percpu **hyperv_pcpu_input_arg;
> > +extern void __percpu **hyperv_pcpu_output_arg;
> >
> > static inline u64 hv_do_hypercall(u64 control, void *input, void *output)
> > {
> > --
> > 2.20.1
>
> I think this all works OK. But a meta question: Do we need a separate
> per-cpu output argument page? From the Hyper-V hypercall standpoint, I
> don't think input and output args need to be in separate pages.

That's correct. They don't have to be in separate pages.

> They both just need to not cross a page boundary. As long as we don't
> have a hypercall where the sum of the sizes of the input and output
> args exceeds a page, we could just have a single page, and split it up
> in any manner that works for the particular hypercall.
>
There is one more requirement: the pointers must be 8-byte aligned. That
means we may need to explicitly pad the input to find a properly aligned
spot for the output. Doing that by hand at every call site quickly
becomes tedious, so we would need to provide a macro that does the
calculation correctly.
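For instance (purely illustrative -- none of these names appear in the
patch), such a helper could look like:

```c
#include <stddef.h>

/* Hypothetical helper, same idea as the kernel's ALIGN() macro:
 * round an offset up to the next 8-byte boundary. */
#define HV_ALIGN8(x) (((x) + 7UL) & ~7UL)

/*
 * Given the size of the input argument, return the offset within the
 * shared page at which an 8-byte-aligned output argument could start.
 */
static size_t hv_output_offset(size_t input_size)
{
	return HV_ALIGN8(input_size);
}
```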
Another consideration is hypercalls that take variable-length input /
output. Admittedly I haven't seen one that takes variable-length
arguments and needs to do input and output at the same time, but I
wouldn't want to paint ourselves into a corner now, because sizing
variable-length input and output within a single page can be
non-trivial.

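As a sketch of why that sizing matters (names hypothetical, page size
hard-coded for illustration), a shared-page scheme would need a fit
check along these lines:

```c
#include <stddef.h>

#define HV_PAGE_SIZE 4096UL
/* Round up to the next 8-byte boundary (same idea as the kernel's ALIGN()). */
#define HV_ALIGN8(x) (((x) + 7UL) & ~7UL)

/*
 * Return nonzero if a variable-length input of 'in' bytes and an output
 * of 'out' bytes, with the output 8-byte aligned after the input, both
 * fit within a single shared page.
 */
static int hv_args_fit_one_page(size_t in, size_t out)
{
	size_t off = HV_ALIGN8(in);

	return off <= HV_PAGE_SIZE && out <= HV_PAGE_SIZE - off;
}
```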
Wei.
> Thoughts?
>
> Michael
>
>