Message-ID: <ZCG686t8tPDNPFbX@Air-de-Roger>
Date: Mon, 27 Mar 2023 17:49:07 +0200
From: Roger Pau Monné <roger.pau@...rix.com>
To: Juergen Gross <jgross@...e.com>
Cc: linux-kernel@...r.kernel.org,
Stefano Stabellini <sstabellini@...nel.org>,
xen-devel@...ts.xenproject.org,
"Rafael J. Wysocki" <rafael@...nel.org>,
Len Brown <lenb@...nel.org>,
Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>,
Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>,
Alex Chiang <achiang@...com>, linux-acpi@...r.kernel.org
Subject: Re: [PATCH v5] ACPI: processor: Fix evaluating _PDC method when
running as Xen dom0
On Mon, Mar 27, 2023 at 03:58:26PM +0200, Juergen Gross wrote:
> On 21.03.23 15:19, Roger Pau Monne wrote:
> > In ACPI systems, the OS can direct power management, as opposed to the
> > firmware. This OS-directed Power Management is called OSPM. Part of
> > telling the firmware that the OS is going to direct power management is
> > making ACPI "_PDC" (Processor Driver Capabilities) calls. These _PDC
> > methods must be evaluated for every processor object. If these _PDC
> > calls are not completed for every processor, it can lead to
> > inconsistency and later failures in things like the CPU frequency
> > driver.
> >
> > In a Xen system, the dom0 kernel is responsible for system-wide power
> > management. The dom0 kernel is in charge of OSPM. However, the
> > number of CPUs available to dom0 can be different than the number of
> > CPUs physically present on the system.
> >
> > This leads to a problem: the dom0 kernel needs to evaluate _PDC for
> > all the processors, but it can't always see them.
> >
> > In dom0 kernels, ignore the existing ACPI method for determining if a
> > processor is physically present because it might not be accurate.
> > Instead, ask the hypervisor for this information.
> >
> > Fix this by introducing a custom function to use when running as Xen
> > dom0 in order to check whether a processor object matches a CPU that's
> > online. Such checking is done using the existing information fetched
> > by the Xen pCPU subsystem, extending it to also store the ACPI ID.
> >
> > This ensures that the _PDC method gets evaluated for all physically online
> > CPUs, regardless of the number of CPUs made available to dom0.
> >
> > Fixes: 5d554a7bb064 ('ACPI: processor: add internal processor_physically_present()')
> > Signed-off-by: Roger Pau Monné <roger.pau@...rix.com>
> > ---
> > Changes since v4:
> > - Move definition/declaration of xen_processor_present() to different
> > header.
> > - Fold subject edit.
> >
> > Changes since v3:
> > - Protect xen_processor_present() definition with CONFIG_ACPI.
> >
> > Changes since v2:
> > - Extend and use the existing pcpu functionality.
> >
> > Changes since v1:
> > - Reword commit message.
> > ---
> > drivers/acpi/processor_pdc.c | 11 +++++++++++
> > drivers/xen/pcpu.c | 20 ++++++++++++++++++++
> > include/xen/xen.h | 10 ++++++++++
> > 3 files changed, 41 insertions(+)
> >
> > diff --git a/drivers/acpi/processor_pdc.c b/drivers/acpi/processor_pdc.c
> > index 8c3f82c9fff3..18fb04523f93 100644
> > --- a/drivers/acpi/processor_pdc.c
> > +++ b/drivers/acpi/processor_pdc.c
> > @@ -14,6 +14,8 @@
> > #include <linux/acpi.h>
> > #include <acpi/processor.h>
> > +#include <xen/xen.h>
> > +
> > #include "internal.h"
> > static bool __init processor_physically_present(acpi_handle handle)
> > @@ -47,6 +49,15 @@ static bool __init processor_physically_present(acpi_handle handle)
> > return false;
> > }
> > + if (xen_initial_domain())
> > + /*
> > + * When running as a Xen dom0 the number of processors Linux
> > + * sees can be different from the real number of processors on
> > + * the system, and we still need to execute _PDC for all of
> > + * them.
> > + */
> > + return xen_processor_present(acpi_id);
> > +
> > type = (acpi_type == ACPI_TYPE_DEVICE) ? 1 : 0;
> > cpuid = acpi_get_cpuid(handle, type, acpi_id);
> > diff --git a/drivers/xen/pcpu.c b/drivers/xen/pcpu.c
> > index fd3a644b0855..1814f8762f54 100644
> > --- a/drivers/xen/pcpu.c
> > +++ b/drivers/xen/pcpu.c
> > @@ -58,6 +58,7 @@ struct pcpu {
> > struct list_head list;
> > struct device dev;
> > uint32_t cpu_id;
> > + uint32_t acpi_id;
> > uint32_t flags;
> > };
> > @@ -249,6 +250,7 @@ static struct pcpu *create_and_register_pcpu(struct xenpf_pcpuinfo *info)
> > INIT_LIST_HEAD(&pcpu->list);
> > pcpu->cpu_id = info->xen_cpuid;
> > + pcpu->acpi_id = info->acpi_id;
> > pcpu->flags = info->flags;
> > /* Need hold on xen_pcpu_lock before pcpu list manipulations */
> > @@ -381,3 +383,21 @@ static int __init xen_pcpu_init(void)
> > return ret;
> > }
> > arch_initcall(xen_pcpu_init);
> > +
> > +#ifdef CONFIG_ACPI
> > +bool __init xen_processor_present(uint32_t acpi_id)
> > +{
> > + struct pcpu *pcpu;
> > + bool online = false;
> > +
> > + mutex_lock(&xen_pcpu_lock);
> > + list_for_each_entry(pcpu, &xen_pcpus, list)
> > + if (pcpu->acpi_id == acpi_id) {
> > + online = pcpu->flags & XEN_PCPU_FLAGS_ONLINE;
> > + break;
> > + }
> > + mutex_unlock(&xen_pcpu_lock);
> > +
> > + return online;
> > +}
> > +#endif
> > diff --git a/include/xen/xen.h b/include/xen/xen.h
> > index 7adf59837c25..4410e74f3eb5 100644
> > --- a/include/xen/xen.h
> > +++ b/include/xen/xen.h
> > @@ -71,4 +71,14 @@ static inline void xen_free_unpopulated_pages(unsigned int nr_pages,
> > }
> > #endif
> > +#if defined(CONFIG_XEN_DOM0) && defined(CONFIG_ACPI) && defined(CONFIG_X86)
> > +bool __init xen_processor_present(uint32_t acpi_id);
> > +#else
> > +static inline bool xen_processor_present(uint32_t acpi_id)
> > +{
> > + BUG();
>
> Is this really a good idea?
>
> Arm64 supports ACPI, too, as well as XEN_DOM0. I think you either need to
> provide a stub for that case, too, or you need to make this stub non-fatal
> for callers (I guess returning false is fine, as currently there are no
> hypercalls on Arm which would allow controlling physical CPUs based on
> ACPI-Id).
Currently CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC is only selected for x86 and
ia64, so I assumed that if we ever needed this for Arm someone would
have to write a proper Xen handler for it.
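That said, if you would rather have the stub be non-fatal anyway,
something along these lines should work (untested sketch, keeping the
prototype from the patch and returning false as you suggest):

#if defined(CONFIG_XEN_DOM0) && defined(CONFIG_ACPI) && defined(CONFIG_X86)
bool __init xen_processor_present(uint32_t acpi_id);
#else
static inline bool xen_processor_present(uint32_t acpi_id)
{
	/*
	 * No physical CPU information is fetched from Xen on this
	 * configuration, so report the processor as not present
	 * instead of crashing the caller.
	 */
	return false;
}
#endif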
Thanks, Roger.