Message-ID: <CAJZ5v0iaf8Own=NuHZuYfVpKPBM-ZYHf-SYEgwsjREg-XfuNbA@mail.gmail.com>
Date: Fri, 26 Apr 2024 19:55:40 +0200
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Zhang Rui <rui.zhang@...el.com>
Cc: rafael.j.wysocki@...el.com, linux-kernel@...r.kernel.org, 
	linux-pm@...r.kernel.org, srinivas.pandruvada@...el.com
Subject: Re: [PATCH V3 1/2] powercap: intel_rapl: Introduce APIs for PMU support

On Mon, Apr 22, 2024 at 6:21 PM Zhang Rui <rui.zhang@...el.com> wrote:
>
> Introduce two new APIs rapl_package_add_pmu()/rapl_package_remove_pmu().
>
> The RAPL driver can invoke these APIs to expose its supported energy
> counters via a perf PMU. The new RAPL PMU is fully compatible with current
> MSR RAPL PMU, including using the same PMU name and events
> name/id/unit/scale, etc.
>
> For example, use the command below
>  perf stat -e power/energy-pkg/ -e power/energy-ram/ FOO
> to get the energy consumption if power/energy-pkg/ and power/energy-ram/
> events are available in the "perf list" output.
>
> This does not introduce any conflict because TPMI RAPL is the only user
> of these APIs currently, and it never co-exists with MSR RAPL.
>
> Note that RAPL Packages can be probed/removed dynamically, and the
> events supported by each TPMI RAPL device can be different. Thus the
> RAPL PMU support is done on demand, which means
> 1. PMU is registered only if it is needed by a RAPL Package. PMU events
>    for unsupported counters are not exposed.
> 2. PMU is unregistered and re-registered when a new RAPL Package is probed
>    and supports new counters that are not supported by the current PMU.
>    For example, on a dual-package system using TPMI RAPL, it is possible
>    that Package 1 behaves as TPMI domain root and supports Psys domain.
>    In this case, register PMU without Psys event when probing Package 0,
>    and re-register the PMU with Psys event when probing Package 1.
> 3. PMU is unregistered when all registered RAPL Packages don't need PMU.
>
> Signed-off-by: Zhang Rui <rui.zhang@...el.com>
> ---
>  drivers/powercap/intel_rapl_common.c | 578 +++++++++++++++++++++++++++
>  include/linux/intel_rapl.h           |  32 ++
>  2 files changed, 610 insertions(+)
>
> diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
> index c4302caeb631..1fa45ed8ba0b 100644
> --- a/drivers/powercap/intel_rapl_common.c
> +++ b/drivers/powercap/intel_rapl_common.c
> @@ -15,6 +15,8 @@
>  #include <linux/list.h>
>  #include <linux/log2.h>
>  #include <linux/module.h>
> +#include <linux/nospec.h>
> +#include <linux/perf_event.h>
>  #include <linux/platform_device.h>
>  #include <linux/powercap.h>
>  #include <linux/processor.h>
> @@ -1507,6 +1509,582 @@ static int rapl_detect_domains(struct rapl_package *rp)
>         return 0;
>  }
>
> +#ifdef CONFIG_PERF_EVENTS
> +
> +/*
> + * Support for RAPL PMU
> + *
> + * Register a PMU if any of the registered RAPL Packages have the requirement
> + * of exposing its energy counters via Perf PMU.
> + *
> + * PMU Name:
> + *     power
> + *
> + * Events:
> + *     Name            Event id        RAPL Domain
> + *     energy_cores    0x01            RAPL_DOMAIN_PP0
> + *     energy_pkg      0x02            RAPL_DOMAIN_PACKAGE
> + *     energy_ram      0x03            RAPL_DOMAIN_DRAM
> + *     energy_gpu      0x04            RAPL_DOMAIN_PP1
> + *     energy_psys     0x05            RAPL_DOMAIN_PLATFORM
> + *
> + * Unit:
> + *     Joules
> + *
> + * Scale:
> + *     2.3283064365386962890625e-10
> + *     The same RAPL domain in different RAPL Packages may have different
> + *     energy units. Use 2.3283064365386962890625e-10 (2^-32) Joules as
> + *     the fixed unit for all energy counters, and convert each hardware
> + *     counter increment into N PMU event counter increments.
> + *
> + * This is fully compatible with the current MSR RAPL PMU. This means that
> + * userspace programs like turbostat can use the same code to handle RAPL Perf
> + * PMU, no matter which RAPL interface driver (MSR/TPMI, etc.) is
> + * running underneath on the platform.
> + *
> + * Note that RAPL Packages can be probed/removed dynamically, and the events
> + * supported by each TPMI RAPL device can be different. Thus the RAPL PMU
> + * support is done on demand, which means
> + * 1. PMU is registered only if it is needed by a RAPL Package. PMU events for
> + *    unsupported counters are not exposed.
> + * 2. PMU is unregistered and re-registered when a new RAPL Package is probed
> + *    and supports new counters that are not supported by the current PMU.
> + * 3. PMU is unregistered when all registered RAPL Packages don't need PMU.
> + */
> +
> +struct rapl_pmu {
> +       struct pmu pmu;                 /* Perf PMU structure */
> +       u64 timer_ms;                   /* Maximum expiration time to avoid counter overflow */
> +       unsigned long domain_map;       /* Events supported by current registered PMU */
> +       bool registered;                /* Whether the PMU has been registered or not */
> +};
> +
> +static struct rapl_pmu rapl_pmu;
> +
> +/* PMU helpers */
> +
> +static int get_pmu_cpu(struct rapl_package *rp)
> +{
> +       int cpu;
> +
> +       if (!rp->has_pmu)
> +               return nr_cpu_ids;
> +
> +       /* Only TPMI RAPL is supported for now */
> +       if (rp->priv->type != RAPL_IF_TPMI)
> +               return nr_cpu_ids;
> +
> +       /* TPMI RAPL uses any CPU in the package for PMU */
> +       for_each_online_cpu(cpu)
> +               if (topology_physical_package_id(cpu) == rp->id)
> +                       return cpu;
> +
> +       return nr_cpu_ids;
> +}
> +
> +static bool is_rp_pmu_cpu(struct rapl_package *rp, int cpu)
> +{
> +       if (!rp->has_pmu)
> +               return false;
> +
> +       /* Only TPMI RAPL is supported for now */
> +       if (rp->priv->type != RAPL_IF_TPMI)
> +               return nr_cpu_ids;

As per the comment, this should be false, shouldn't it?

> +
> +       /* TPMI RAPL uses any CPU in the package for PMU */
> +       return topology_physical_package_id(cpu) == rp->id;
> +}
> +
> +static struct rapl_package_pmu_data *event_to_pmu_data(struct perf_event *event)
> +{
> +       struct rapl_package *rp = event->pmu_private;
> +
> +       return &rp->pmu_data;
> +}
> +
> +/* PMU event callbacks */
> +
> +static u64 event_read_counter(struct perf_event *event)
> +{
> +       struct rapl_package *rp = event->pmu_private;
> +       u64 val;
> +       int ret;
> +
> +       /* Return 0 for unsupported events */
> +       if (event->hw.idx < 0)
> +               return 0;
> +
> +       ret = rapl_read_data_raw(&rp->domains[event->hw.idx], ENERGY_COUNTER, false, &val);
> +
> +       /* Return 0 for failed read */
> +       if (ret)
> +               return 0;
> +
> +       return val;
> +}
> +
> +static void __rapl_pmu_event_start(struct perf_event *event)
> +{
> +       struct rapl_package_pmu_data *data = event_to_pmu_data(event);
> +
> +       if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
> +               return;
> +
> +       event->hw.state = 0;
> +
> +       list_add_tail(&event->active_entry, &data->active_list);
> +
> +       local64_set(&event->hw.prev_count, event_read_counter(event));
> +       if (++data->n_active == 1)
> +               hrtimer_start(&data->hrtimer, data->timer_interval,
> +                             HRTIMER_MODE_REL_PINNED);
> +}
> +
> +static void rapl_pmu_event_start(struct perf_event *event, int mode)
> +{
> +       struct rapl_package_pmu_data *data = event_to_pmu_data(event);
> +       unsigned long flags;
> +
> +       raw_spin_lock_irqsave(&data->lock, flags);
> +       __rapl_pmu_event_start(event);
> +       raw_spin_unlock_irqrestore(&data->lock, flags);
> +}
> +
> +static u64 rapl_event_update(struct perf_event *event)
> +{
> +       struct hw_perf_event *hwc = &event->hw;
> +       struct rapl_package_pmu_data *data = event_to_pmu_data(event);
> +       u64 prev_raw_count, new_raw_count;
> +       s64 delta, sdelta;
> +       s64 tmp;
> +
> +       do {
> +               prev_raw_count = local64_read(&hwc->prev_count);
> +               new_raw_count = event_read_counter(event);
> +               tmp = local64_cmpxchg(&hwc->prev_count, prev_raw_count, new_raw_count);
> +       } while (tmp != prev_raw_count);

I think it is only safe to call this function when draining an event
that is going away, because otherwise the loop above may never
terminate, and the function is called under a spinlock.

I would add a comment (above the loop) explaining that this is about
draining, so the counter is expected to stop incrementing shortly.

The rest of the patch LGTM.

Thanks!
