Message-ID: <642dcb607a1a0_21a829483@dwillia2-xfh.jf.intel.com.notmuch>
Date: Wed, 5 Apr 2023 12:26:24 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Jonathan Cameron <Jonathan.Cameron@...wei.com>,
Dan Williams <dan.j.williams@...el.com>
CC: Liang Kan <kan.liang@...ux.intel.com>, <linux-cxl@...r.kernel.org>,
<peterz@...radead.org>, <mingo@...hat.com>, <acme@...nel.org>,
<mark.rutland@....com>, <will@...nel.org>, <linuxarm@...wei.com>,
<linux-perf-users@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
Davidlohr Bueso <dave@...olabs.net>,
Dave Jiang <dave.jiang@...el.com>
Subject: Re: [PATCH v4 4/5] cxl: CXL Performance Monitoring Unit driver
Jonathan Cameron wrote:
> Hi Dan,
>
> Thanks for taking a look.
>
> > > Development done with QEMU model which will be posted shortly.
> >
> > Per the comment on the cover letter, is there a sense that the set of
> > counters supported in QEMU and this patchset is representative of a
> > "common" set, even if that is by convention and not specification? Just
> > trying to get a sense of how many real-world CXL PMU instances this
> > might cover.
>
> Good question. Event-wise, the driver supports all the spec-defined events
> (the vendor-defined ones "work" but are not described in this initial
> version, so you have to know the magic numbers). In discussion with Kan I
> think we've established what discoverability will look like, but it is a
> substantial addition so I was planning that for a follow-up series.
>
> Feature-wise, things I expect in hardware that we don't 'yet' support include:
> 1) Implementations without freeze. These require more complex handling of
>    start / stop (and will never work as well, as you can't start sets of
>    counters with one register write). Easy to see how to do this, but
>    postponed for a follow-up set.
> 2) Free-running counters. Chances are we'll see these for vendor-defined
>    events, as they are a cheap way of getting support for stuff that is
>    really debug related. Again, well understood how to do this, but it adds
>    more complexity to the driver so is postponed for future work.
>
> So after this series merges, I'd expect to see 3 follow-ups to add the
> features above, probably in this order:
>
> 1) Discoverability.
> 2) Free running counters
> 3) Implementations without freeze.
I feel like this one might be an "only if a device shows up" situation,
but the others seem reasonable to plan in advance of practical hardware
examples. I think it's ok for Linux to be prickly towards hardware
implementations that make our lives harder.
> > One thing I would have liked to see is a sample perf command line and
> > output.
>
> Sure. I'll add one here and consider improving the docs on this.
I did eventually find the Documentation, but no harm putting an example
in here too...
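Something along these lines, say (the PMU instance and event names here
are illustrative guesses, not necessarily what the driver ends up
exposing):

	$ perf stat -a -e cxl_pmu_mem0.0/clock_ticks/ sleep 1

...so readers of the changelog can see the intended usage at a glance.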
> > One concern I have is how usable DPA-based perf events are going to be
> > for end users. Is there a concept of a synthetic event that can turn
> > data gathered from all the devices in an interleave set into
> > HPA-relative data? Or do you expect it to be the responsibility of the
> > perf tool to grow that support in userspace?
>
> Given we need to fuse counters from multiple devices in userspace,
> I'd expect us to do cross-device aggregation in the perf tool. Whether
> such aggregation makes sense will be very system dependent (if you can't
> monitor what you need in the host or at the HB level).
Ok, makes sense.
> > > drivers/cxl/Kconfig | 13 +
> > > drivers/cxl/Makefile | 1 +
> > > drivers/cxl/cpmu.c | 940 +++++++++++++++++++++++++++++++++++++++++++
> >
> > Yeah, this is a bunch of code and ABI that is more relevant to drivers/perf/
> > than drivers/cxl/, so this wants to move to drivers/perf/cxl.c.
> For consistency with other drivers in there it will be drivers/perf/cxl_pmu.c
> with cxl_pmu.ko as module.
Sure.
> It's currently a little messy to get the includes from ../cxl/, but not
> bad enough that I think it's worth ripping those headers apart to move
> some of the content to include/linux/cxl, so I'll go with:
> #include "../cxl/cxlpci.h" (needed for the DVSEC ID)
> #include "../cxl/pmu.h" (with stuff drivers/cxl doesn't need pushed into cxl_pmu.c)
> #include "../cxl/cxl.h" (for cxl_driver)
>
> Similar to what is done in drivers/dax/cxl.c.
Works for me, or:
CFLAGS_cxl_pmu.o := -I$(srctree)/drivers/cxl/
...in drivers/perf/Makefile.
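...in drivers/perf/Makefile, which would let the new file use plain:

	#include "cxlpci.h"

...at the cost of hiding the cross-directory dependency from casual
readers of the source.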
[..]
> > Is it worthwhile to maintain the TODO list in the code? I just fear this
> > getting stale over time.
>
> There is already some text covering the most important of these in the patch
> description so I'll drop it from here. Patch descriptions shouldn't ever
> be stale (says the person who just deleted the garbage that was stale in
> a previous patch... :( )
Hey, I resemble this remark, it happens.
>
> > > +/* CXL rev 3.0 Table 13-5 Events under CXL Vendor ID */
> > > +#define CPMU_GID_CLOCK_TICKS 0x00
> > > +#define CPMU_GID_D2H_REQ 0x0010
> > > +#define CPMU_GID_D2H_RSP 0x0011
> > > +#define CPMU_GID_H2D_REQ 0x0012
> > > +#define CPMU_GID_H2D_RSP 0x0013
> > > +#define CPMU_GID_CACHE_DATA 0x0014
> > > +#define CPMU_GID_M2S_REQ 0x0020
> > > +#define CPMU_GID_M2S_RWD 0x0021
> > > +#define CPMU_GID_M2S_BIRSP 0x0022
> > > +#define CPMU_GID_S2M_BISNP 0x0023
> > > +#define CPMU_GID_S2M_NDR 0x0024
> > > +#define CPMU_GID_S2M_DRS 0x0025
> > > +#define CPMU_GID_DDR 0x8000
> >
> > Last note about the naming quibble, I notice that CPMU exists in other
> > places in the kernel, but CXL_PMU does not, so there is a small
> > grep'ability win as well.
>
> You mean unlike CXL in drivers/misc/cxl :)
Touché.
> I was going to leave these definitions alone, but fair enough, I'll do
> these as well. Anyone with a narrow terminal can blame you for the line
> wraps ;).
It was really the devm_cxl_add_cpmu() that got me into this naming
tangent; I admit it's a personal preference to remove redundancy when
pathnames or other prefixes already cover it.
> > > +#define CPMU_MAX_COUNTERS 64
> > > +struct cpmu_info {
> > > + struct pmu pmu;
> > > + void __iomem *base;
> > > + struct perf_event **hw_events;
> > > + struct list_head events_configurable;
> > > + struct list_head events_fixed;
> >
> > Linked lists? Didn't xarray kill the linked list?
>
> No. Kan raised something similar in an earlier review and I couldn't come
> up with a way that worked (for the configurable ones anyway).
>
> How would you index these given they are defined by a full 64
> bits made up of VID, GID and MASK + we need to match in some
> cases by exact mask (for events_fixed) and sometimes as a subset
> (for events_configurable)? I see below you say we could make
> this dependent on 64BIT to handle the fixed counters list but
> that seems ugly to me.
>
> There might be some magic data structure I'm unaware of that we could
> use, but why bother? The expectation is these will have only a small
> number of elements, so a list is fine.
>
> Looking at it again, the naming is a bit misleading. These aren't
> generally individual events but instead the "event capabilities".
> Let me rename them to make that clear: event_caps_configurable,
> event_caps_fixed, and rename struct cxl_pmu_event to cxl_pmu_ev_cap.
Ah, I indeed was thinking that these were lists of events, not
capabilities.
> As such, the absolute maximum length of the sum of those two lists is
> 32. Most likely they'll be a lot shorter than that.
Even without the (too?) fancy hack to try to make the (vid, gid, mask)
tuple the xarray lookup key, you can still replace a doubly-linked list
with an xarray indexed by the pointer value:
xa_insert(&info->events_configurable, (unsigned long)ev, ev, GFP_KERNEL);
...and then walk it with xa_for_each(). Given the small list sizes the
benefit would be more for readability, in the sense that I can look at
xa_for_each() and not worry about the locking context, which is not true
of list_for_each_entry(). I won't push hard on this since these lists are
small and not going to grow beyond 32.
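For completeness, the walk would look something like this (untested
sketch, using your proposed cxl_pmu_ev_cap naming for the entry type):

	unsigned long idx;
	struct cxl_pmu_ev_cap *ev;

	/* iterate every capability previously stored with xa_insert() */
	xa_for_each(&info->events_configurable, idx, ev) {
		/* match on VID / GID and mask-subset as before */
	}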
> > > + DECLARE_BITMAP(used_counter_bm, CPMU_MAX_COUNTERS);
> > > + DECLARE_BITMAP(conf_counter_bm, CPMU_MAX_COUNTERS);
> > > + u16 counter_width;
> > > + u8 num_counters;
> > > + u8 num_event_capabilities;
> > > + int on_cpu;
> > > + struct hlist_node node;
> > > + bool freeze_for_enable;
> > > + bool filter_hdm;
> > > + int irq;
> > > +};
> > > +
> > > +#define pmu_to_cpmu_info(_pmu) container_of(_pmu, struct cpmu_info, pmu)
> > > +
> > > +/*
> > > + * All CPMU counters are discoverable via the Event Capabilities Registers.
> > > + * Each Event Capability register contains a VID / GroupID.
> > > + * A counter may then count any combination (by summing) of events in
> > > + * that group which are in the Supported Events Bitmask.
> > > + * However, there are some complexities to the scheme.
> > > + * - Fixed function counters refer to an Event Capabilities register.
> > > + * That event capability register is not then used for Configurable
> > > + * counters.
> > > + */
> > > +static int cpmu_parse_caps(struct device *dev, struct cpmu_info *info)
> > > +{
> > > + DECLARE_BITMAP(fixed_counter_event_cap_bm, 32) = {0};
> >
> > An 'unsigned long' is always at least 32 bits, and:
> >
> > unsigned long fixed_counter_event_cap_bm = 0;
> >
> > ...would not have interrupted my reading of the code.
>
> Hmm. I'm in two minds about this. Given we are accessing it with
> bitmap-walking macros, replacing this bitmap with an unsigned long
> (unlike the bigger ones) requires a bunch of &fixed_counter_event_cap_bm,
> so I think it is messier to change.
>
> Don't feel that strongly though, so I've made the change.
If there were a BITMAP() macro that did the declaration and init,
similar to LIST_HEAD() then I think I would not have reacted. It's really
the "DECLARE_BITMAP() = { 0 }" that bothered me.
[..]
> > > +static void cpmu_event_start(struct perf_event *event, int flags)
> > > +{
> > > + struct cpmu_info *info = pmu_to_cpmu_info(event->pmu);
> > > + struct hw_perf_event *hwc = &event->hw;
> > > + void __iomem *base = info->base;
> > > + u64 cfg;
> > > +
> > > + if (WARN_ON_ONCE(!(hwc->state & PERF_HES_STOPPED)))
> > > + return;
> > > +
> > > + WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE));
> >
> > Maybe a comment about why these warns will likely never fire in practice?
>
> Hmm. Good question (I cut and pasted these without thinking :)
>
> start is only ever called:
> a) from cxl_pmu_event_add(), where these flags are set directly
> b) from a few places in kernel/events/core.c:
>    __perf_event_stop() (with restart set) and perf_adjust_period(),
>    where pmu->stop() is called with PERF_EF_UPDATE and we set
>    the flags in our callback.
>
>    perf_adjust_freq_unthr_context():
>    one path calls pmu->stop() as above; in the other path
>    perf_pmu_disable() has just been called. That is the only path I can
>    find that might leave these two flags not set, but I am fairly sure
>    we can't trigger it because (in common with most drivers in
>    drivers/perf) we aren't counting interrupts, which is necessary to
>    trigger this path.
>
> If anyone more familiar with perf can confirm / correct my logic
> it would be much appreciated.
>
> For now I'll just add a comment that we should only reach here via the
> stop call or with the state explicitly set, and hence these flags should
> be set appropriately.
That works for me; maybe a check_hw_perf_event_state() helper could
centralize these common assertions in the perf core?
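Something like (a sketch of what I mean, not an existing perf API):

	/* Assert the state bits that pmu->start() implementations expect. */
	static inline void check_hw_perf_event_state(struct hw_perf_event *hwc)
	{
		WARN_ON_ONCE(!(hwc->state & PERF_HES_STOPPED));
		WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE));
	}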
[..]
> > > +static int cpmu_online_cpu(unsigned int cpu, struct hlist_node *node)
> > > +{
> > > + struct cpmu_info *info = hlist_entry_safe(node, struct cpmu_info, node);
> > > +
> > > + if (info->on_cpu != -1)
> > > + return 0;
> > > +
> > > + info->on_cpu = cpu;
> > > + WARN_ON(irq_set_affinity(info->irq, cpumask_of(cpu)));
> >
>
> > Is this really a kernel-crashable scenario for folks that have
> > panic_on_warn() set? If it's a "should never happen" type situation then
> > maybe a comment, otherwise pr_warn().
>
> Will see if I can come up with a comment beyond "shouldn't happen", as
> if a hotplug notifier says a CPU is online it had better still be online
> within that notifier callback.
>
> I think the key here is that these are called with the cpuhp lock held
> and as such there is no race, so we should always be able to set the irq
> affinity appropriately.
I have started to make sure all my WARN_ONs are sane after Greg pointed
out that panic_on_warn() has users.