Message-ID: <20200724105543.GV119549@hirez.programming.kicks-ass.net>
Date: Fri, 24 Jul 2020 12:55:43 +0200
From: peterz@...radead.org
To: kan.liang@...ux.intel.com
Cc: acme@...hat.com, mingo@...nel.org, linux-kernel@...r.kernel.org,
jolsa@...nel.org, eranian@...gle.com,
alexander.shishkin@...ux.intel.com, ak@...ux.intel.com,
like.xu@...ux.intel.com
Subject: Re: [PATCH V7 07/14] perf/core: Add a new PERF_EV_CAP_COEXIST event
capability
On Thu, Jul 23, 2020 at 10:11:10AM -0700, kan.liang@...ux.intel.com wrote:
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 3b22db08b6fb..93631e5389bf 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -576,9 +576,14 @@ typedef void (*perf_overflow_handler_t)(struct perf_event *,
> * PERF_EV_CAP_SOFTWARE: Is a software event.
> * PERF_EV_CAP_READ_ACTIVE_PKG: A CPU event (or cgroup event) that can be read
> * from any CPU in the package where it is active.
> + * PERF_EV_CAP_COEXIST: An event with this flag must coexist with other sibling
> + * events, which have the same flag. If any event with the flag is detached
> + * from the group, split the group into singleton events, and move the events
> + * with the flag to the unrecoverable ERROR state.
> */
> #define PERF_EV_CAP_SOFTWARE BIT(0)
> #define PERF_EV_CAP_READ_ACTIVE_PKG BIT(1)
> +#define PERF_EV_CAP_COEXIST BIT(2)
>
> #define SWEVENT_HLIST_BITS 8
> #define SWEVENT_HLIST_SIZE (1 << SWEVENT_HLIST_BITS)
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 7c436d705fbd..e35d549a356d 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2133,10 +2133,28 @@ static inline struct list_head *get_event_list(struct perf_event *event)
> return event->attr.pinned ? &ctx->pinned_active : &ctx->flexible_active;
> }
>
> +/*
> + * If the event has PERF_EV_CAP_COEXIST capability,
> + * schedule it out and move it into the ERROR state.
> + */
> +static inline void perf_remove_coexist_events(struct perf_event *event)
> +{
> + struct perf_event_context *ctx = event->ctx;
> + struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
> +
> + if (!(event->event_caps & PERF_EV_CAP_COEXIST))
> + return;
> +
> + event_sched_out(event, cpuctx, ctx);
> + perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
> +}
Ah, so the problem here is that ERROR is actually recoverable using
IOC_ENABLE. We don't want that either. Let me try and figure out if EXIT
would work.
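
(For context on the "recoverable" part: user space can simply re-arm an event
that the kernel parked in PERF_EVENT_STATE_ERROR by issuing
PERF_EVENT_IOC_ENABLE on its fd, which clears the ERROR state and schedules
the event back in. A minimal user-space sketch, not from the thread; fd is
assumed to come from an earlier perf_event_open() call:)

	#include <stdint.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/perf_event.h>

	static int reenable_if_errored(int fd)
	{
		uint64_t count;
		ssize_t n;

		/* A read() on an event in the ERROR state returns 0 (EOF). */
		n = read(fd, &count, sizeof(count));
		if (n == 0) {
			/*
			 * Re-arming the event clears ERROR and re-enables it,
			 * which is exactly the escape hatch the COEXIST
			 * semantics would like to rule out.
			 */
			return ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
		}
		return n < 0 ? -1 : 0;
	}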