Message-ID: <CABPqkBQXfyiTWN0FjZPfRokMTnp=+WdjkjT4OGk52MPf4vqKLA@mail.gmail.com>
Date:	Mon, 11 Jan 2016 11:43:40 -0800
From:	Stephane Eranian <eranian@...gle.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...nel.org>,
	Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Vince Weaver <vince@...ter.net>,
	Dmitry Vyukov <dvyukov@...gle.com>,
	Andi Kleen <andi@...stfloor.org>, Jiri Olsa <jolsa@...hat.com>
Subject: Re: [RFC][PATCH 02/12] perf: Fix cgroup event scheduling

Peter,

On Mon, Jan 11, 2016 at 8:25 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> There appears to be a problem in __perf_event_task_sched_in() wrt
> cgroup event scheduling.
>
> The normal event scheduling order is:
>
>         CPU pinned
>         Task pinned
>         CPU flexible
>         Task flexible
>
> And since perf_cgroup_sched*() only schedules the cpu context, we must
> call this _before_ adding the task events.
>
I understand this, but I am trying to understand why a cgroup system-wide
event would be treated differently from a regular system-wide event w.r.t.
task events here. If I have a cgroup flexible event and a task pinned
event, what happens?
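
For reference, the ordering above is the one perf_event_sched_in()
encodes (a rough sketch paraphrased from kernel/events/core.c of this
era, not the exact source):

	static void perf_event_sched_in(struct perf_cpu_context *cpuctx,
					struct perf_event_context *ctx,
					struct task_struct *task)
	{
		/* CPU pinned events get first pick of the counters */
		cpu_ctx_sched_in(cpuctx, EVENT_PINNED, task);
		/* then task pinned events */
		if (ctx)
			ctx_sched_in(ctx, cpuctx, EVENT_PINNED, task);
		/* flexible events fill whatever counters remain */
		cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);
		if (ctx)
			ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, task);
	}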

> Note: double check what happens on the ctx switch optimization where
> the task ctx isn't scheduled.
>
> Cc: Stephane Eranian <eranian@...gle.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  kernel/events/core.c |   17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)
>
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2804,6 +2804,16 @@ void __perf_event_task_sched_in(struct t
>         struct perf_event_context *ctx;
>         int ctxn;
>
> +       /*
> +        * If cgroup events exist on this CPU, then we need to check if we have
> +        * to switch in PMU state; cgroup events are system-wide mode only.
> +        *
> +        * Since cgroup events are CPU events, we must schedule these in before
> +        * we schedule in the task events.
> +        */
> +       if (atomic_read(this_cpu_ptr(&perf_cgroup_events)))
> +               perf_cgroup_sched_in(prev, task);
> +
>         for_each_task_context_nr(ctxn) {
>                 ctx = task->perf_event_ctxp[ctxn];
>                 if (likely(!ctx))
> @@ -2811,13 +2821,6 @@ void __perf_event_task_sched_in(struct t
>
>                 perf_event_context_sched_in(ctx, task);
>         }
> -       /*
> -        * if cgroup events exist on this CPU, then we need
> -        * to check if we have to switch in PMU state.
> -        * cgroup event are system-wide mode only
> -        */
> -       if (atomic_read(this_cpu_ptr(&perf_cgroup_events)))
> -               perf_cgroup_sched_in(prev, task);
>
>         if (atomic_read(&nr_switch_events))
>                 perf_event_switch(task, prev, true);
>
>
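
With the patch applied, the relevant part of the function reads as
follows (assembled from the hunks above for readability; the one
context line elided between the hunks is marked):

	void __perf_event_task_sched_in(struct task_struct *prev,
					struct task_struct *task)
	{
		struct perf_event_context *ctx;
		int ctxn;

		/*
		 * Cgroup events are CPU (system-wide) events, so they
		 * must be scheduled in before any task events.
		 */
		if (atomic_read(this_cpu_ptr(&perf_cgroup_events)))
			perf_cgroup_sched_in(prev, task);

		for_each_task_context_nr(ctxn) {
			ctx = task->perf_event_ctxp[ctxn];
			if (likely(!ctx))
				continue;	/* elided between hunks */

			perf_event_context_sched_in(ctx, task);
		}

		if (atomic_read(&nr_switch_events))
			perf_event_switch(task, prev, true);
		/* ... */
	}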
