Message-ID: <20130704124536.GK23916@twins.programming.kicks-ass.net>
Date: Thu, 4 Jul 2013 14:45:36 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Yan, Zheng" <zheng.z.yan@...el.com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org, eranian@...gle.com,
andi@...stfloor.org
Subject: Re: [PATCH v2 4/7] perf, x86: Save/restore LBR stack during context
switch
On Mon, Jul 01, 2013 at 03:23:04PM +0800, Yan, Zheng wrote:
> @@ -2488,25 +2508,31 @@ static void perf_branch_stack_sched_in(struct task_struct *prev,
>
> list_for_each_entry_rcu(pmu, &pmus, entry) {
> cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
> + task_ctx = cpuctx->task_ctx;
>
> /*
> - * check if the context has at least one
> - * event using PERF_SAMPLE_BRANCH_STACK
> + * force flush the branch stack if there are cpu-wide events
> + * using PERF_SAMPLE_BRANCH_STACK
> + *
> + * save/restore the branch stack if the task context has
> + * at least one event using PERF_SAMPLE_BRANCH_STACK
> */
> - if (cpuctx->ctx.nr_branch_stack > 0
> - && pmu->flush_branch_stack) {
> -
> + bool force_flush = cpuctx->ctx.nr_branch_stack > 0;
> + if (pmu->branch_stack_sched &&
> + (force_flush ||
> + (task_ctx && task_ctx->nr_branch_stack > 0))) {
> pmu = cpuctx->ctx.pmu;
>
> - perf_ctx_lock(cpuctx, cpuctx->task_ctx);
> + perf_ctx_lock(cpuctx, task_ctx);
>
> perf_pmu_disable(pmu);
>
> - pmu->flush_branch_stack();
> + pmu->branch_stack_sched(task_ctx,
> + sched_in, force_flush);
>
> perf_pmu_enable(pmu);
>
> - perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
> + perf_ctx_unlock(cpuctx, task_ctx);
> }
> }
>
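IIUC the hunk boils down to a new pmu method along the lines of
(reconstructing the signature from the call site above, so take it
with a grain of salt):

	void (*branch_stack_sched)(struct perf_event_context *ctx,
				   bool sched_in, bool force_flush);
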
I never really liked this; and yes, I know I wrote part of it. Is there
any way we can get rid of it and do this 'properly' through the events
that get scheduled?
After all, the LBR usage is through the events, so scheduling the events
should also manage the LBR state.
What is missing for that to work?
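
Something like the below is what I have in mind -- a rough sketch only,
not against any tree; __intel_pmu_lbr_save()/__intel_pmu_lbr_restore()
and the per-context LBR storage they imply don't exist, they're made-up
names for illustration:

	/*
	 * Sketch: let the first LBR user scheduled in for a task
	 * restore that task's saved LBR stack, and the last user
	 * scheduled out save it.  Then the event scheduling paths
	 * carry the state and the extra context-switch pass in
	 * perf_branch_stack_sched_in() goes away entirely.
	 */
	static void intel_pmu_lbr_enable(struct perf_event *event)
	{
		struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);

		if (!x86_pmu.lbr_nr)
			return;

		cpuc->br_sel = event->hw.branch_reg.reg;

		/* hypothetical: restore the incoming task's LBR stack */
		if (!cpuc->lbr_users && event->ctx->task)
			__intel_pmu_lbr_restore(event->ctx);

		cpuc->lbr_users++;
	}

	static void intel_pmu_lbr_disable(struct perf_event *event)
	{
		struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);

		if (!x86_pmu.lbr_nr)
			return;

		cpuc->lbr_users--;
		WARN_ON_ONCE(cpuc->lbr_users < 0);

		/* hypothetical: stash the LBR stack with the task context */
		if (!cpuc->lbr_users && event->ctx->task)
			__intel_pmu_lbr_save(event->ctx);
	}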