Message-ID: <5555F581.2070203@linux.vnet.ibm.com>
Date: Fri, 15 May 2015 19:02:49 +0530
From: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To: "Shreyas B. Prabhu" <shreyas@...ux.vnet.ibm.com>,
akpm@...ux-foundation.org, rostedt@...dmis.org
CC: linux-kernel@...r.kernel.org, paulmck@...ux.vnet.ibm.com,
mingo@...hat.com
Subject: Re: [PATCH linux-next] tracing/mm: Use raw_smp_processor_id() instead
of smp_processor_id()
On 05/15/2015 06:57 PM, Shreyas B. Prabhu wrote:
> trace_mm_page_pcpu_drain, trace_kmem_cache_free and trace_mm_page_free
> can potentially be called from an offlined cpu. Since tracepoints use
> RCU, and RCU must not be used from offlined cpus, we have checks to
> filter out such calls.
>
> But these checks use smp_processor_id(), and since these trace calls
> can happen from preemptible sections, this throws a warning when
> running with CONFIG_DEBUG_PREEMPT set.
>
> Now consider task gets migrated after calling smp_processor_id()
> - From an online cpu to another online cpu - No impact
> - From an online cpu to an offline cpu - Should never happen
> - From an offline cpu to an online cpu - Once a cpu has been
> offlined, it returns to cpu_idle_loop(), discovers it's offline, and
> calls arch_cpu_idle_dead(). All this happens with preemption
> disabled, so this scenario too should never happen.
>
> Thus preemption has no impact on the validity of the condition, so
> use raw_smp_processor_id() so that the warnings are suppressed.
>
> Signed-off-by: Shreyas B. Prabhu <shreyas@...ux.vnet.ibm.com>
> ---
> include/trace/events/kmem.h | 34 +++++++++++++++++++++++++++++++---
> 1 file changed, 31 insertions(+), 3 deletions(-)
>
> diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
> index 6cd975f..f7554fd 100644
> --- a/include/trace/events/kmem.h
> +++ b/include/trace/events/kmem.h
> @@ -146,7 +146,16 @@ DEFINE_EVENT_CONDITION(kmem_free, kmem_cache_free,
>
> TP_ARGS(call_site, ptr),
>
> - TP_CONDITION(cpu_online(smp_processor_id()))
> + /*
> + * This trace can be potentially called from an offlined cpu.
> + * Since trace points use RCU and RCU should not be used from
> + * offline cpus, filter such calls out.
> + * While this trace can be called from a preemptable section,
> + * preemption has no impact on the condition since tasks can
> + * migrate only from online cpus to other online cpus. Thus it's
> + * safe to use raw_smp_processor_id().
> + */
> + TP_CONDITION(cpu_online(raw_smp_processor_id()))
> );
>
> TRACE_EVENT_CONDITION(mm_page_free,
> @@ -155,7 +164,17 @@ TRACE_EVENT_CONDITION(mm_page_free,
>
> TP_ARGS(page, order),
>
> - TP_CONDITION(cpu_online(smp_processor_id())),
> +
> + /*
> + * This trace can be potentially called from an offlined cpu.
> + * Since trace points use RCU and RCU should not be used from
> + * offline cpus, filter such calls out.
> + * While this trace can be called from a preemptable section,
> + * preemption has no impact on the condition since tasks can
> + * migrate only from online cpus to other online cpus. Thus it's
> + * safe to use raw_smp_processor_id().
> + */
> + TP_CONDITION(cpu_online(raw_smp_processor_id())),
>
> TP_STRUCT__entry(
> __field( unsigned long, pfn )
> @@ -263,7 +282,16 @@ TRACE_EVENT_CONDITION(mm_page_pcpu_drain,
>
> TP_ARGS(page, order, migratetype),
>
> - TP_CONDITION(cpu_online(smp_processor_id())),
> + /*
> + * This trace can be potentially called from an offlined cpu.
> + * Since trace points use RCU and RCU should not be used from
> + * offline cpus, filter such calls out.
> + * While this trace can be called from a preemptable section,
> + * preemption has no impact on the condition since tasks can
> + * migrate only from online cpus to other online cpus. Thus it's
> + * safe to use raw_smp_processor_id().
> + */
> + TP_CONDITION(cpu_online(raw_smp_processor_id())),
>
> TP_STRUCT__entry(
> __field( unsigned long, pfn )
Reviewed-by: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
>
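For readers following along outside a kernel tree, the behaviour the patch relies on can be modelled with a small userspace sketch. Everything below is a stand-in stub, not the real kernel API: cpu_online() and raw_smp_processor_id() here are toy fakes that only mimic the shape of the TP_CONDITION check, i.e. "emit the event only if the calling CPU is online".

```c
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS 4

/* Stand-in stubs, NOT the kernel APIs: model per-CPU online state
 * and the id of the CPU the caller currently runs on. */
static bool cpu_state_online[NR_CPUS] = { true, true, false, true };
static int current_cpu = 0;

static int raw_smp_processor_id(void)
{
	/* Like the kernel helper, this performs no preemption-safety
	 * check: it simply reports the current CPU id. */
	return current_cpu;
}

static bool cpu_online(int cpu)
{
	return cpu_state_online[cpu];
}

/* Models TP_CONDITION(cpu_online(raw_smp_processor_id())): the trace
 * event is emitted only when the calling CPU is online. */
static bool trace_condition_passes(void)
{
	return cpu_online(raw_smp_processor_id());
}
```

With this model, a call arriving from the offlined CPU 2 is filtered out, while a call from the online CPU 0 passes the condition, which is the filtering behaviour the patch preserves while silencing the DEBUG_PREEMPT warning.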
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/