Date:	Wed, 5 May 2010 18:57:34 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Cyrill Gorcunov <gorcunov@...nvz.org>
Cc:	Ingo Molnar <mingo@...e.hu>, LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH -tip] x86,perf: P4 PMU -- protect sensitive procedures
	from preemption

On Wed, May 05, 2010 at 07:07:40PM +0400, Cyrill Gorcunov wrote:
> Steven reported
> |
> | I'm getting:
> |
> | Pid: 3477, comm: perf Not tainted 2.6.34-rc6 #2727
> | Call Trace:
> |  [<ffffffff811c7565>] debug_smp_processor_id+0xd5/0xf0
> |  [<ffffffff81019874>] p4_hw_config+0x2b/0x15c
> |  [<ffffffff8107acbc>] ? trace_hardirqs_on_caller+0x12b/0x14f
> |  [<ffffffff81019143>] hw_perf_event_init+0x468/0x7be
> |  [<ffffffff810782fd>] ? debug_mutex_init+0x31/0x3c
> |  [<ffffffff810c68b2>] T.850+0x273/0x42e
> |  [<ffffffff810c6cab>] sys_perf_event_open+0x23e/0x3f1
> |  [<ffffffff81009e6a>] ? sysret_check+0x2e/0x69
> |  [<ffffffff81009e32>] system_call_fastpath+0x16/0x1b
> |
> | When running perf record in latest tip/perf/core
> |
> 
> Because P4 counters are shared between HT threads, we artificially
> divide the whole set of counters into two non-intersecting subsets,
> and while we're borrowing counters from these subsets we must not
> be preempted. So use a get_cpu/put_cpu pair.
> 
> Reported-by: Steven Rostedt <rostedt@...dmis.org>
> Tested-by: Steven Rostedt <rostedt@...dmis.org>
> CC: Steven Rostedt <rostedt@...dmis.org>
> CC: Peter Zijlstra <peterz@...radead.org>
> CC: Ingo Molnar <mingo@...e.hu>
> CC: Frederic Weisbecker <fweisbec@...il.com>
> Signed-off-by: Cyrill Gorcunov <gorcunov@...nvz.org>
> ---
>  arch/x86/kernel/cpu/perf_event_p4.c |    9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> Index: linux-2.6.git/arch/x86/kernel/cpu/perf_event_p4.c
> =====================================================================
> --- linux-2.6.git.orig/arch/x86/kernel/cpu/perf_event_p4.c
> +++ linux-2.6.git/arch/x86/kernel/cpu/perf_event_p4.c
> @@ -421,7 +421,7 @@ static u64 p4_pmu_event_map(int hw_event
>  
>  static int p4_hw_config(struct perf_event *event)
>  {
> -	int cpu = raw_smp_processor_id();
> +	int cpu = get_cpu();
>  	u32 escr, cccr;
>  
>  	/*
> @@ -440,7 +440,7 @@ static int p4_hw_config(struct perf_even
>  		event->hw.config = p4_set_ht_bit(event->hw.config);
>  
>  	if (event->attr.type != PERF_TYPE_RAW)
> -		return 0;
> +		goto out;
>  
>  	/*
>  	 * We don't control raw events so it's up to the caller
> @@ -455,6 +455,8 @@ static int p4_hw_config(struct perf_even
>  		(p4_config_pack_escr(P4_ESCR_MASK_HT) |
>  		 p4_config_pack_cccr(P4_CCCR_MASK_HT));
>  
> +out:
> +	put_cpu();
>  	return 0;
>  }
>  
> @@ -741,7 +743,7 @@ static int p4_pmu_schedule_events(struct
>  {
>  	unsigned long used_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
>  	unsigned long escr_mask[BITS_TO_LONGS(ARCH_P4_TOTAL_ESCR)];
> -	int cpu = raw_smp_processor_id();
> +	int cpu = get_cpu();
>  	struct hw_perf_event *hwc;
>  	struct p4_event_bind *bind;
>  	unsigned int i, thread, num;
> @@ -777,6 +779,7 @@ reserve:
>  	}
>  
>  done:
> +	put_cpu();
>  	return num ? -ENOSPC : 0;
>  }



That's no big deal. But I think schedule_events() is called at
pmu::enable() time, when preemption is already disabled.

