Date:	Wed, 25 Feb 2009 09:51:25 +0000
From:	"Metzger, Markus T" <markus.t.metzger@...el.com>
To:	"hpa@...or.com" <hpa@...or.com>,
	"mingo@...hat.com" <mingo@...hat.com>,
	"Metzger, Markus T" <markus.t.metzger@...el.com>,
	"tglx@...utronix.de" <tglx@...utronix.de>,
	"mingo@...e.hu" <mingo@...e.hu>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-tip-commits@...r.kernel.org" 
	<linux-tip-commits@...r.kernel.org>
Subject: RE: [tip:tracing/hw-branch-tracing] tracing/hw-branch-tracing:
 convert bts-tracer mutex to a spinlock

>-----Original Message-----
>From: Ingo Molnar [mailto:mingo@...e.hu]
>Sent: Wednesday, February 25, 2009 9:21 AM


>bts_hotcpu_handler() is called with irqs disabled, so using mutex_lock()
>is a no-no.
>
>All the BTS codepaths here are atomic (they do not schedule), so using
>a spinlock is the right solution.

I introduced the lock to protect against a race between bts_trace_start/stop()
and bts_hotcpu_handler().

If the hw-branch-tracer is removed and, at the same time, a cpu comes online,
we might be left with a disabled tracer that nevertheless still traces the
new cpu.

I wonder whether a simple get/put_online_cpus() would suffice, i.e.

static void bts_trace_start(struct trace_array *tr)
{
	get_online_cpus();

	on_each_cpu(bts_trace_start_cpu, NULL, 1);
	trace_hw_branches_enabled = 1;

	put_online_cpus();
}
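
The stop path would presumably mirror this (just a sketch, assuming the
existing bts_trace_stop_cpu() helper):

static void bts_trace_stop(struct trace_array *tr)
{
	get_online_cpus();

	trace_hw_branches_enabled = 0;
	on_each_cpu(bts_trace_stop_cpu, NULL, 1);

	put_online_cpus();
}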



> static void trace_bts_prepare(struct trace_iterator *iter)
> {
>-	mutex_lock(&bts_tracer_mutex);
>+	spin_lock(&bts_tracer_lock);
>
> 	on_each_cpu(trace_bts_cpu, iter->tr, 1);
>
>-	mutex_unlock(&bts_tracer_mutex);
>+	spin_unlock(&bts_tracer_lock);
> }

Whereas start/stop are relatively fast, the above operation is rather
expensive. Would it make sense to use schedule_on_each_cpu() instead
of on_each_cpu()?
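
Just to sketch what I have in mind (not tested; note that
schedule_on_each_cpu() takes no data argument and sleeps until the work has
run everywhere, so the spinlock could not be held across the call and
iter->tr would have to be passed some other way, e.g. via a file-scope
pointer):

static struct trace_array *bts_trace_array;	/* hypothetical */

static void trace_bts_cpu_work(struct work_struct *work)
{
	/* runs in process context on each online cpu */
	trace_bts_cpu(bts_trace_array);
}

static void trace_bts_prepare(struct trace_iterator *iter)
{
	spin_lock(&bts_tracer_lock);
	bts_trace_array = iter->tr;
	spin_unlock(&bts_tracer_lock);

	/* sleeps; must not be called with the lock held */
	schedule_on_each_cpu(trace_bts_cpu_work);
}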

regards,
markus.

