Message-ID: <alpine.LFD.2.02.1109200933310.2723@ionos>
Date:	Tue, 20 Sep 2011 10:32:34 +0200 (CEST)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Steven Rostedt <rostedt@...dmis.org>
cc:	Andi Kleen <andi@...stfloor.org>,
	LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Christoph Lameter <cl@...ux.com>, Tejun Heo <htejun@...il.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [RFC][PATCH 0/5] Introduce checks for preemptable code for
 this_cpu_read/write()

On Mon, 19 Sep 2011, Steven Rostedt wrote:
> On Mon, 2011-09-19 at 19:20 -0700, Andi Kleen wrote:
> > Steven Rostedt <rostedt@...dmis.org> writes:
> > 
> > > I just found out that the this_cpu_*() functions do not perform the
> > > test to see if the usage is in atomic or not. Thus, the blind
> > > conversion of the per_cpu(*, smp_processor_id()) and the get_cpu_var()
> > > code to this_cpu_*() introduce the regression to detect the hard
> > > to find case where a per cpu variable is used in preempt code that
> > > migrates and causes bugs.

Just for the record: I added some this_cpu_* debug checks to my
filesystem-eating 2.6.38-rt and guess what: they trigger right away in
the FS code, and without digging deeper I'm 100% sure that this is the
root cause of the problems I was hunting for weeks. Thanks for wasting
my time and racking my nerves.

People who blindly remove debuggability have earned a one-way ticket
to the Oort cloud. There is utter chaos out there already, so they
won't be noticed at all.

> > Didn't preempt-rt recently get changed to not migrate in kernel-preempt
> > regions. How about just fixing the normal preemption to not do this
> > either.
> 
> Actually, that's part of the issue. RT has made spin_locks not migrate.
> But this has also increased the overhead of those same spinlocks. I'm
> hoping to do away with the big hammer approach (although Thomas is less
> interested in this). I would like to have areas that require per-cpu
> variables to be annotated,

Yes, annotation is definitely something which is needed badly.

Right now preempt_disable()/local_irq_disable() are used explicitly or
implicitly (through spin_lock*) to protect per-CPU sections, but we
have no clue where such a section really starts and ends.

In fact preempt_disable()/local_irq_disable() have become the new
CPU-local BKL, and the per-CPU stuff just happily (ab)uses that
without documenting the scope of the code sections which rely on it.
It's just nesting inside spinlocked sections at random places without
giving a clue what needs to stay on one CPU and what doesn't.

That's what makes it basically impossible to use anything other than
the big-hammer approach in RT. Nobody has the bandwidth to audit all
this stuff, and I seriously doubt that we can improve the situation
unless we get proper annotation of the per-CPU sections in place.

Can we please put that on the KS agenda? This definitely needs to be
addressed urgently.

> and not have every spinlock disable preemption.

That doesn't work; you're prone to deadlocks then. I guess you meant
not disabling migration on RT, right?

Thanks,

	tglx