Message-ID: <48AD8835.2050208@s5r6.in-berlin.de>
Date:	Thu, 21 Aug 2008 17:22:29 +0200
From:	Stefan Richter <stefanr@...6.in-berlin.de>
To:	jmerkey@...fmountaingroup.com
CC:	paulmck@...ux.vnet.ibm.com, Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	David Howells <dhowells@...hat.com>
Subject: Re: [ANNOUNCE] mdb: Merkey's Linux Kernel Debugger 2.6.27-rc4 released

jmerkey@...fmountaingroup.com wrote:
[Stefan Richter wrote]
>> I'm having a quick look at mdb-2.6.27-rc4-ia32-08-20-08.patch at the
>> moment.  Speaking of debug_lock()...:
...
>> Major problem:  rlock->flags is wrong in this call.  Use an on-stack
>> flags variable for the initial spin_trylock_irqsave.  Ditto in the
>> following call of spin_trylock_irqsave.
>>
>> Next major problem with debug_lock() and debug_unlock():  The reference
>> counting doesn't work.  You need an atomic_t counter.  Have a look at
>> the struct kref accessors for example, or even make use of the kref API.
>> Or if it isn't feasible to fix with atomic_t, add a second spinlock to
>> rlock_t to ensure integrity of .count (and of the .processor if
>> necessary).
...
> The code works in debug lock provided this memory location is actually
> SHARED between the processors.  The various race conditions you describe
> are valid cases, but the only modifier of .count and .lock is the processor
> that obtains the spinlock -- the rest are readers.  This code works well,
> of course, when this memory location is actually SHARED between the
> processors and the read/write operations are serialized.
> 
> Even in SMP, at various times it is necessary for the processors to
> perform serializing operations.  You cannot live in checker-scoreboard land
> for everything.

Regarding rlock->count, I stand corrected:  It works correctly if the
debug_lock()...debug_unlock() region can be entered by up to two
contexts and if the second one to enter cannot be preempted by the first
one.
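
Should that two-context assumption ever stop holding, an atomic_t counter
along the lines of the kref accessors would still be my suggestion.  A
rough sketch, with the rlock_t field names assumed rather than taken from
your patch:

	#include <linux/spinlock.h>
	#include <asm/atomic.h>

	/* illustrative only, not your actual rlock_t */
	typedef struct {
		spinlock_t lock;
		atomic_t   count;	/* was a plain int */
		int        processor;
	} rlock_t;

	static inline void rlock_get(rlock_t *rlock)
	{
		atomic_inc(&rlock->count);
	}

	/* returns nonzero when the last reference went away */
	static inline int rlock_put(rlock_t *rlock)
	{
		return atomic_dec_and_test(&rlock->count);
	}

That keeps .count consistent without a second spinlock.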

However, since these regions are enclosed in preempt_disable/enable and
since the first one to grab the rlock->lock keeps local interrupts
disabled until debug_unlock() or even longer, there is always only a
single context in the debug_lock()...debug_unlock() region --- which
defeats the whole purpose of the rlock_t.  Or what am I missing /now/?

Independently of this, you cannot use rlock->flags like you currently
do.  spin_trylock_irqsave would overwrite the flags of CPU A with the
flags of CPU B, making the results of spin_unlock_irqrestore in
debug_unlock unpredictable.
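
To make the flags issue concrete, here is a minimal sketch of the pattern
I mean.  This is not your actual debug_lock(); the reduced rlock_t and the
extra flags parameter are assumptions for the sake of illustration:

	#include <linux/spinlock.h>

	/* illustrative only; .count, .processor etc. omitted */
	typedef struct {
		spinlock_t lock;
	} rlock_t;

	static int debug_lock(rlock_t *rlock, unsigned long *flags)
	{
		unsigned long f;	/* on stack: CPU B cannot clobber CPU A's copy */

		if (!spin_trylock_irqsave(&rlock->lock, f))
			return 0;
		*flags = f;		/* hand the saved IRQ state back to the caller */
		return 1;
	}

	static void debug_unlock(rlock_t *rlock, unsigned long flags)
	{
		spin_unlock_irqrestore(&rlock->lock, flags);
	}

Each caller then restores exactly the IRQ state it saved itself.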
-- 
Stefan Richter
-=====-==--- =--- =-=-=
http://arcgraph.de/sr/
