Date:	Thu, 21 Aug 2008 09:02:53 -0600 (MDT)
From:	jmerkey@...fmountaingroup.com
To:	"Stefan Richter" <stefanr@...6.in-berlin.de>
Cc:	jmerkey@...fmountaingroup.com, paulmck@...ux.vnet.ibm.com,
	"Peter Zijlstra" <peterz@...radead.org>,
	linux-kernel@...r.kernel.org,
	"Linus Torvalds" <torvalds@...ux-foundation.org>,
	"Nick Piggin" <nickpiggin@...oo.com.au>,
	"David Howells" <dhowells@...hat.com>
Subject: Re: [ANNOUNCE] mdb: Merkey's Linux Kernel Debugger 2.6.27-rc4 released

> jmerkey@...fmountaingroup.com wrote:
> [Stefan Richter wrote]
>>> I'm having a quick look at mdb-2.6.27-rc4-ia32-08-20-08.patch at the
>>> moment.  Speaking of debug_lock()...:
> ...
>>> Major problem:  rlock->flags is wrong in this call.  Use an on-stack
>>> flags variable for the initial spin_trylock_irqsave.  Ditto in the
>>> following call of spin_trylock_irqsave.
>>>
>>> Next major problem with debug_lock() and debug_unlock():  The reference
>>> counting doesn't work.  You need an atomic_t counter.  Have a look at
>>> the struct kref accessors for example, or even make use of the kref
>>> API.
>>> Or if it isn't feasible to fix with atomic_t, add a second spinlock to
>>> rlock_t to ensure integrity of .count (and of the .processor if
>>> necessary).
> ...
>> The code works in debug_lock() provided this memory location is actually
>> SHARED between the processors.  The various race conditions you describe
>> are valid cases, but the only modifier of .count and .lock is the
>> processor that obtains the spinlock -- the rest are readers.  This code
>> works well, of course, when this memory location is actually SHARED
>> between the processors and the read/write operations are serialized.
>>
>> Even in SMP, at various times it is necessary for the processors to
>> perform serializing operations.  You cannot be in checker-scoreboard
>> land for everything.
>
> Regarding rlock->count, I stand corrected:  It works correctly if the
> debug_lock()...debug_unlock() region can be entered by up to two
> contexts and if the second one to enter cannot be preempted by the first
> one.
>
> However, since these regions are enclosed in preempt_disable/enable and
> since the first one to grab the rlock->lock keeps local interrupts
> disabled until debug_unlock() or even longer, there is always only a
> single context in the debug_lock()...debug_unlock() region --- which
> defeats the whole purpose of the rlock_t.  Or what am I missing /now/?
>
> Independently of this, you cannot use rlock->flags like you currently
> do.  spin_trylock_irqsave would overwrite the flags of CPU A by the
> flags of CPU B, making the results of spin_unlock_irqrestore in
> debug_unlock unpredictable.


Hmmm....

Yep, that may be correct.  I'll fix that one since it is outside of the
lock.  At this point in the code the flags on all of the processors are
going to be identical, so it would not cause any problems (local debugger
state would not care), and the debugger does work as-is.  But I will
change that and repost the patch this evening.
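
Roughly what I have in mind, as a sketch only: the recursion/ownership
handling is left out and the exact names may differ in the repost, but
the point is that the saved IRQ flags live on the caller's stack rather
than in the shared rlock_t, so one CPU can no longer clobber the flags
another CPU saved.

static int debug_lock(rlock_t *rlock, unsigned long *flags)
{
	preempt_disable();
	while (!spin_trylock_irqsave(&rlock->lock, *flags))
		cpu_relax();
	return 0;
}

static void debug_unlock(rlock_t *rlock, unsigned long *flags)
{
	spin_unlock_irqrestore(&rlock->lock, *flags);
	preempt_enable();
}

A caller would then do something like:

	unsigned long flags;

	debug_lock(&rlock, &flags);
	/* ... debugger work ... */
	debug_unlock(&rlock, &flags);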

You have good eyes, my friend.

Jeff
> --
> Stefan Richter
> -=====-==--- =--- =-=-=
> http://arcgraph.de/sr/
>


