Date:	Thu, 07 Aug 2008 17:49:47 +0200
From:	Stefan Richter <stefanr@...6.in-berlin.de>
To:	jmerkey@...fmountaingroup.com
CC:	linux-kernel@...r.kernel.org
Subject: Re: [ANNOUNCE] mdb-2.6.27-rc2-ia32-08-07-08.patch

jmerkey@...fmountaingroup.com wrote:
> The mdb-rc2 patch was posted this morning with the changes for a modular
> kernel debugger using kprobes.
> 
> ftp://ftp.wolfmountaingroup.org/pub/mdb/mdb-2.6.27-rc2-ia32-08-07-08.patch
> 
> Jeffrey Vernon Merkey


Quoting from this patch:

> +typedef struct _RLOCK 
> +{
> +#if defined(CONFIG_SMP)
> +    spinlock_t lock;
> +#endif
> +    unsigned long flags;
> +    unsigned long processor;
> +    unsigned long count;    
> +} rlock_t;

Is this something along the lines of a counting semaphore?  As far as I
understand its accessor functions, an rlock
  - can be taken by one CPU multiple times,
  - will block the other CPUs as long as the first CPU hasn't unlocked
    the rlock as many times as it locked it.
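
For illustration, the intended semantics appear to be something like
this (hypothetical call sites; dbg_lock is a name I made up):

   rspin_lock(&dbg_lock);    /* CPU 0: takes the spinlock, count = 1   */
   rspin_lock(&dbg_lock);    /* CPU 0 again: recursion, count = 2      */
   /* rspin_lock(&dbg_lock) on CPU 1 spins here */
   rspin_unlock(&dbg_lock);  /* count = 1, CPU 0 still holds the lock  */
   rspin_unlock(&dbg_lock);  /* count = 0, released, CPU 1 may proceed */

Is that the intent?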

The accessors rspin_lock() and rspin_try_lock() peek into spinlock_t and
may therefore not be fully portable.  Also, they and rspin_unlock()
don't look SMP safe:

> +//
> +//   returns   0 - atomic lock occurred, processor assigned
> +//             1 - recursive count increased
> +//
> +
> +unsigned long rspin_lock(volatile rlock_t *rlock)
> +{
> +#if defined(CONFIG_SMP)
> +   register unsigned long proc = get_processor_id();
> +   register unsigned long retCode;
> +
> +   if (rlock->lock.raw_lock.slock && rlock->processor == proc)
> +   {
> +      rlock->count++;
> +      retCode = 1;
> +   }
> +   else
> +   {
> +      spin_lock_irqsave((spinlock_t *)&rlock->lock, rlock->flags);
> +      rlock->processor = proc;
> +      retCode = 0;
> +   }
> +   return retCode;
> +#else
> +   return 0;
> +#endif
> +}

In general, a lot can happen between the unlocked read of
rlock->lock.raw_lock.slock and the read of rlock->processor: the lock
can be dropped and re-taken by another CPU in between, so the test can
match a stale ->processor value, and a CPU can then enter the
"recursion" branch without actually holding the lock.

Even rlock->count++ in itself can go wrong: it is a non-atomic
read-modify-write, and volatile does not change that.

The usage of "volatile" here looks a lot like DWIM ("do what I mean")
to me.
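
If a recursive spinlock is really wanted, the owner check can be done
without peeking into spinlock_t at all.  A minimal sketch of what I
mean (untested; I use raw_smp_processor_id() instead of your
get_processor_id(), encode "free" as -1, and assume the callers run
with preemption disabled, which should hold inside a debugger trap
handler):

#include <linux/spinlock.h>
#include <linux/smp.h>

typedef struct _RLOCK
{
	spinlock_t lock;
	unsigned long flags;
	long owner;		/* CPU that holds the lock, -1 when free */
	unsigned long count;	/* recursion depth, touched only by owner */
} rlock_t;

/* returns 0 - lock acquired, 1 - recursion count increased */
unsigned long rspin_lock(rlock_t *rlock)
{
	long cpu = raw_smp_processor_id();
	unsigned long flags;

	/*
	 * This unlocked read is safe: ->owner can only hold this CPU's
	 * id if this CPU itself wrote it while holding the lock, and a
	 * CPU cannot race against itself.
	 */
	if (rlock->owner == cpu) {
		rlock->count++;	/* owner-private, no atomicity needed */
		return 1;
	}

	spin_lock_irqsave(&rlock->lock, flags);
	rlock->flags = flags;	/* written only while holding the lock */
	rlock->owner = cpu;
	rlock->count = 1;
	return 0;
}

void rspin_unlock(rlock_t *rlock)
{
	if (--rlock->count == 0) {
		rlock->owner = -1;	/* clear ownership before dropping */
		spin_unlock_irqrestore(&rlock->lock, rlock->flags);
	}
}

No volatile and no #ifdef CONFIG_SMP needed; on UP the spinlock ops
degenerate to the irqsave/irqrestore anyway.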


> +volatile unsigned long debuggerActive;
> +volatile unsigned long ProcessorHold[MAX_PROCESSORS]; 
> +volatile unsigned long ProcessorState[MAX_PROCESSORS]; 
> +volatile unsigned long ProcessorMode[MAX_PROCESSORS]; 

Are these datasets shared between CPUs and accessed without lock
protection?  (It looks like that from a quick glance.)  If so, instead
of the volatile qualifier, can't you use memory barriers to define the
access semantics more clearly (and more effectively) than volatile can?

If there are interdependencies within these datasets, which mechanism,
if not locks, do you use to serialize the critical sections?
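
For example, a hold/release handshake with explicit barriers instead
of volatile could look roughly like this (only a sketch; the state
constants and function names are made up, and the arrays are the ones
quoted above, minus the volatile):

enum { PROCESSOR_RUNNING, PROCESSOR_HELD };	/* hypothetical states */

/* debugger CPU: ask CPU 'cpu' to spin */
void hold_processor(unsigned long cpu)
{
	ProcessorHold[cpu] = 1;
	smp_wmb();	/* publish the request before any later stores */
}

/* target CPU, e.g. from its IPI handler: spin until released */
void processor_hold_loop(unsigned long cpu)
{
	ProcessorState[cpu] = PROCESSOR_HELD;
	smp_wmb();		/* make the state visible before spinning */
	while (ProcessorHold[cpu]) {
		cpu_relax();	/* compiler barrier, forces a re-read */
		smp_rmb();	/* order against the releaser's stores */
	}
	ProcessorState[cpu] = PROCESSOR_RUNNING;
}

That documents the ordering requirements where they matter, and it
works without any volatile on the arrays.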


> +void MDBInitializeDebugger(void)
> +{
...
> +   for (i=0; i < MAX_PROCESSORS; i++)
> +   {
> +      BreakMask[i] = 0;
> +      ProcessorHold[i] = 0; 
> +      ProcessorState[i] = 0; 
> +      ProcessorMode[i] = 0; 
> +   }

Not necessary, because static (and extern) variables are already
implicitly initialized to zero.
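That is,

   static unsigned long ProcessorHold[MAX_PROCESSORS]; /* all zero at boot */

is already fully initialized (the kernel places it in .bss); the loop
would only matter if the debugger could be re-initialized at run time.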
-- 
Stefan Richter
-=====-==--- =--- --===
http://arcgraph.de/sr/