Date:	Mon, 30 Jul 2007 09:49:20 +0530 (IST)
From:	Satyam Sharma <satyam@...radead.org>
To:	Rodolfo Giometti <giometti@...eenne.com>
cc:	Satyam Sharma <satyam.sharma@...il.com>,
	Chris Friesen <cfriesen@...tel.com>,
	David Woodhouse <dwmw2@...radead.org>,
	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: LinuxPPS & spinlocks

Hi Rodolfo,


On Sun, 29 Jul 2007, Rodolfo Giometti wrote:

> On Sat, Jul 28, 2007 at 02:17:24AM +0530, Satyam Sharma wrote:
> > 
> > I only glanced through the code, so could be wrong, but I noticed that
> > the only global / shared data you have in there is a global "pps_source"
> > array of pps_s structs. That's accessed / modified from the various
> > syscalls introduced in the API exported to userspace, as well as the
> > register/unregister/pps_event API exported to in-kernel client subsystems,
> > yes? So it looks like you need to introduce proper locking for it --
> > simply type-qualifying it as "volatile" is not enough.
> > 
> > However, I think you've introduced two locks for it. The syscalls (that
> > run in process context, obviously) seem to use a pps_mutex and
> > pps_event() seems to be using the pps_lock spinlock (because that
> > gets executed from interrupt context) -- and from the looks of it, the
> > register/unregister functions are using /both/ the mutex and spinlock (!)
> 
> This is right.
> 
> > This isn't quite right (in fact, there's nothing to protect pps_event()
> > from racing against a syscall), so you should use *only* the spinlock
> > for synchronization -- specifically, the spin_lock_irqsave/restore()
> > variants.
> 
> We can't use the spin_lock_irqsave/restore() variants since PPS
> sources cannot manage IRQ enable/disable. For instance, the serial
> source doesn't manage IRQs directly but just uses them to record PPS
> events. The serial driver manages the IRQ enable/disable, not the PPS
> source, which only uses the IRQ handler to record events.

Hmm? I still don't see why you can't introduce spin_lock_irqsave/restore()
in pps_event() around the access to pps_source.
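Something like this, I mean, reusing the pps_lock you already have (an
untested sketch -- the signature and body are from memory of your patch,
so adjust as needed):

void pps_event(int source, int event, void *data)
{
	unsigned long flags;

	spin_lock_irqsave(&pps_lock, flags);
	/* ... record the event into pps_source[source] ... */
	spin_unlock_irqrestore(&pps_lock, flags);
}

Note that nothing here touches the device's IRQ at all -- the
_irqsave/restore() variants only disable interrupt delivery on the
*local* CPU while the lock is held, which is an entirely different
thing from the IRQ enable/disable that the serial driver manages.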


> About using both the mutex and the spinlock: I did that since (I think)
> I should protect the syscalls from each other and from
> pps_register/unregister(), and pps_event() against
> pps_register/unregister().

Nope, it's not about protecting pieces of code from each other -- you're
needlessly complicating things. Locking is pretty simple, really: any
shared data that can be concurrently accessed by multiple threads (or
from interrupts) must be protected with a lock. Note that it is *data*
that a lock protects, and not the "code" that handles it (well, that's
the behaviour most cases need, at least, including yours).

So here we're introducing the lock to protect *pps_source*, not to keep
*threads* of execution from stepping over each other. Simply ensure you
grab the lock whenever you're about to access the shared data, and
release it when you're done.

The _irqsave/restore() variants are required because (say) one of the
syscalls executing in process context grabs the spinlock. Then, before
it has released it, it gets interrupted and pps_event() begins
executing. Now pps_event() also wants to grab the lock, but the syscall
already holds it, so pps_event() spins forever -- deadlock! The _irqsave
variant disables local interrupts while the lock is held, so that
scenario simply cannot arise.
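So the syscall side grabs the very same pps_lock, also with the
_irqsave/restore() variants. A rough sketch -- the function name and
body here are made up purely for illustration:

/* any of the syscalls, running in process context */
static int pps_do_fetch(int source)
{
	unsigned long flags;
	int ret = 0;

	spin_lock_irqsave(&pps_lock, flags);
	/* pps_event() cannot run on this CPU until we unlock */
	/* ... read / copy out pps_source[source] data ... */
	spin_unlock_irqrestore(&pps_lock, flags);

	return ret;
}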


> > [ Also, have you considered making pps_source a list and not an array?
> > It'll help you lose a whole lot of MAX_SOURCES, pps_is_allocated, etc
> > kind of gymnastics in there, and you _can_ return a pointer to the
> > corresponding pps source struct from the register() function to the in-kernel
> > users, so that way you get to retain the O(1) access to the corresponding
> > source when a client calls into pps_event(), similar to how you're using the
> > array index presently. ]
> > 
> > I also noticed code like (from pps_event):
> > 
> > +	/* Try to grab the lock, if not we prefer to lose the event... */
> > +	if (!spin_trylock(&pps_lock))
> > +		return;
> > 
> > which looks worrisome and unnecessary. That spinlock looks to be of
> > fine enough granularity to me, do you think there'd be any contention
> > on it? I /think/ you can simply make that a spin_lock().
> 
> This is due the fact I cannot manage IRQ enable/disable.

What I meant is that you could make it a proper spin_lock() --
or spin_lock_irqsave(), actually -- instead of the _trylock_ variant
that it currently is.
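i.e., in pps_event(), something like this illustrative hunk (not a
tested patch):

-	/* Try to grab the lock, if not we prefer to lose the event... */
-	if (!spin_trylock(&pps_lock))
-		return;
+	spin_lock_irqsave(&pps_lock, flags);

... with a local "unsigned long flags" declared, and a matching
spin_unlock_irqrestore(&pps_lock, flags) where the lock is currently
dropped.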

I think you're unnecessarily worried about contention here -- you can
have multiple locks (one for the list, and separate ones for your
sources) if contention really is a concern -- or perhaps rwlocks. But
really, rwlocks would end up being *slower* than spinlocks unless the
contention is really heavy and it helps to keep multiple readers in the
critical section. Frankly, with at most a few (I'd generally expect
one) PPS sources ever connected / registered with the system, and
pulses arriving just once per second, I don't see how any contention is
ever going to happen.
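
[ FWIW, the list-based scheme I suggested above could look roughly like
this -- all the names here are invented for illustration, none of it is
from your patch:

struct pps_s {
	struct list_head node;
	spinlock_t lock;	/* protects this source's data */
	/* ... per-source state ... */
};

static LIST_HEAD(pps_sources);
static DEFINE_SPINLOCK(pps_sources_lock);	/* protects the list itself */

struct pps_s *pps_register_source(void)
{
	struct pps_s *pps = kzalloc(sizeof(*pps), GFP_KERNEL);
	unsigned long flags;

	if (!pps)
		return NULL;
	spin_lock_init(&pps->lock);

	spin_lock_irqsave(&pps_sources_lock, flags);
	list_add_tail(&pps->node, &pps_sources);
	spin_unlock_irqrestore(&pps_sources_lock, flags);

	return pps;	/* the client hands this back to pps_event() -- O(1) */
}

With that, pps_event() takes a struct pps_s * and only needs pps->lock,
and MAX_SOURCES / pps_is_allocated go away entirely. ]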


Satyam