Message-ID: <20070729091709.GY9840@enneenne.com>
Date: Sun, 29 Jul 2007 11:17:10 +0200
From: Rodolfo Giometti <giometti@...eenne.com>
To: Satyam Sharma <satyam.sharma@...il.com>
Cc: Chris Friesen <cfriesen@...tel.com>,
David Woodhouse <dwmw2@...radead.org>,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: LinuxPPS & spinlocks
On Sat, Jul 28, 2007 at 02:17:24AM +0530, Satyam Sharma wrote:
>
> I only glanced through the code, so could be wrong, but I noticed that
> the only global / shared data you have in there is a global "pps_source"
> array of pps_s structs. That's accessed / modified from the various
> syscalls introduced in the API exported to userspace, as well as the
> register/unregister/pps_event API exported to in-kernel client subsystems,
> yes? So it looks like you need to introduce proper locking for it, simply
> type-qualifying it as "volatile" is not enough.
>
> However, I think you've introduced two locks for it. The syscalls (that
> run in process context, obviously) seem to use a pps_mutex and
> pps_event() seems to be using the pps_lock spinlock (because that
> gets executed from interrupt context) -- and from the looks of it, the
> register/unregister functions are using /both/ the mutex and spinlock (!)
This is right.
> This isn't quite right, (in fact there's nothing to protect pps_event from
> racing against a syscall), so you should use *only* the spinlock for
> synchronization -- the spin_lock_irqsave/restore() variants, in fact.
We can't use the spin_lock_irqsave/restore() variants since PPS
sources cannot manage IRQ enable/disable themselves. For instance, the
serial source doesn't manage IRQs directly but just hooks into them to
record PPS events: the serial driver handles the IRQ enable/disable,
not the PPS source, which only uses the IRQ handler to record events.
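
If I understand correctly, the suggestion is to do something like this
on the syscall side (only a sketch; the struct and field names below
are invented, not the ones from my patch):

#include <linux/spinlock.h>
#include <linux/time.h>

/* Minimal stand-ins for the patch data (names are made up). */
struct pps_s {
        struct timespec last_ts;
};
static struct pps_s pps_source[16];
static DEFINE_SPINLOCK(pps_lock);

/* Process-context (syscall) side: take the lock with local IRQs
 * disabled, so pps_event() running from an IRQ handler on this CPU
 * cannot deadlock against us while we hold pps_lock. */
static int pps_fetch_ts(int source, struct timespec *ts)
{
        unsigned long flags;

        spin_lock_irqsave(&pps_lock, flags);
        *ts = pps_source[source].last_ts;
        spin_unlock_irqrestore(&pps_lock, flags);

        return 0;
}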
As for using both a mutex and a spinlock: I did it because (I think) I
need to protect the syscalls from each other and from
pps_register/unregister(), and pps_event() from pps_register/unregister().
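
To make the current scheme concrete, the registration path does
roughly the following (heavily simplified sketch, not the actual code
from the patch):

#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/errno.h>

#define PPS_MAX_SOURCES 16

static int pps_is_allocated[PPS_MAX_SOURCES];   /* simplified stand-in */
static DEFINE_MUTEX(pps_mutex);
static DEFINE_SPINLOCK(pps_lock);

/* Runs in process context: the mutex serializes registration against
 * the syscalls, while the spinlock keeps pps_event() (IRQ context)
 * from seeing a half-initialized slot. */
int pps_register_source(void)
{
        int i, ret = -EBUSY;

        mutex_lock(&pps_mutex);
        spin_lock(&pps_lock);

        for (i = 0; i < PPS_MAX_SOURCES; i++)
                if (!pps_is_allocated[i]) {
                        pps_is_allocated[i] = 1;
                        ret = i;        /* index handed back to the client */
                        break;
                }

        spin_unlock(&pps_lock);
        mutex_unlock(&pps_mutex);

        return ret;
}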
> [ Also, have you considered making pps_source a list and not an array?
> It'll help you lose a whole lot of MAX_SOURCES, pps_is_allocated, etc
> kind of gymnastics in there, and you _can_ return a pointer to the
> corresponding pps source struct from the register() function to the in-kernel
> users, so that way you get to retain the O(1) access to the corresponding
> source when a client calls into pps_event(), similar to how you're using the
> array index presently. ]
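
For reference, a list-based registration along those lines could look
roughly like this (just a sketch; every name below is invented for
illustration):

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct pps_source_info {                /* hypothetical per-source struct */
        struct list_head node;
        const char *name;
};

static LIST_HEAD(pps_sources);
static DEFINE_SPINLOCK(pps_lock);

/* register() hands back a pointer, so the client passes it straight
 * to pps_event() and we keep O(1) access without a fixed-size array. */
struct pps_source_info *pps_register_source(const char *name)
{
        struct pps_source_info *src;

        src = kzalloc(sizeof(*src), GFP_KERNEL);
        if (!src)
                return NULL;
        src->name = name;

        spin_lock(&pps_lock);
        list_add_tail(&src->node, &pps_sources);
        spin_unlock(&pps_lock);

        return src;
}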
>
> I also noticed code like (from pps_event):
>
> + /* Try to grab the lock; if we can't, we prefer to lose the event... */
> + if (!spin_trylock(&pps_lock))
> + return;
>
> which looks worrisome and unnecessary. That spinlock looks to be of
> fine enough granularity to me, do you think there'd be any contention
> on it? I /think/ you can simply make that a spin_lock().
This is due to the fact that I cannot manage IRQ enable/disable: since
the other paths take pps_lock without disabling interrupts, a plain
spin_lock() here could deadlock if the interrupt fires on the CPU that
already holds the lock.
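
So the event path just drops the event when the lock is busy; roughly
(simplified sketch, not the real signature from the patch):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(pps_lock);

/* Called from the client driver's IRQ handler.  The process-context
 * paths hold pps_lock without disabling interrupts, so spinning here
 * could deadlock on the same CPU; if the lock is busy we simply drop
 * this PPS event. */
void pps_event(int source)
{
        if (!spin_trylock(&pps_lock))
                return;         /* lock busy: lose this event */

        /* ... record the timestamp for 'source' here ... */

        spin_unlock(&pps_lock);
}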
> Overall the code looks simple / straightforward enough to me (except for
> the parport / uart stuff that I have no clue about), and I'll also read up on
> the relevant RFC for this and would hopefully try and give you a more
> meaningful review over the weekend.
Thanks a lot for your help!
Ciao,
Rodolfo
--
GNU/Linux Solutions e-mail: giometti@...eenne.com
Linux Device Driver giometti@...dd.com
Embedded Systems giometti@...ux.it
UNIX programming phone: +39 349 2432127