Message-ID: <a781481a0706221050y6c7adba6l27d11e35db69fc31@mail.gmail.com>
Date: Fri, 22 Jun 2007 23:20:04 +0530
From: "Satyam Sharma" <satyam.sharma@...il.com>
To: "Florin Iucha" <florin@...ha.net>
Cc: "Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>
Subject: Re: "upping" a semaphore from interrupt context?
Hi Florin,
On 6/22/07, Florin Iucha <florin@...ha.net> wrote:
> Hello,
>
> I am writing a USB driver for some custom hardware, and I need to
> synchronize between the user-space and the USB subsystem. Can I
> create a semaphore and "down" it in the reader then "up" it in the
> completion handler?
It's not entirely clear from your description what exactly you are
"synchronizing" ... if there is some shared data you want to access
from both process and interrupt context, the (only) safe primitives
to use are spin_lock_irqsave / spin_unlock_irqrestore.
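Something along these lines, say (the struct and field names here are
just made up for illustration):

#include <linux/spinlock.h>

struct my_dev {				/* hypothetical driver data */
	spinlock_t lock;
	unsigned int bytes_ready;	/* shared with the IRQ path */
};

/* process context, e.g. the read() path */
static unsigned int my_bytes_ready(struct my_dev *dev)
{
	unsigned long flags;
	unsigned int n;

	spin_lock_irqsave(&dev->lock, flags);	/* also masks local IRQs */
	n = dev->bytes_ready;
	spin_unlock_irqrestore(&dev->lock, flags);

	return n;
}

/* interrupt context, e.g. the URB completion handler */
static void my_account_bytes(struct my_dev *dev, unsigned int n)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->lock, flags);
	dev->bytes_ready += n;
	spin_unlock_irqrestore(&dev->lock, flags);
}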
If you simply want the process context task to block _till_ it receives
notification that some other job has finished (from interrupt context,
in your case, going by your description), then I suspect "struct
completion" and its associated primitives are what you are looking for.
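Roughly like this (again, names invented; and this uses the current 2.6
URB completion-handler prototype, so the details will differ on 2.4):

#include <linux/completion.h>
#include <linux/usb.h>

struct my_dev {				/* hypothetical device struct */
	struct urb *urb;
	struct completion io_done;
};

/* in probe(): init_completion(&dev->io_done); */

/* URB completion handler -- runs in interrupt context; complete()
 * never sleeps, so it is safe to call from here */
static void my_urb_complete(struct urb *urb)
{
	struct my_dev *dev = urb->context;

	complete(&dev->io_done);	/* wakes the task sleeping below */
}

/* reader -- process context, allowed to sleep */
static int my_do_io(struct my_dev *dev)
{
	int ret;

	ret = usb_submit_urb(dev->urb, GFP_KERNEL);
	if (ret)
		return ret;

	wait_for_completion(&dev->io_done);	/* blocks till the handler runs */
	return 0;
}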
> I know the completion handler runs in interrupt context so you are not
> allowed to acquire any semaphores: but can you release them? Will the
> waiting tasks wake up after the handler and its caller returned - IOW
> will the waking up task run in interrupt context as well?
The waiting task (say, one blocked in wait_for_completion) would
continue to execute in its own process context from that point onwards;
only the wakeup itself happens from the interrupt context of the handler.
> This is with Linux 2.4 (if it makes a difference).
Whoa, it does, I would expect. I'm not sure whether completions
exist in 2.4? If they do, they're the way to go. If not, sorry, I'm not
well versed with the 2.4 kernel at all.
Satyam