Message-Id: <201103162140.34893.arnd@arndb.de>
Date: Wed, 16 Mar 2011 21:40:34 +0100
From: Arnd Bergmann <arnd@...db.de>
To: scameron@...rdog.cce.hp.com
Cc: linux-kernel@...r.kernel.org, axboe@...nel.dk, dab@...com,
mike.miller@...com
Subject: Re: Question about driver ioctl, big kernel lock, mutexes
On Wednesday 16 March 2011 20:47:26 scameron@...rdog.cce.hp.com wrote:
> On Wed, Mar 16, 2011 at 08:06:14PM +0100, Arnd Bergmann wrote:
> > Absolutely. The best way forward would be to prove that the mutex
> > I introduced is actually pointless, or alternatively change the code
> > so that the mutex is only held in the places where it is really
> > required, if any.
>
> Pretty sure the ioctl stuff is already thread safe, but I will look at it
> again.
>
> There are a couple of other places the mutex is used, open, release, I think
> one other place, that I'm less sure about, only because I haven't thought
> about them. It should be a safe assumption that nothing outside
> the driver depends on these mutexes (obviously, since they're declared
> only in the driver) so that should make it pretty easy to figure out.
Ah, right. The open/release functions access the two usage_count
variables in an unsafe way. The mutex serializes open against
release, but there are other functions that look at the same data
without taking the mutex. It would be good to make this safer in
some way.
I think turning it into an atomic_t would be enough, unless the
two usage counts need to be strictly taken atomically or in the
context of something else.
The rest of open/close looks fine to me without the mutex.
> > When I submitted the patches, I always tried to make it clear that
> > most of the new mutexes are not required at all, but that proving
> > this for every single driver takes a lot of hard work.
> >
> > Having a mutex that should not be there causes a performance regression
> > or a deadlock, both of which are fairly easy to bisect and fix, but
> > missing a mutex because of a subtle BKL dependency would have meant
> > introducing a race condition that would be extremely hard to analyse.
> >
> > Limiting the number of concurrent I/Os submitted to a device might
> > be a good idea, but I think that's a separate discussion.
>
> Ok. Thanks for the explanation and confirmation of what I was thinking.
> I'll start working on a fix for this. May take me a little while as
> there are a bunch of other things I have to work on, but it's been
> this way since June of 2010 so far with no complaints (that I know of),
> so I guess it won't hurt too much for it to be this way for a little
> while longer.
I'm curious: what is even the intention behind the passthru ioctls?
Do you have applications that use them a lot?
Arnd
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/