Message-ID: <CAL1qeaF2RtC--K06qWkMRjBAOzoVfxi845RwZX05QwXS66Ns9Q@mail.gmail.com>
Date: Thu, 27 Feb 2014 18:32:53 -0800
From: Andrew Bresticker <abrestic@...omium.org>
To: Mark Brown <broonie@...nel.org>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] regmap: don't use spinlocks with REGCACHE_{RBTREE,COMPRESSED}
On Thu, Feb 27, 2014 at 3:37 AM, Mark Brown <broonie@...nel.org> wrote:
> On Wed, Feb 26, 2014 at 07:50:57PM -0800, Andrew Bresticker wrote:
>> Both REGCACHE_RBTREE and REGCACHE_COMPRESSED make GFP_KERNEL allocations
>> with the regmap lock held. If we're initializing a regmap which would
>> normally use a spinlock (e.g. MMIO), fall back to using a mutex if one
>> of these caching types is to be used.
>
> Have all the users been audited to verify that they're actually safe
> with this? I just took a quick look at the Tegra drivers and they're
> doing regmap operations in their trigger operations, which are done from
> atomic context, so they should run into problems trying to take mutexes
> anyway.
Oops, you're right; I didn't look through all the regmap operations
carefully enough. It is in fact the Tegra drivers that I've run into
this issue with.
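
For concreteness, here's a minimal sketch of the combination that bites
us (illustrative only, names made up, not taken from an actual Tegra
driver): regmap-mmio takes the fast_io/spinlock path, while
REGCACHE_RBTREE can do a GFP_KERNEL allocation inside the write path.

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>

static const struct regmap_config example_regmap_config = {
	.reg_bits	= 32,
	.val_bits	= 32,
	.reg_stride	= 4,
	.max_register	= 0x100,
	.cache_type	= REGCACHE_RBTREE,	/* allocates in the write path */
};

static int example_regmap_setup(struct platform_device *pdev,
				void __iomem *base)
{
	struct regmap *map;

	/* The MMIO bus is fast_io, so the map is protected by a spinlock. */
	map = devm_regmap_init_mmio(&pdev->dev, base, &example_regmap_config);
	if (IS_ERR(map))
		return PTR_ERR(map);

	/*
	 * First write to an uncached register: the rbtree cache may
	 * allocate a new node with GFP_KERNEL while the spinlock is
	 * held, i.e. a potential sleep in atomic context.
	 */
	return regmap_write(map, 0x4, 0x1);
}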
> I think we need to either ensure that all users allocate their caches at
> probe time (which is fine and is hopefully what the current users are
> doing), provide a mechanism for them to do cache allocations outside of
> the spinlock (which sounds hairy) or convert them to flat cache.
Allocations are made in the write path for rbtree and in both the read
and write paths for lzo. I suppose we could pre-allocate everything at
init time, but I haven't looked into that too much. For at least the
Tegra drivers, the best option for now appears to be to convert them to
a flat cache.
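
In case it's useful, the flat-cache conversion would amount to something
like the following (again illustrative only). REGCACHE_FLAT allocates
its whole value array up front when the cache is initialized, i.e. at
regmap_init_mmio() time in process context, so the read/write paths
never allocate under the spinlock:

static const struct regmap_config example_regmap_config = {
	.reg_bits	= 32,
	.val_bits	= 32,
	.reg_stride	= 4,
	.max_register	= 0x100,
	/*
	 * REGCACHE_FLAT sizes its value array from max_register and
	 * allocates it at regmap init time, not in the I/O paths.
	 */
	.cache_type	= REGCACHE_FLAT,
};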
Thanks,
Andrew