Date:	Fri, 18 Oct 2013 18:51:15 +0900
From:	Jaehoon Chung <jh80.chung@...sung.com>
To:	James Hogan <james.hogan@...tec.com>,
	Doug Anderson <dianders@...omium.org>
Cc:	Chris Ball <cjb@...top.org>, Seungwon Jeon <tgih.jun@...sung.com>,
	Grant Grundler <grundler@...omium.org>,
	Alim Akhtar <alim.akhtar@...sung.com>,
	Abhilash Kesavan <a.kesavan@...sung.com>,
	Tomasz Figa <tomasz.figa@...il.com>,
	Olof Johansson <olof@...om.net>,
	Sonny Rao <sonnyrao@...omium.org>,
	Bing Zhao <bzhao@...vell.com>,
	"linux-mmc@...r.kernel.org" <linux-mmc@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject:	Re: [PATCH 2/2] mmc: dw_mmc: Protect read-modify-write of INTMASK with a lock

On 10/17/2013 05:23 AM, James Hogan wrote:
> Hi Doug,
> 
> On 16 October 2013 17:43, Doug Anderson <dianders@...omium.org> wrote:
>> On Wed, Oct 16, 2013 at 2:49 AM, James Hogan <james.hogan@...tec.com> wrote:
>>>> We can't just use the standard host->lock since that lock is not irq
>>>> safe and mmc_signal_sdio_irq() (called from interrupt context) calls
>>>> dw_mci_enable_sdio_irq().  Add a new irq-safe lock to protect INTMASK.
>>>>
>>>> An alternate solution to this is to punt mmc_signal_sdio_irq() to the
>>>> tasklet and then protect INTMASK modifications by the standard host
>>>> lock.  This seemed like a bit more of a high-latency change.
>>>
>>> A probably lighter-weight alternative to that alternative is to just
>>> make the existing lock irq safe. Has this been considered?
>>>
>>> I'm not entirely convinced it's worth having a separate lock rather than
>>> changing the existing one, but the patch still appears to be correct, so:
>>> Reviewed-by: James Hogan <james.hogan@...tec.com>
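
For reference, the read-modify-write the commit message describes would look
roughly like the sketch below.  This is only an illustration: the
mci_readl()/mci_writel() and SDMMC_INT_SDIO() helpers follow existing dw_mmc
conventions, and the irq_lock field name here is just a placeholder for
whatever the new lock is actually called in the patch.

static void dw_mci_enable_sdio_irq(struct mmc_host *mmc, int enb)
{
	struct dw_mci_slot *slot = mmc_priv(mmc);
	struct dw_mci *host = slot->host;
	unsigned long irqflags;
	u32 int_mask;

	/*
	 * Must be irq safe: this is reached from interrupt context via
	 * mmc_signal_sdio_irq(), so the non-irq-safe host->lock cannot
	 * simply be taken here.
	 */
	spin_lock_irqsave(&host->irq_lock, irqflags);

	int_mask = mci_readl(host, INTMASK);
	if (enb)
		int_mask |= SDMMC_INT_SDIO(slot->id);
	else
		int_mask &= ~SDMMC_INT_SDIO(slot->id);
	mci_writel(host, INTMASK, int_mask);

	spin_unlock_irqrestore(&host->irq_lock, irqflags);
}
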
>>
>> I did look at that alternate solution and rejected it, but am happy to
>> send that up if people think it's better.  Here's why I rejected it:
>>
>> 1. It looks like someone has gone through quite a bit of work to _not_
>> grab the existing lock in the IRQ handler.  The IRQ handler always
>> pushes all real work over to the tasklet.  I can only assume that they
>> did this for some good reason and I'd hate to switch it without
>> knowing for sure why.
>>
>> 2. We might have performance problems if we blindly changed the
>> existing code to an IRQ-safe spinlock.  We hold the existing
>> "host->lock" spinlock for the entire duration of
>> dw_mci_tasklet_func().  That's a pretty intense chunk of code, full of
>> nested loops (some with 500ms timeouts!), mdelay(20) calls to handle
>> some errors, etc.  I say "might" because I think that the expected
>> case is that code runs pretty quickly.  I ran some brief tests on one
>> system and saw a worst case time of 170us and a common case time of
>> ~10us.  Are we OK with having interrupts disabled for that long?  Are
>> we OK with having the dw_mci interrupt handler potentially spin on a
>> spinlock for that long?
>>
> 
> Fair enough, I'm convinced now. That code did look pretty heavy when I
> looked at it too.
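
To make the trade-off above concrete: the rejected alternative amounts to
roughly the change sketched below, after which the whole tasklet runs with
local interrupts disabled (worst case ~170us in the measurements above).
Hypothetical shape only, not the actual driver code.

static void dw_mci_tasklet_func(unsigned long priv)
{
	struct dw_mci *host = (struct dw_mci *)priv;
	unsigned long flags;

	/* was: spin_lock(&host->lock); */
	spin_lock_irqsave(&host->lock, flags);

	/*
	 * The whole request state machine, including the nested loops
	 * with long timeouts and the mdelay(20) error paths, runs here
	 * under the lock.
	 */

	spin_unlock_irqrestore(&host->lock, flags);
}
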
> 
>>
>> I don't think it would be terrible to look at reworking the way that
>> dw_mmc handles interrupts, but that seems pretty major.
Yes, reworking it would be pretty major.
But if we need to rework it, I think we can do that in the future.
> 
> Yeh, I suppose at least the potentially large delays are all for
> exceptional cases, so it's not a critical problem.
> 
>> BTW: is the cost of an extra lock actually that high?
> 
> I don't know tbh. In this case the spinlock usage doesn't appear to
> actually overlap.
> 
>> ...or are you talking the cost in terms of code complexity?
> 
> In this case it'd only be a space and code complexity thing I think. I
> suppose in some cases the benefit of finer-grained locking is probably
> pretty marginal, but there's a good case for it here. It might be
> worth renaming the lock to irq_lock or something like that so it's
> clear it doesn't have to protect only INTMASK in the future - up
> to you.
Using irq_lock rather than intmask_lock seems good to me. (It's just naming.)
> 
> Cheers
> James
> 
