Message-ID: <2a549537680d46b8a8b7b413be3cd05a@SC-EXCH04.marvell.com>
Date:	Mon, 31 Aug 2015 02:03:34 +0000
From:	Jialing Fu <jlfu@...vell.com>
To:	Shawn Lin <shawn.lin@...k-chips.com>,
	Ulf Hansson <ulf.hansson@...aro.org>
CC:	linux-mmc <linux-mmc@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [RESEND PATCH] mmc: core: fix race condition in
 mmc_wait_data_done

> [...]
>
>>>> Hi, ulf
>>>>
>>>> We find this bug on Intel-C3230RK platform for very small probability.
>>>>
>>>> Whereas I can easily reproduce this case if I add a mdelay(1) or 
>>>> longer delay as Jialing did.
>>>>
>>>> This patch seems useful to me. Should we push it forward? :)
>>>
>>>
>>> It seems like a very good idea!
>>>
>>> Should we add a fixes tag to it?
>>
>>
>> That's cool, but how to add a fixes tag?
>>
>> [Fixes] mmc: core: fix race condition in mmc_wait_data_done ?   :)
>>
>
> A fixes tag points to an old commit which introduced the bug. If we can't find one, we can add a Cc tag to "stable". Just search the git log and you will find examples.
>
> Like add one line as below?
> Fixes: 2220eedfd7ae ("mmc: fix async request mechanism for sequential 
> read scenarios")
>

That's it, Jialing. From my git blame, it seems this bug was introduced a long time ago, but I find it strange that no one caught it before you did.
[Jialing Fu] Shawn,
Yes, this bug is very hard to reproduce in my experiments.
But it does happen; I had been dealing with this bug for about 2 years before I fixed it.
In total I got bug reports 3 times and about 3~4 ramdump files.
At first, I failed to find a useful clue and even thought it was a DDR stability issue.

Below is my analysis:
As I commented in the fix patch, this bug can only be triggered if the compiled code for "LineB" below still loads "wait" through "mrq". If the compiler happens to generate code in which LineB does not fetch "wait" from "mrq" again, the issue cannot happen.
  static void mmc_wait_data_done(struct mmc_request *mrq)
  {
     mrq->host->context_info.is_done_rcv = true;	//LineA
     // If the line below still loads host through "mrq" in the
     // generated code, the panic we traced can happen.
     wake_up_interruptible(&mrq->host->context_info.wait); //LineB
  }
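For reference, a minimal sketch of what a fix along these lines could look like (assumed from the description above, not the exact patch text): read the context_info pointer through "mrq" once, before setting the flag, so LineB no longer dereferences "mrq" at all.

  // Sketch only -- assumed from the description above, not the exact patch.
  // "context_info" is read through "mrq" a single time, before is_done_rcv
  // is set, so the wake-up no longer touches "mrq" after the waiting thread
  // may already have completed and reused or freed the request.
  static void mmc_wait_data_done(struct mmc_request *mrq)
  {
     struct mmc_context_info *context_info = &mrq->host->context_info;

     context_info->is_done_rcv = true;            //LineA
     wake_up_interruptible(&context_info->wait);  //LineB
  }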
Also, I suspect the bug may be triggered if an IRQ or an I-cache line miss happens between LineA and LineB.
Especially the I-cache miss case, which is easier to hit than an IRQ.
I disassembled my code and found that, in my failing case, the assembly for LineA and LineB falls in two different cache lines. If you are interested, you can check your assembly code too.
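(A rough sketch of that check; the object file path is an assumption based on where mmc_wait_data_done is defined, adjust it for your tree:)

  # Sketch: disassemble the object that contains mmc_wait_data_done,
  # then compare the addresses of the LineA/LineB instructions against
  # your cache line size (e.g. 64 bytes).
  objdump -d drivers/mmc/core/core.o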


Anyway, I will add a fixes tag and send v2 ASAP. :)

>
> Kind regards
> Uffe
>


--
Best Regards
Shawn Lin
