Date:   Tue, 20 Sep 2022 13:19:44 +0300
From:   Adrian Hunter <adrian.hunter@...el.com>
To:     Ulf Hansson <ulf.hansson@...aro.org>,
        Wenchao Chen <wenchao.chen666@...il.com>
Cc:     baolin.wang@...ux.alibaba.com, zhang.lyra@...il.com,
        linux-mmc@...r.kernel.org, linux-kernel@...r.kernel.org,
        megoo.tang@...il.com, lzx.stg@...il.com
Subject: Re: [PATCH] mmc: host: Fix data stomping during mmc recovery

On 20/09/22 12:32, Ulf Hansson wrote:
> + Adrian
> 
> On Fri, 16 Sept 2022 at 11:05, Wenchao Chen <wenchao.chen666@...il.com> wrote:
>>
>> From: Wenchao Chen <wenchao.chen@...soc.com>
>>
>> The block device uses multiple queues to access the eMMC, so up to 3
>> requests can be pending in the host's hsq. The current code checks
>> whether a request is in recovery before a new request enters the
>> queue, but it does not re-check this under the lock when the queued
>> request is actually dispatched. So if a read or write request is
>> issued while another request is still in recovery, the two conflict
>> and the data gets stomped.
>>
>> Signed-off-by: Wenchao Chen <wenchao.chen@...soc.com>
> 
> Looks like we should consider tagging this for stable kernels too, right?
> 
>> ---
>>  drivers/mmc/host/mmc_hsq.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/mmc/host/mmc_hsq.c b/drivers/mmc/host/mmc_hsq.c
>> index a5e05ed0fda3..9d35453e7371 100644
>> --- a/drivers/mmc/host/mmc_hsq.c
>> +++ b/drivers/mmc/host/mmc_hsq.c
>> @@ -34,7 +34,7 @@ static void mmc_hsq_pump_requests(struct mmc_hsq *hsq)
>>         spin_lock_irqsave(&hsq->lock, flags);
>>
>>         /* Make sure we are not already running a request now */
>> -       if (hsq->mrq) {
>> +       if (hsq->mrq || hsq->recovery_halt) {
> 
> This still looks a bit odd to me, but I may not fully understand the
> code, as it's been a while since I looked at this.
> 
> In particular, I wonder why the callers of mmc_hsq_pump_requests()
> need to release the spin_lock before calling it? Is it to give other
> code that may be waiting for the spin_lock a chance to run too?

FWIW, I am not aware of any reason.
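
For context, the window the patch closes is roughly as follows. This is
a simplified sketch paraphrasing drivers/mmc/host/mmc_hsq.c, not the
exact mainline code:

static int mmc_hsq_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
	struct mmc_hsq *hsq = mmc->cqe_private;

	spin_lock_irq(&hsq->lock);

	/* Do not queue any new requests in recovery mode. */
	if (hsq->recovery_halt) {
		spin_unlock_irq(&hsq->lock);
		return -EBUSY;
	}

	/* ... place mrq into its slot ... */

	spin_unlock_irq(&hsq->lock);

	/*
	 * The lock is dropped here, so recovery can start and set
	 * recovery_halt before the pump below runs ...
	 */
	mmc_hsq_pump_requests(hsq);

	return 0;
}

static void mmc_hsq_pump_requests(struct mmc_hsq *hsq)
{
	unsigned long flags;

	spin_lock_irqsave(&hsq->lock, flags);

	/*
	 * ... which is why the pump must re-check recovery_halt under
	 * the lock, as the patch does, and not only hsq->mrq.
	 */
	if (hsq->mrq || hsq->recovery_halt) {
		spin_unlock_irqrestore(&hsq->lock, flags);
		return;
	}

	/* ... pick the next slot, set hsq->mrq, unlock, dispatch ... */
}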

> 
> If that isn't the case, it seems better to let the callers of
> mmc_hsq_pump_requests() keep holding the lock - and thus we can
> avoid the additional check(s). That way the "recovery_halt" flag
> would already have been checked, for example.
> 
>>                 spin_unlock_irqrestore(&hsq->lock, flags);
>>                 return;
>>         }
>> --
>> 2.17.1
>>
> 
> Kind regards
> Uffe
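
For completeness, the alternative Uffe suggests might look roughly like
the below. This is an untested sketch with a hypothetical function name;
the lock would presumably still need to be dropped before calling into
the host driver, since the host's ->request() callback is not expected
to run under a spinlock:

/* Entered with hsq->lock held; releases it before dispatching. */
static void mmc_hsq_pump_requests_locked(struct mmc_hsq *hsq,
					 unsigned long flags)
	__releases(&hsq->lock)
{
	struct mmc_host *mmc = hsq->mmc;
	struct hsq_slot *slot;

	/*
	 * The caller checked recovery_halt under this same lock
	 * acquisition, so no re-check is needed here.
	 */
	if (hsq->mrq || !hsq->qcnt) {
		spin_unlock_irqrestore(&hsq->lock, flags);
		return;
	}

	slot = &hsq->slot[hsq->next_tag];
	hsq->mrq = slot->mrq;
	hsq->qcnt--;

	/* Still drop the lock before calling into the host driver. */
	spin_unlock_irqrestore(&hsq->lock, flags);

	mmc->ops->request(mmc, hsq->mrq);
}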
