lists.openwall.net
Open Source and information security mailing list archives
Date:   Wed, 10 Aug 2022 15:47:23 +0800
From:   Liwei Song <liwei.song@...driver.com>
To:     Christoph Hellwig <hch@....de>
Cc:     Miquel Raynal <miquel.raynal@...tlin.com>,
        Richard Weinberger <richard@....at>,
        Vignesh Raghavendra <vigneshr@...com>,
        linux-mtd@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mtd_blkdevs: add mtd_table_mutex lock back to
 blktrans_{open, release} to avoid race condition



On 8/9/22 17:03, Liwei Song wrote:
> 
> 
>> On 8/9/22 16:08, Christoph Hellwig wrote:
>> On Tue, Aug 09, 2022 at 03:57:53PM +0800, Liwei Song wrote:
>>> Without locking mtd_table_mutex in blktrans_{open, release}, there is
>>> a race condition when the devices /dev/mtd1 and /dev/mtdblock1 are
>>> accessed at the same time in a high-frequency open/close test:
>>>
>>> kernel BUG at drivers/mtd/mtdcore.c:1221!
>>> lr : blktrans_release+0xb0/0x120
>>
>> This does not seem to be from a current mainline kernel, and the
>> backtrace looks somewhat incomplete.  Can you send the full dmesg of
>> a run on the latest mainline and maybe share the reproducer?
> 
> Yes, the kernel I used is 5.15; unfortunately, that is the latest version
> that works on my board. The whole log is:
> 
> [   31.301343] ------------[ cut here ]------------
> [   31.301343] ------------[ cut here ]------------
> [   31.301365] kernel BUG at drivers/mtd/mtdcore.c:1221!
> [   31.314981] kernel BUG at drivers/mtd/mtdcore.c:1221!
> [   31.329328] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
> [   31.374117] Modules linked in: 8021q sch_fq_codel openvswitch nsh nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 fuse
> [   31.395820] CPU: 2 PID: 390 Comm: a.out Not tainted 5.15.58-yocto-standard #1
> [   31.412672] Hardware name: SoCFPGA Agilex SoCDK (DT)
> [   31.427372] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [   31.444058] pc : __put_mtd_device+0x4c/0x84
> [   31.457977] lr : put_mtd_device+0x3c/0x5c
> [   31.464122] sp : ffff80000c26bc50
> [   31.466126] x29: ffff80000c26bc50 x28: ffff000285785100 x27: 0000000000000000
> [   31.471945] x26: 0000000045585401 x25: 0000000000000000 x24: ffff000285785768
> [   31.477762] x23: ffff0002803ee520 x22: ffff0002804f8900 x21: ffff000281956800
> [   31.483580] x20: ffff000281956800 x19: ffff000281955080 x18: 0000000000000000
> [   31.489402] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
> [   31.495219] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> [   31.501037] x11: 0000000000000000 x10: 0000000000000000 x9 : ffff8000088a79b0
> [   31.506855] x8 : ffff80000c26bcb0 x7 : 0000000000000000 x6 : 0000000000000001
> [   31.512673] x5 : ffff000280959488 x4 : 0000000000000000 x3 : 0000000000000000
> [   31.518491] x2 : ffff000281956800 x1 : 00000000ffffffff x0 : ffff000281955080
> [   31.524310] Call trace:
> [   31.525450]  __put_mtd_device+0x4c/0x84
> [   31.527979]  put_mtd_device+0x3c/0x5c
> [   31.530333]  mtdchar_close+0x3c/0x84
> [   31.532604]  __fput+0x78/0x220
> [   31.534357]  ____fput+0x1c/0x30
> [   31.536193]  task_work_run+0x88/0xc0
> [   31.538467]  do_notify_resume+0x384/0x12a0
> [   31.541261]  el0_svc+0x6c/0x80
> [   31.543015]  el0t_64_sync_handler+0xa4/0x130
> [   31.545977]  el0t_64_sync+0x1a0/0x1a4
> [   31.548338] Code: b9448841 51000421 b9048841 36ffff41 (d4210000) 
> [   31.553115] ---[ end trace 9652b26ea1d7daa1 ]---
> [   31.556420] Internal error: Oops - BUG: 0 [#2] PREEMPT SMP
> [   31.556420] note: a.out[390] exited with preempt_count 1
> [   31.560588] Modules linked in: 8021q sch_fq_codel openvswitch nsh nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 fuse
> [   31.575265] CPU: 3 PID: 391 Comm: a.out Tainted: G      D           5.15.58-yocto-standard #1
> [   31.582466] Hardware name: SoCFPGA Agilex SoCDK (DT)
> [   31.586115] pstate: 40000005 (nZcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [   31.591757] pc : __put_mtd_device+0x4c/0x84
> [   31.594642] lr : blktrans_release+0xb0/0x120
> [   31.597603] sp : ffff80000c22bc20
> [   31.599608] x29: ffff80000c22bc20 x28: ffff000285785e80 x27: 0000000000000000
> [   31.605428] x26: 0000000045585401 x25: 0000000000000000 x24: ffff0002857864e8
> [   31.611247] x23: ffff0002803ee520 x22: ffff0002803e3470 x21: ffff0002803e3400
> [   31.617066] x20: ffff0002803e3020 x19: ffff000281955080 x18: 0000000000000000
> [   31.622884] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
> [   31.628702] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
> [   31.634519] x11: 0000000000000000 x10: 0000000000000000 x9 : ffff8000088b0230
> [   31.640337] x8 : ffff80000c22bb90 x7 : 0000000000000000 x6 : 0000000000000001
> [   31.646155] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
> [   31.651973] x2 : ffff000281956800 x1 : 00000000ffffffff x0 : ffff000281955080
> [   31.657792] Call trace:
> [   31.658933]  __put_mtd_device+0x4c/0x84
> [   31.661462]  blktrans_release+0xb0/0x120
> [   31.664077]  blkdev_put+0xd4/0x210
> [   31.666175]  blkdev_close+0x34/0x50
> [   31.668355]  __fput+0x78/0x220
> [   31.670108]  ____fput+0x1c/0x30
> [   31.671943]  task_work_run+0x88/0xc0
> [   31.674217]  do_notify_resume+0x384/0x12a0
> [   31.677009]  el0_svc+0x6c/0x80
> [   31.678762]  el0t_64_sync_handler+0xa4/0x130
> [   31.681723]  el0t_64_sync+0x1a0/0x1a4
> [   31.684082] Code: b9448841 51000421 b9048841 36ffff41 (d4210000) 
> [   31.688857] ---[ end trace 9652b26ea1d7daa2 ]---
> [   31.692161] note: a.out[391] exited with preempt_count 1
> 
> The test case a.out is compiled from the code below; when it runs,
> /dev/mtd1 and /dev/mtdblock1 are opened and closed repeatedly:
> 
> #include <stdio.h>
> #include <stdlib.h>
> #include <sys/types.h>
> #include <sys/stat.h>
> #include <fcntl.h>
> #include <string.h>
> #include <signal.h>
> #include <unistd.h>
> 
> int main(int argc, char *argv[])
> {
>     pid_t pid, pid1, pid2;
>     int fd, ret = 0;
>     int status = 0;
>     char device_char[16] = "/dev/mtd";
>     char device_block[24] = "/dev/mtdblock";
> 
>     if (argc < 2) {
>         fprintf(stderr, "usage: %s <mtd index>\n", argv[0]);
>         return 1;
>     }
> 
>     strcat(device_char, argv[1]);
>     strcat(device_block, argv[1]);
> 
>     pid1 = fork();
>     if(pid1 == 0){
>         while(1){
>             fd = open(device_char, O_SYNC|O_RDWR);
>             ret = close(fd);
>         }
>     }
>     pid2 = fork();
>     if(pid2 == 0){
>         while(1){
>             fd = open(device_block, O_SYNC|O_RDWR);
>             ret = close(fd);
>         }
>     }
>     sleep(10);
>     kill(pid1, SIGKILL);
>     kill(pid2, SIGKILL);
>     pid = wait(&status);
>     pid = wait(&status);
>     return 0;
> } 
> 
>>
>>> diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
>>> index b8ae1ec14e17..147e4a11dfe4 100644
>>> --- a/drivers/mtd/mtd_blkdevs.c
>>> +++ b/drivers/mtd/mtd_blkdevs.c
>>> @@ -188,6 +188,8 @@ static int blktrans_open(struct block_device *bdev, fmode_t mode)
>>>  
>>>  	kref_get(&dev->ref);
>>>  
>>> +	if (!mutex_trylock(&mtd_table_mutex))
>>> +		return ret;
>>
>> No, that's not really the solution.
>>
>> Turning the kref_get above into a kref_get_unless_zero might be better
>> path to look into.
> 
> Thanks, I will have a look at this.

Hi Christoph,

It seems this approach cannot stop the race on decrementing/incrementing
mtd->usecount. The race is between
mtdchar_{open,close}() -> (get)put_mtd_device() -> __(get)put_mtd_device()
and blktrans_{open,release}() -> __(get)put_mtd_device(), when the same
device is operated on as a char device (/dev/mtd1) and as a block device
(/dev/mtdblock1). The original fix for this issue was commit 073db4a51ee4
("mtd: fix: avoid race condition when accessing mtd->usecount").
Could you give some suggestions about this?

Thanks,
Liwei.



> 
> Liwei.
> 
> 
>>
