Message-ID: <YrBHh5/FOUWXv3ho@kroah.com>
Date: Mon, 20 Jun 2022 12:10:15 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: Coly Li <colyli@...e.de>
Cc: Pavel Machek <pavel@....cz>, Pavel Machek <pavel@...x.de>,
Naresh Kamboju <naresh.kamboju@...aro.org>,
baijiaju1990@...il.com, oslab@...nghua.edu.cn,
Jens Axboe <axboe@...nel.dk>, linux-kernel@...r.kernel.org,
stable@...r.kernel.org, torvalds@...ux-foundation.org,
akpm@...ux-foundation.org, linux@...ck-us.net, shuah@...nel.org,
patches@...nelci.org, lkft-triage@...ts.linaro.org,
jonathanh@...dia.com, f.fainelli@...il.com,
sudipm.mukherjee@...il.com, slade@...dewatkins.com,
Daniel Latypov <dlatypov@...gle.com>,
Brendan Higgins <brendanhiggins@...gle.com>,
kunit-dev@...glegroups.com,
"open list:KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH 5.17 000/772] 5.17.14-rc1 review
On Sat, Jun 18, 2022 at 07:57:01PM +0800, Coly Li wrote:
>
>
> > > On Jun 18, 2022, at 19:37, Pavel Machek <pavel@....cz> wrote:
> >
> > Hi!
> >
> >>>> Fixes: bc082a55d25c ("bcache: fix inaccurate io state for detached
> >>> ...
> >>>
> >>>> +++ b/drivers/md/bcache/request.c
> >>>> @@ -1107,6 +1107,12 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio,
> >>>> * which would call closure_get(&dc->disk.cl)
> >>>> */
> >>>> ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO);
> >>>> + if (!ddip) {
> >>>> + bio->bi_status = BLK_STS_RESOURCE;
> >>>> + bio->bi_end_io(bio);
> >>>> + return;
> >>>> + }
> >>>> +
> >>>> ddip->d = d;
> >>>> /* Count on the bcache device */
> >>>> ddip->orig_bdev = orig_bdev;
> >>>>
> >>>
> >>> So... for the patch to make any difference, the memory allocation
> >>> has to fail and ddip has to be NULL.
> >>>
> >>> Before the patch, it would oops at "ddip->d = d;". With the patch,
> >>> you get some kind of error handling. Even if it is buggy, it should
> >>> not do more harm than an immediate oops.
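> >>>
> >>> For reference, a consolidated sketch of the allocation path with the
> >>> patch applied (pieced together from the quoted hunk; the rest of
> >>> detached_dev_do_request() is assumed and not shown in this thread):
> >>>
> >>> 	ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO);
> >>> 	if (!ddip) {
> >>> 		/* allocation failed: complete the bio with an error
> >>> 		 * status instead of dereferencing NULL below */
> >>> 		bio->bi_status = BLK_STS_RESOURCE;
> >>> 		bio->bi_end_io(bio);
> >>> 		return;
> >>> 	}
> >>>
> >>> 	/* reached only when ddip is non-NULL */
> >>> 	ddip->d = d;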
> >>
> >> I just received this single email and don't have any idea of the context or what the problem is. Where can I see the whole conversation?
> >>
> >
> > The discussion happened on the stable@...r.kernel.org mailing list;
> > archives should be easily available. A copy went to lkml, too.
>
> Hi Pavel and Greg,
>
> Thanks for the hint; I see the context now. I cannot tell the direct cause of the kfence regression, but it is worth having this patch in:
> - commit 7d6b902ea0e0 ("bcache: memset on stack variables in bch_btree_check() and bch_sectors_dirty_init()")
>
> I am not sure whether it is directly related to the kfence issue, but it corrects potentially unexpected stack state under some conditions. I hope it may help a bit.
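>
> For illustration, the pattern that commit applies is zero-initializing
> on-stack state before it is used; a minimal sketch (the struct and
> field names below are assumptions for illustration, not taken from the
> commit itself):
>
> 	struct dirty_init_state state;	/* hypothetical on-stack state */
>
> 	/* Stack memory is uninitialized; clear the whole struct,
> 	 * including any padding, before other code can observe it. */
> 	memset(&state, 0, sizeof(state));
> 	state.d = d;	/* then set only the fields actually needed */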
Added where?
confused,
greg k-h