Message-ID: <a29dcfa5-bb6a-d3b9-9fb7-cce87a8f8620@huaweicloud.com>
Date: Thu, 18 May 2023 20:16:47 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Yu Kuai <yukuai1@...weicloud.com>,
Pradeep P V K <quic_pragalla@...cinc.com>, axboe@...nel.dk
Cc: linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
"yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH V1] block: Fix null pointer dereference issue on struct
io_cq
Hi,
On 2023/05/17 16:58, Yu Kuai wrote:
> Hi,
>
> On 2023/05/17 16:44, Pradeep P V K wrote:
>> There is a potential race between ioc_release_fn() and
>> exit_io_context(), as shown below, due to which the crash
>> below is observed. It can also result in a use-after-free
>> issue.
>>
>> context#1:                           context#2:
>> ioc_release_fn()                     do_exit();
>> ->spin_lock(&ioc->lock);             ->exit_io_context();
>> ->ioc_destroy_icq(icq);              ->ioc_exit_icqs();
>> ->list_del_init(&icq->q_node);       ->spin_lock_irq(&ioc->lock);
>> ->call_rcu(&icq->__rcu_head,
>>            icq_free_icq_rcu);
>> ->spin_unlock(&ioc->lock);
I think the above concurrent scenario is not possible either:
exit_io_context() doesn't release the ioc refcount before ioc_exit_icqs()
is done, so ioc_release_fn() can never run concurrently with
ioc_exit_icqs().
do_exit
  exit_io_context
    ioc_exit_icqs
    put_io_context -> ioc_release_fn won't be called before this
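
For reference, this is roughly what exit_io_context() looks like in
block/blk-ioc.c as I read it (paraphrased, the exact tree you are on may
differ slightly):

void exit_io_context(struct task_struct *task)
{
        struct io_context *ioc;

        task_lock(task);
        ioc = task->io_context;
        task->io_context = NULL;
        task_unlock(task);

        if (atomic_dec_and_test(&ioc->active_ref)) {
                /* icqs are exited first ... */
                ioc_exit_icqs(ioc);
                /*
                 * ... and only then is the reference dropped, which is
                 * what can end up scheduling ioc_release_fn() through
                 * the release work.
                 */
                put_io_context(ioc);
        }
}
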
>>                                      ->ioc_exit_icq(); gets the same icq
> I don't understand how this is possible: the list is protected by
> 'ioc->lock', and both hlist_del_init() and hlist_for_each_entry() are
> called inside the lock.
>
> Thanks,
> Kuai
>>                                      ->bfq_exit_icq();
>> This results in the below crash, as bic (which is derived from the
>> icq) is NULL. There is a chance that the icq could be freed as well.
>>
>> [33.245722][ T8666] Unable to handle kernel NULL pointer dereference
>> at virtual address 0000000000000018.
>> ...
>> Call trace:
>> [33.325782][ T8666] bfq_exit_icq+0x28/0xa8
>> [33.325785][ T8666] exit_io_context+0xcc/0x100
>> [33.325786][ T8666] do_exit+0x764/0xa58
>> [33.325791][ T8666] do_group_exit+0x0/0xa0
>> [33.325793][ T8666] invoke_syscall+0x48/0x114
>> [33.325802][ T8666] el0_svc_common+0xcc/0x118
>> [33.325805][ T8666] do_el0_svc+0x34/0xd0
>> [33.325807][ T8666] el0_svc+0x38/0xd0
>> [33.325812][ T8666] el0t_64_sync_handler+0x8c/0xfc
>> [33.325813][ T8666] el0t_64_sync+0x1a0/0x1a4
>>
>> Fix this by checking the ICQ_DESTROYED flag in ioc_exit_icqs().
>> Also, ensure ioc_exit_icq() accesses the icq within rcu_read_lock/unlock
>> so that the icq doesn't get freed while it is still being used.
>>
>> Signed-off-by: Pradeep P V K <quic_pragalla@...cinc.com>
>> ---
>> block/blk-ioc.c | 8 ++++++--
>> 1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/block/blk-ioc.c b/block/blk-ioc.c
>> index 63fc02042408..1aa34fd46ac8 100644
>> --- a/block/blk-ioc.c
>> +++ b/block/blk-ioc.c
>> @@ -60,10 +60,14 @@ static void ioc_exit_icqs(struct io_context *ioc)
>>  {
>>          struct io_cq *icq;
>>
>> +        rcu_read_lock();
>>          spin_lock_irq(&ioc->lock);
>> -        hlist_for_each_entry(icq, &ioc->icq_list, ioc_node)
>> -                ioc_exit_icq(icq);
>> +        hlist_for_each_entry(icq, &ioc->icq_list, ioc_node) {
>> +                if (!(icq->flags & ICQ_DESTROYED))
By the way, the above change doesn't make sense to me either:
ioc_exit_icq() is called before ICQ_DESTROYED is set, hence if
ICQ_DESTROYED is set then ICQ_EXITED is set as well, and in that case
ioc_exit_icq() won't do anything (see the excerpt after the quoted hunk
below).
>> +                        ioc_exit_icq(icq);
>> +        }
>>          spin_unlock_irq(&ioc->lock);
>> +        rcu_read_unlock();
>>  }
>> /*
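
For reference, these are roughly the relevant helpers in block/blk-ioc.c
as I read them (paraphrased and abridged, the exact tree may differ
slightly):

static void ioc_exit_icq(struct io_cq *icq)
{
        struct elevator_type *et = icq->q->elevator->type;

        /* a second call is a no-op */
        if (icq->flags & ICQ_EXITED)
                return;

        if (et->ops.exit_icq)
                et->ops.exit_icq(icq);

        icq->flags |= ICQ_EXITED;
}

static void ioc_destroy_icq(struct io_cq *icq)
{
        /* ... unlink the icq from the radix tree and the lists ... */

        ioc_exit_icq(icq);              /* ICQ_EXITED is set here ... */

        icq->flags |= ICQ_DESTROYED;    /* ... before ICQ_DESTROYED */
        call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
}

So by the time ICQ_DESTROYED can be observed, the exit work has already
been done and skipping ioc_exit_icq() changes nothing.
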
I think I did find a problem, but I'm not sure it's the same one as in
your case. Can you try the following patch?
diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index 63fc02042408..37a56f2bb040 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -78,6 +78,9 @@ static void ioc_destroy_icq(struct io_cq *icq)
         lockdep_assert_held(&ioc->lock);
 
+        if (icq->flags & ICQ_DESTROYED)
+                return;
+
         radix_tree_delete(&ioc->icq_tree, icq->q->id);
         hlist_del_init(&icq->ioc_node);
         list_del_init(&icq->q_node);
@@ -128,12 +131,7 @@ static void ioc_release_fn(struct work_struct *work)
                         spin_lock(&q->queue_lock);
                         spin_lock(&ioc->lock);
 
-                        /*
-                         * The icq may have been destroyed when the ioc lock
-                         * was released.
-                         */
-                        if (!(icq->flags & ICQ_DESTROYED))
-                                ioc_destroy_icq(icq);
+                        ioc_destroy_icq(icq);
 
                         spin_unlock(&q->queue_lock);
                         rcu_read_unlock();
@@ -173,18 +171,17 @@ void ioc_clear_queue(struct request_queue *q)
 {
         LIST_HEAD(icq_list);
 
+        rcu_read_lock();
         spin_lock_irq(&q->queue_lock);
         list_splice_init(&q->icq_list, &icq_list);
         spin_unlock_irq(&q->queue_lock);
 
-        rcu_read_lock();
         while (!list_empty(&icq_list)) {
                 struct io_cq *icq =
                         list_entry(icq_list.next, struct io_cq, q_node);
 
                 spin_lock_irq(&icq->ioc->lock);
-                if (!(icq->flags & ICQ_DESTROYED))
-                        ioc_destroy_icq(icq);
+                ioc_destroy_icq(icq);
                 spin_unlock_irq(&icq->ioc->lock);
         }
         rcu_read_unlock();
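
For clarity, with the above applied ioc_clear_queue() ends up looking
roughly like this (reconstructed from the hunk, so treat it as a sketch):

void ioc_clear_queue(struct request_queue *q)
{
        LIST_HEAD(icq_list);

        rcu_read_lock();
        spin_lock_irq(&q->queue_lock);
        list_splice_init(&q->icq_list, &icq_list);
        spin_unlock_irq(&q->queue_lock);

        while (!list_empty(&icq_list)) {
                struct io_cq *icq =
                        list_entry(icq_list.next, struct io_cq, q_node);

                spin_lock_irq(&icq->ioc->lock);
                ioc_destroy_icq(icq);
                spin_unlock_irq(&icq->ioc->lock);
        }
        rcu_read_unlock();
}

i.e. rcu_read_lock() is now taken before the icqs are spliced off
q->icq_list, and the ICQ_DESTROYED check lives inside ioc_destroy_icq()
itself, so an icq that was already destroyed is simply skipped.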