Message-ID: <11ef6a06-4b6f-44d0-af79-f96e16456b55@huawei.com>
Date:   Wed, 18 Aug 2021 10:02:09 +0800
From:   "yukuai (C)" <yukuai3@...wei.com>
To:     Ming Lei <ming.lei@...hat.com>
CC:     <axboe@...nel.dk>, <bvanassche@....org>,
        <linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
        <yi.zhang@...wei.com>, Keith Busch <kbusch@...nel.org>
Subject: Re: [PATCH RFC] blk_mq: clear rq mapping in driver tags before
 freeing rqs in sched tags

On 2021/08/18 8:52, Ming Lei wrote:
> On Tue, Aug 17, 2021 at 10:23:06AM +0800, Yu Kuai wrote:
>> If the I/O scheduler is not none, hctx->tags->rq[tag] will point to
>> hctx->sched_tags->static_rq[internal_tag] in blk_mq_get_driver_tag().
>> However, the static_rq of sched_tags might be freed by switching the
>> elevator or increasing nr_requests. This leaves a window for some
>> drivers to get the freed request through blk_mq_tag_to_rq(tags, tag).
> 
> I believe I have explained that it is a bug of the driver, which has
> knowledge of whether the passed tag is valid or not. We are clear that the
> driver needs to cover the race between normal completion and timeout/error
> handling.
> 
>>
>> It's difficult to fix this UAF from the driver side, I'm thinking about
> 
> So far I have not seen any analysis of why the UAF is triggered, care to
> investigate the reason?

Hi, Ming

I'm sorry if I didn't explain the UAF clearly.

1) At first, a normal I/O is submitted and completed with an I/O scheduler:

internal_tag = blk_mq_get_tag -> get tag from sched_tags
  blk_mq_rq_ctx_init
   sched_tags->rq[internal_tag] = sched_tags->static_rq[internal_tag]
...
blk_mq_get_driver_tag
  __blk_mq_get_driver_tag -> get tag from tags
  tags->rq[tag] = sched_tags->static_rq[internal_tag]

So both tags->rq[tag] and sched_tags->rq[internal_tag] point to the same
request: sched_tags->static_rq[internal_tag].

2) Then, if sched_tags->static_rq is freed:

blk_mq_sched_free_requests
  blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i)
   blk_mq_clear_rq_mapping(set, tags, hctx_idx);
    -> sched_tags->rq[internal_tag] is set to NULL here

After switching the elevator, tags->rq[tag] still points to the request
that was just freed.

3) The nbd server sends a reply with a random tag directly:

recv_work
  nbd_read_stat
   blk_mq_tag_to_rq(tags, tag)
    rq = tags->rq[tag] -> rq has already been freed

Usually, nbd will get a tag and send a request to the server first, and
then handle the reply. However, if that is skipped and a reply arrives
for a tag that was never sent out, such a UAF problem can be triggered.
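
To make the window concrete, below is a stand-alone toy model of 1)-3) in
plain user-space C (all names like toy_tags/toy_tag_to_rq are made up for
the illustration, it is of course not the real blk-mq code). As far as I
can see, blk_mq_tag_to_rq() only checks that the tag is in range before
returning tags->rq[tag], so the lookup itself has no way to notice the
stale pointer:

/* toy_uaf.c: user-space model of the window described above */
#include <stdio.h>
#include <stdlib.h>

struct toy_request {
	unsigned int tag;
	unsigned int internal_tag;
};

struct toy_tags {
	struct toy_request **rq;        /* like tags->rq[] / sched_tags->rq[] */
	struct toy_request **static_rq; /* like sched_tags->static_rq[] */
	unsigned int nr_tags;
};

/* like blk_mq_tag_to_rq(): bounds check only, no liveness check */
static struct toy_request *toy_tag_to_rq(struct toy_tags *tags,
					 unsigned int tag)
{
	if (tag < tags->nr_tags)
		return tags->rq[tag];
	return NULL;
}

int main(void)
{
	unsigned int tag = 0, internal_tag = 0;
	struct toy_tags tags = { .nr_tags = 1 };
	struct toy_tags sched_tags = { .nr_tags = 1 };

	tags.rq = calloc(tags.nr_tags, sizeof(*tags.rq));
	sched_tags.rq = calloc(sched_tags.nr_tags, sizeof(*sched_tags.rq));
	sched_tags.static_rq = calloc(sched_tags.nr_tags,
				      sizeof(*sched_tags.static_rq));
	sched_tags.static_rq[internal_tag] = calloc(1, sizeof(struct toy_request));

	/* 1) allocation + driver tag: both mappings alias the static request */
	sched_tags.rq[internal_tag] = sched_tags.static_rq[internal_tag];
	tags.rq[tag] = sched_tags.static_rq[internal_tag];

	/* 2) elevator switch: static_rq is freed, only the sched side is cleared */
	free(sched_tags.static_rq[internal_tag]);
	sched_tags.rq[internal_tag] = NULL;

	/* 3) a lookup by driver tag (like nbd_read_stat) returns the freed request */
	struct toy_request *rq = toy_tag_to_rq(&tags, tag);
	printf("stale rq = %p, dereferencing it would be a use-after-free\n",
	       (void *)rq);
	return 0;
}

In the real path the dereference then happens inside nbd_read_stat(),
which is where the use-after-free actually hits.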

> 
> The request reference has been cleared too in blk_mq_tag_update_depth():
> 
> 	blk_mq_tag_update_depth
> 		blk_mq_free_rqs
> 			blk_mq_clear_rq_mapping
> 

What I'm trying to do is to clear the rq mapping in both
hctx->sched_tags->rq and hctx->tags->rq when sched_tags->static_rq
is freed. However, I forgot about the case where tags are shared among
multiple devices. Thus what this patch does is clearly wrong...

So, what do you think about adding a new interface that iterates over the
requests in tags->rq[], finds the entries that point into
sched_tags->static_rq[], and uses cmpxchg() to clear them?
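
Something along these lines is what I have in mind, as a rough sketch
only: the function name and placement are made up, it is completely
untested, and rqs[]/static_rqs[] below are what I have been writing as
rq[]/static_rq[] above:

/*
 * Rough sketch, not a real patch: before sched_tags->static_rqs[] is
 * freed, walk the driver tags and clear any entry that still points at
 * one of the requests being freed.  cmpxchg() is used so that a
 * concurrent blk_mq_get_driver_tag() storing a new, valid request is
 * not overwritten with NULL.
 */
static void blk_mq_clear_sched_rq_mapping(struct blk_mq_tags *drv_tags,
					  struct blk_mq_tags *sched_tags)
{
	unsigned int i, j;

	for (i = 0; i < drv_tags->nr_tags; i++) {
		struct request *rq = drv_tags->rqs[i];

		if (!rq)
			continue;

		for (j = 0; j < sched_tags->nr_tags; j++) {
			if (rq == sched_tags->static_rqs[j]) {
				cmpxchg(&drv_tags->rqs[i], rq, NULL);
				break;
			}
		}
	}
}

The nested loop is obviously the dumb version; comparing rq against the
address range of the static_rq pages would avoid it, but it shows the
idea. Since only entries that point into this queue's static_rq[] are
touched, tags->rq[] being shared with other devices should not be a
problem.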

Thanks
Kuai
