Message-ID: <20210928063919epcms2p12ef0dfc94e6756f7bf85945522720e8f@epcms2p1>
Date:   Tue, 28 Sep 2021 15:39:19 +0900
From:   Jinyoung CHOI <j-young.choi@...sung.com>
To:     "axboe@...nel.dk" <axboe@...nel.dk>,
        "linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: [PATCH] blk-map: add error handling for bio_copy_kern()

When bio_copy_kern() allocates new pages for a bio via alloc_page(),
those pages must be freed on any error path taken afterwards.

blk_rq_append_bio() is unlikely to fail, but if it does, bio_put()
alone only drops the bio itself: the pages that were additionally
allocated for it must be released as well (a sketch of the leaking
path follows below the '---' separator).

Signed-off-by: Jinyoung Choi <j-young.choi@...sung.com>
---
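For reference, a minimal sketch of why bio_put() alone leaks on the
copy path. This paraphrases the relevant logic rather than quoting the
source verbatim; exact names can differ between kernel versions.

/*
 * bio_copy_kern() backs the bio with pages it allocates itself and
 * normally frees them in its completion handler once the request
 * finishes.
 */
static void bio_copy_kern_endio(struct bio *bio)
{
	bio_free_pages(bio);	/* free the pages the bio owns */
	bio_put(bio);		/* then drop the bio itself */
}

/*
 * If blk_rq_append_bio() fails, the bio is never submitted, so the
 * completion handler never runs.  Calling only bio_put() on that error
 * path frees the bio structure but leaks every page added through
 * alloc_page().  bio_map_kern() maps the caller's existing buffer
 * instead, so that path has no bio-owned pages to free; hence the
 * do_copy flag.
 */
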
 block/blk-map.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/block/blk-map.c b/block/blk-map.c
index 4526adde0156..584369a7837f 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -628,6 +628,7 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
        int reading = rq_data_dir(rq) == READ;
        unsigned long addr = (unsigned long) kbuf;
        struct bio *bio;
+       int do_copy = 0;
        int ret;

        if (len > (queue_max_hw_sectors(q) << 9))
@@ -635,8 +636,9 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
        if (!len || !kbuf)
                return -EINVAL;

-       if (!blk_rq_aligned(q, addr, len) || object_is_on_stack(kbuf) ||
-           blk_queue_may_bounce(q))
+       do_copy = !blk_rq_aligned(q, addr, len) || object_is_on_stack(kbuf) ||
+               blk_queue_may_bounce(q);
+       if (do_copy)
                bio = bio_copy_kern(q, kbuf, len, gfp_mask, reading);
        else
                bio = bio_map_kern(q, kbuf, len, gfp_mask);
@@ -648,8 +650,11 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
        bio->bi_opf |= req_op(rq);

        ret = blk_rq_append_bio(rq, bio);
-       if (unlikely(ret))
+       if (unlikely(ret)) {
+               if (do_copy)
+                       bio_free_pages(bio);
                bio_put(bio);
+       }
        return ret;
 }
 EXPORT_SYMBOL(blk_rq_map_kern);
--
2.25.1
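
For context, a hypothetical caller sketch showing how the fixed path
is reached. The buffer and queue names are illustrative, and
blk_get_request()/blk_put_request() are the request allocation API of
this kernel generation:

/*
 * A driver issuing an internal command with a kernel buffer.  If 'buf'
 * is unaligned, lives on the stack, or the queue may bounce,
 * blk_rq_map_kern() takes the bio_copy_kern() path whose error
 * handling this patch fixes.
 */
struct request *rq;
int ret;

rq = blk_get_request(q, REQ_OP_DRV_IN, 0);
if (IS_ERR(rq))
	return PTR_ERR(rq);

ret = blk_rq_map_kern(q, rq, buf, len, GFP_KERNEL);
if (ret) {
	/* with this patch, the copy-path pages are already freed here */
	blk_put_request(rq);
	return ret;
}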
