Message-Id: <20170523200838.716784926@linuxfoundation.org>
Date: Tue, 23 May 2017 22:08:51 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Lars Ellenberg <lars.ellenberg@...bit.com>,
Jens Axboe <axboe@...com>
Subject: [PATCH 4.11 170/197] drbd: fix request leak introduced by locking/atomic, kref: Kill kref_sub()
4.11-stable review patch. If anyone has any objections, please let me know.
------------------
From: Lars Ellenberg <lars.ellenberg@...bit.com>
commit a00ebd1cf12c378a1d4f7a1d6daf1d76c1eaad82 upstream.
When killing kref_sub(), the unconditional additional kref_get()
was not properly paired with the necessary kref_put(), causing
a leak of one struct drbd_request (~224 bytes) per submitted bio,
and breaking DRBD in general, as the destructor of those "drbd_requests"
does more than just the mempool_free().
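
To make the mispairing concrete, here is a minimal, self-contained
userspace sketch -- not DRBD code; a C11-atomics stand-in plays the
role of struct kref, and obj_get()/obj_put() are hypothetical names --
showing how an unconditional extra get whose matching put happens only
conditionally leaks the object and skips its destructor:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct object {
	atomic_int refcount;	/* stand-in for struct kref */
};

static void obj_get(struct object *obj)
{
	atomic_fetch_add(&obj->refcount, 1);
}

static void obj_put(struct object *obj)
{
	/* the last put runs the destructor, like kref_put() */
	if (atomic_fetch_sub(&obj->refcount, 1) == 1) {
		printf("destructor: does more than freeing memory\n");
		free(obj);
	}
}

int main(void)
{
	struct object *obj = malloc(sizeof(*obj));
	atomic_init(&obj->refcount, 1);

	obj_get(obj);			/* unconditional extra reference */

	int completion_done = 0;	/* e.g. completion_ref not yet zero */
	if (completion_done)
		obj_put(obj);		/* matching put only on some paths */

	obj_put(obj);			/* drops the original reference only */
	/* refcount is still 1 here: the object leaks and its
	 * destructor never runs -- one leaked request per bio */
	return 0;
}
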
Fixes: bdfafc4ffdd2 ("locking/atomic, kref: Kill kref_sub()")
Signed-off-by: Lars Ellenberg <lars.ellenberg@...bit.com>
Signed-off-by: Jens Axboe <axboe@...com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 drivers/block/drbd/drbd_req.c |   27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

--- a/drivers/block/drbd/drbd_req.c
+++ b/drivers/block/drbd/drbd_req.c
@@ -314,24 +314,32 @@ void drbd_req_complete(struct drbd_reque
 }
 
 /* still holds resource->req_lock */
-static int drbd_req_put_completion_ref(struct drbd_request *req, struct bio_and_error *m, int put)
+static void drbd_req_put_completion_ref(struct drbd_request *req, struct bio_and_error *m, int put)
 {
 	struct drbd_device *device = req->device;
 	D_ASSERT(device, m || (req->rq_state & RQ_POSTPONED));
 
+	if (!put)
+		return;
+
 	if (!atomic_sub_and_test(put, &req->completion_ref))
-		return 0;
+		return;
 
 	drbd_req_complete(req, m);
 
+	/* local completion may still come in later,
+	 * we need to keep the req object around. */
+	if (req->rq_state & RQ_LOCAL_ABORTED)
+		return;
+
 	if (req->rq_state & RQ_POSTPONED) {
 		/* don't destroy the req object just yet,
 		 * but queue it for retry */
 		drbd_restart_request(req);
-		return 0;
+		return;
 	}
 
-	return 1;
+	kref_put(&req->kref, drbd_req_destroy);
 }
 
 static void set_if_null_req_next(struct drbd_peer_device *peer_device, struct drbd_request *req)
@@ -518,12 +526,8 @@ static void mod_rq_state(struct drbd_req
 	if (req->i.waiting)
 		wake_up(&device->misc_wait);
 
-	if (c_put) {
-		if (drbd_req_put_completion_ref(req, m, c_put))
-			kref_put(&req->kref, drbd_req_destroy);
-	} else {
-		kref_put(&req->kref, drbd_req_destroy);
-	}
+	drbd_req_put_completion_ref(req, m, c_put);
+	kref_put(&req->kref, drbd_req_destroy);
 }
 
 static void drbd_report_io_error(struct drbd_device *device, struct drbd_request *req)
@@ -1363,8 +1367,7 @@ nodata:
 	}
 
 out:
-	if (drbd_req_put_completion_ref(req, &m, 1))
-		kref_put(&req->kref, drbd_req_destroy);
+	drbd_req_put_completion_ref(req, &m, 1);
 	spin_unlock_irq(&resource->req_lock);
 
 	/* Even though above is a kref_put(), this is safe.
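
The fix moves the kref_put() into drbd_req_put_completion_ref()
itself, so every caller is left with exactly one unconditional put for
its own reference. A self-contained sketch of that ownership rule --
again with hypothetical names and a C11-atomics stand-in, a deliberate
simplification of the real state handling, not the kernel kref API:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct object {
	atomic_int refcount;	/* stand-in for struct kref */
};

static void obj_put(struct object *obj)
{
	/* the last put runs the destructor, like kref_put() */
	if (atomic_fetch_sub(&obj->refcount, 1) == 1) {
		printf("destructor runs exactly once\n");
		free(obj);
	}
}

/* Mirrors the reworked drbd_req_put_completion_ref(): it returns
 * void and consumes the extra reference itself, except on paths
 * (like the POSTPONED retry) that deliberately keep the object. */
static void put_completion_ref(struct object *obj, int postponed)
{
	if (postponed)
		return;		/* keep the ref; the object is requeued */
	obj_put(obj);		/* otherwise drop it here, on every path */
}

int main(void)
{
	struct object *obj = malloc(sizeof(*obj));
	atomic_init(&obj->refcount, 1);

	atomic_fetch_add(&obj->refcount, 1);	/* unconditional extra get */
	put_completion_ref(obj, 0);		/* helper consumes it */
	obj_put(obj);				/* caller's single, unconditional put */
	return 0;
}

Folding the put into the helper removes exactly the caller-side
conditional that the kref_sub() removal had mistranslated.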