Message-ID: <55B8E6D6.7090809@kernel.dk>
Date: Wed, 29 Jul 2015 08:44:38 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Jeff Moyer <jmoyer@...hat.com>, Ming Lei <ming.lei@...onical.com>,
Sam Bradshaw <sbradshaw@...ron.com>,
Asai Thambi SP <asamymuthupa@...ron.com>
CC: linux-kernel@...r.kernel.org, dmilburn@...hat.com
Subject: Re: [patch|rfc] mtip32x: fix regression introduced by blk-mq per-hctx
flush
On 07/29/2015 08:22 AM, Jeff Moyer wrote:
> Hi,
>
> After commit f70ced091707 (blk-mq: support per-distpatch_queue flush
> machinery), the mtip32xx driver may oops upon module load due to walking
> off the end of an array in mtip_init_cmd. On initialization of the
> flush_rq, init_request is called with request_index >= the maximum queue
> depth the driver supports. For mtip32xx, this value is used to index
> into an array. As a result, the driver walks off the end of the
> array and either oopses or causes random memory corruption.
>
> The problem is easily reproduced by doing modprobe/rmmod of the mtip32xx
> driver in a loop. I can typically reproduce the problem in about 30
> seconds.
>
> Now, in the case of mtip32xx, it actually doesn't support flush/fua, so
> I think we can simply return without doing anything. In addition, no
> other mq-enabled driver does anything with the request_index passed into
> init_request(), so no other driver is affected. However, I'm not really
> sure what is expected of drivers. Ming, what did you envision drivers
> would do when initializing the flush requests?
This is really a bug in the core; we should not have to work around it
in the driver. I'll take a look at this.
--
Jens Axboe