Message-ID: <20141120203044.GA9078@redhat.com>
Date:	Thu, 20 Nov 2014 22:30:44 +0200
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Mike Snitzer <snitzer@...hat.com>
Cc:	axboe@...nel.dk, linux-kernel@...r.kernel.org,
	martin.petersen@...cle.com, hch@...radead.org,
	rusty@...tcorp.com.au, dm-devel@...hat.com
Subject: Re: [PATCH] virtio_blk: fix defaults for max_hw_sectors and
 max_segment_size

On Thu, Nov 20, 2014 at 02:00:59PM -0500, Mike Snitzer wrote:
> virtio_blk incorrectly established -1U as the default for these
> queue_limits.  Set these limits to sane default values to avoid crashing
> the kernel.  But the virtio-blk protocol should probably be extended to
> allow proper stacking of the disk's limits from the host.
> 
> This change fixes a crash that was reported when virtio-blk was used to
> test linux-dm.git commit 604ea90641b4 ("dm thin: adjust max_sectors_kb
> based on thinp blocksize"), which initially sets max_sectors to
> max_hw_sectors and then rounds it down to the first power-of-2 factor of
> the DM thin-pool's blocksize.  Basically that commit assumes drivers
> don't suck when establishing max_hw_sectors, so it acted like a canary
> in the coal mine.
> 
> In the case of a DM thin-pool built on top of a virtio-blk data device,
> these are the insane limits that were established for the DM thin-pool:
> 
>   # cat /sys/block/dm-6/queue/max_sectors_kb
>   1073741824
>   # cat /sys/block/dm-6/queue/max_hw_sectors_kb
>   2147483647
> 
> by stacking the virtio-blk device's limits:
> 
>   # cat /sys/block/vdb/queue/max_sectors_kb
>   512
>   # cat /sys/block/vdb/queue/max_hw_sectors_kb
>   2147483647
> 
> Attempting to mkfs.xfs against a thin device from this thin-pool quickly
> resulted in fs/direct-io.c:dio_send_cur_page()'s BUG_ON.
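
(For reference: -1U is 4294967295 512-byte sectors, which is where the
vdb max_hw_sectors_kb of 2147483647 above comes from: 4294967295 / 2, in
kB.  The dm rounddown then apparently took that to 2^30 = 1073741824 kB.)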

Why exactly does it BUG_ON?
Did some memory allocation fail?

Will it still BUG_ON if the host gives us high values?

If Linux makes assumptions about hardware limits, wouldn't it
be better to put them in the block core rather than in
individual drivers?
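
For example (a completely untested sketch; BLK_CORE_MAX_HW_SECTORS is a
made-up name, not an existing macro), the clamp could live in
block/blk-settings.c instead of in each driver:

	/* hypothetical: have blk core clamp insane driver-supplied values */
	void blk_queue_max_hw_sectors(struct request_queue *q,
				      unsigned int max_hw_sectors)
	{
		if (max_hw_sectors > BLK_CORE_MAX_HW_SECTORS) {
			pr_warn("%s: clamping max_hw_sectors %u to %u\n",
				__func__, max_hw_sectors,
				BLK_CORE_MAX_HW_SECTORS);
			max_hw_sectors = BLK_CORE_MAX_HW_SECTORS;
		}
		q->limits.max_hw_sectors = max_hw_sectors;
		q->limits.max_sectors = min_t(unsigned int, max_hw_sectors,
					      BLK_DEF_MAX_SECTORS);
	}

Then every driver passing in -1U would get the same sane behaviour
instead of each one growing its own default.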


> Signed-off-by: Mike Snitzer <snitzer@...hat.com>
> Cc: stable@...r.kernel.org
> ---
>  drivers/block/virtio_blk.c |    9 ++++++---
>  1 files changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index c6a27d5..68efbdc 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -674,8 +674,11 @@ static int virtblk_probe(struct virtio_device *vdev)
>  	/* No need to bounce any requests */
>  	blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);
>  
> -	/* No real sector limit. */
> -	blk_queue_max_hw_sectors(q, -1U);
> +	/*
> +	 * Limited by disk's max_hw_sectors in host, but
> +	 * without that info establish a sane default.
> +	 */
> +	blk_queue_max_hw_sectors(q, BLK_DEF_MAX_SECTORS);

I see
drivers/usb/storage/scsiglue.c: blk_queue_max_hw_sectors(sdev->request_queue, 0x7FFFFF);

so maybe we should go higher, perhaps even use INT_MAX?
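
I.e. something like (untested):

	/* effectively no limit: INT_MAX 512-byte sectors is ~1TB per request */
	blk_queue_max_hw_sectors(q, INT_MAX);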


>  
>  	/* Host can optionally specify maximum segment size and number of
>  	 * segments. */
> @@ -684,7 +687,7 @@ static int virtblk_probe(struct virtio_device *vdev)
>  	if (!err)
>  		blk_queue_max_segment_size(q, v);
>  	else
> -		blk_queue_max_segment_size(q, -1U);
> +		blk_queue_max_segment_size(q, BLK_MAX_SEGMENT_SIZE);
>  
>  	/* Host can optionally specify the block size of the device */
>  	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_BLK_SIZE,

Here too, I see some drivers asking for more:
drivers/block/mtip32xx/mtip32xx.c: blk_queue_max_segment_size(dd->queue, 0x400000);




> -- 
> 1.7.4.4
