Message-ID: <545BAD05.3050800@windriver.com>
Date: Thu, 6 Nov 2014 11:16:53 -0600
From: Chris Friesen <chris.friesen@...driver.com>
To: Jens Axboe <axboe@...nel.dk>, lkml <linux-kernel@...r.kernel.org>,
<linux-scsi@...r.kernel.org>, Mike Snitzer <snitzer@...hat.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>
Subject: Re: absurdly high "optimal_io_size" on Seagate SAS disk
On 11/06/2014 10:47 AM, Chris Friesen wrote:
> Hi,
>
> I'm running a modified 3.4-stable on relatively recent X86 server-class
> hardware.
>
> I recently installed a Seagate ST900MM0026 (900GB 2.5in 10K SAS drive)
> and it's reporting a value of 4294966784 for optimal_io_size. The other
> parameters look normal though:
>
> /sys/block/sda/queue/hw_sector_size:512
> /sys/block/sda/queue/logical_block_size:512
> /sys/block/sda/queue/max_segment_size:65536
> /sys/block/sda/queue/minimum_io_size:512
> /sys/block/sda/queue/optimal_io_size:4294966784
<snip>
> According to the manual, the ST900MM0026 has a 512 byte physical sector
> size.
>
> Is this a drive firmware bug? Or a bug in the SAS driver? Or is there
> a valid reason for a single drive to report such a huge value?
>
> Would it make sense for the kernel to do some sort of sanity checking on
> this value?
Looks like this sort of thing has been seen before in other drives (one
of which is from the same family as mine):
http://www.spinics.net/lists/linux-scsi/msg65292.html
http://iamlinux.technoyard.in/blog/why-is-my-ssd-disk-not-reconized-by-the-rhel6-anaconda-installer/
Perhaps the ST900MM0026 should be blacklisted as well?
Or maybe the SCSI code should do a variation on Mike Snitzer's original
patch and just ignore any values above some reasonable threshold? (And
then we could remove the blacklist on the ST900MM0006.)
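Something along the lines of the sketch below is what I have in mind.
(The function name and the 1 GiB threshold are made up here just to
illustrate the idea; the real patch would live in the sd/block-settings
path before the value reaches blk_queue_io_opt().)

```c
#include <stdint.h>

/* Hypothetical cap: a single drive claiming an optimal I/O size
 * above 1 GiB is almost certainly reporting garbage. Threshold
 * picked arbitrarily for illustration. */
#define IO_OPT_SANE_MAX (1024ULL * 1024 * 1024)

/* Illustrative stand-in for a sanity check the SCSI code could do:
 * ignore insane values and fall back to 0, meaning "no optimal
 * I/O size reported". */
static uint64_t sanitize_io_opt(uint64_t io_opt,
				uint32_t logical_block_size)
{
	if (io_opt > IO_OPT_SANE_MAX)
		return 0;
	/* An optimal size that isn't a multiple of the logical
	 * block size is nonsense too. */
	if (logical_block_size && (io_opt % logical_block_size))
		return 0;
	return io_opt;
}
```

With something like that, the ST900MM0026's 4294966784 would simply be
dropped instead of confusing partitioning tools, while drives reporting
sensible values (e.g. 1 MiB) would be untouched -- and the per-model
blacklist entries could go away.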
Chris
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/