Message-ID: <x49zj2qwod1.fsf@segfault.boston.devel.redhat.com>
Date: Mon, 20 Jul 2015 16:44:26 -0400
From: Jeff Moyer <jmoyer@...hat.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: Christoph Hellwig <hch@...radead.org>,
linux-kernel@...r.kernel.org, dmilburn@...hat.com
Subject: Re: [patch] Revert "block: remove artifical max_hw_sectors cap"

Jens Axboe <axboe@...nel.dk> writes:

> On 07/20/2015 01:17 PM, Jeff Moyer wrote:
>>
>> <resent with Jens' email address fixed>
>>
>> Hi,
>>
>> This reverts commit 34b48db66e08, which caused significant iozone
>> performance regressions and uncovered a silent data corruption
>> bug in at least one disk.
>>
>> For SAN storage, we've seen initial write and re-write performance drop
>> 25-50% across all I/O sizes. On locally attached storage, we've seen
>> regressions of 40% for all I/O types, but only for I/O sizes larger than
>> 1MB.
>
> Do we have any understanding of where this regression is coming from?
> Even just basic info like iostats from a run would be useful.

I'll request this information and get back to you. Sorry, I should have
done more digging first, but this seemed somewhat urgent to me.

>> In addition to the performance issues, we've also seen data corruption
>> on one disk/hba combination. See
>> http://marc.info/?l=linux-ide&m=143680539400526&w=2
>
> That's just sucky hardware... That said, it is indeed one of the
> risks. We had basically the same transition from 255 as max sectors,
> since we depended on ATA treating 0 == 256 sectors (as per spec).
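
For anyone following along on the list: in the 28-bit ATA commands the
sector-count field is 8 bits wide and a count of 0 means 256 sectors,
which is the convention Jens is referring to. A toy userspace decoder
(illustrative only, not libata code) looks like:

#include <stdio.h>

/* toy decoder for the 28-bit ATA sector-count convention */
static unsigned int ata_28bit_nsect(unsigned char count)
{
	/* a count of 0 means 256 sectors, per the spec */
	return count ? count : 256;
}

int main(void)
{
	printf("count=1   -> %u sectors\n", ata_28bit_nsect(1));
	printf("count=255 -> %u sectors\n", ata_28bit_nsect(255));
	printf("count=0   -> %u sectors\n", ata_28bit_nsect(0));
	return 0;
}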

Sure, the hardware sucks. I still don't like foisting silent data
corruption on users. Besides, given that this patch went in without any
performance numbers attached, I'd say the risk/reward ratio right now is
in favor of the revert.
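
For reference, here's a toy sketch of the behaviour the revert restores
(plain userspace C, not the actual block layer code; the 1024-sector
value is my recollection of BLK_DEF_MAX_SECTORS at the time, so treat it
as an assumption):

#include <stdio.h>

/* assumed default cap: 1024 sectors, i.e. 512KB with 512-byte sectors */
#define BLK_DEF_MAX_SECTORS	1024

/* pre-34b48db66e08 (and post-revert) behaviour: clamp to the default */
static unsigned int max_sectors_capped(unsigned int max_hw_sectors)
{
	return max_hw_sectors < BLK_DEF_MAX_SECTORS ?
		max_hw_sectors : BLK_DEF_MAX_SECTORS;
}

/* behaviour introduced by 34b48db66e08: expose the hardware limit */
static unsigned int max_sectors_uncapped(unsigned int max_hw_sectors)
{
	return max_hw_sectors;
}

int main(void)
{
	/* hypothetical HBA advertising 32767 sectors (~16MB) */
	unsigned int hw = 32767;

	printf("capped:   %u sectors\n", max_sectors_capped(hw));
	printf("uncapped: %u sectors\n", max_sectors_uncapped(hw));
	return 0;
}

The numbers themselves don't matter; the point is just that with the
revert, max_sectors goes back to being clamped at the default instead of
tracking max_hw_sectors directly.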

Cheers,
Jeff