Message-ID: <66b4c377-1b17-1972-847e-207620cc9364@mailbox.org>
Date: Sun, 19 Sep 2021 16:27:22 +0000
From: Tor Vic <torvic9@...lbox.org>
To: Hans de Goede <hdegoede@...hat.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>
Cc: Kate Hsuan <hpa@...hat.com>, Jens Axboe <axboe@...nel.dk>,
Damien Le Moal <damien.lemoal@....com>,
linux-ide@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5] libata: Add ATA_HORKAGE_NO_NCQ_ON_AMD for Samsung 860
and 870 SSD.
On 19.09.21 15:27, Hans de Goede wrote:
> Hi Tor,
>
> On 9/19/21 4:24 PM, Tor Vic wrote:
>> Hi,
>>
>> I saw that v2 (?) of this patch has made it into stable, which
>> is quite reasonable given the number of bug reports.
>> Are there any plans to "enhance" this patch once sufficient data
>> on controller support/drive combinations has been collected?
>
> ATM there are no plans to limit these quirks; we have bug
> reports of queued trims being an issue across all the usual
> vendors of SATA controllers (including more recent AMD models).
>
> Note that unless you have immediate "discard" enabled as an option
> on all layers of your storage stack (dmcrypt, device-mapper/raid,
> filesystem), this change will not impact you at all.
Is that the "discard" mount option?
I added this to one of the partitions residing on my 860 Evo,
reverted the patch, and it still seems to work just fine.
$ mount | grep sdb
/dev/sdb1 on /mnt/vbox type ext4 (rw,nosuid,nodev,noatime,discard)
Is there another place where discard has to be enabled?
Or is there a way to check that discard is effectively enabled?
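I suppose something like this would at least show whether the kernel
exposes discard for the device (sdb being the 860 Evo here), though
I'm not sure it proves that discards are actually being issued:

$ lsblk --discard /dev/sdb
$ cat /sys/block/sdb/queue/discard_max_bytes

Non-zero DISC-GRAN/DISC-MAX values from lsblk (or a non-zero
discard_max_bytes) would mean the device advertises discard support,
and 'fstrim -v /mnt/vbox' should then report how much was trimmed.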
Not sure if relevant, but here are a couple of lines from the syslog:
ata4.00: 976773168 sectors, multi 1: LBA48 NCQ (depth 32), AA
[...]
ata4.00: Enabling discard_zeroes_data
Thanks!
>
> Also note that AFAIK none of the major distros enable immediate
> discard, relying instead on periodic fstrim runs from a cronjob,
> which again means this change will not impact users of those
> distros.
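(On a systemd-based distro - an assumption on my part - the periodic
fstrim is typically the fstrim.timer unit shipped with util-linux
rather than a cronjob; whether it is active can be checked with:)

$ systemctl status fstrim.timer
$ systemctl list-timers fstrim.timer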
>
> So chances are that your workload simply never triggered the issue,
> which is why everything has always worked fine for you.
>
> Regards,
>
> Hans
>