Message-ID: <5a2047e3-5e71-141a-ec3a-2e22749d3c49@huawei.com>
Date: Mon, 13 Jun 2022 11:05:33 +0100
From: John Garry <john.garry@...wei.com>
To: Damien Le Moal <damien.lemoal@...nsource.wdc.com>,
<axboe@...nel.dk>, <jejb@...ux.ibm.com>,
<martin.petersen@...cle.com>, <brking@...ibm.com>, <hare@...e.de>,
<hch@....de>
CC: <linux-block@...r.kernel.org>, <linux-ide@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <linux-scsi@...r.kernel.org>,
<chenxiang66@...ilicon.com>
Subject: Re: [PATCH RFC v2 03/18] scsi: core: Implement reserved command
handling
On 13/06/2022 10:43, Damien Le Moal wrote:
>>> Currently, that is not possible to do cleanly as there are no guarantees
>>> we can get a free tag (there is a race between block layer tag allocation
>>> and libata internal tag counting). So a reserved tag for that would be
>>> nice. We would end up with 31 IO tags at most + 1 reserved tag for NCQ
>>> commands + ATA_TAG_INTERNAL for non-NCQ. That last one would be rendered
rather useless. But that also means that we kind of go back to the days
>>> when Linux showed ATA drives max QD of 31...
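Side note for the block layer end of this: a guaranteed tag would just be a
reserved allocation. Purely illustrative sketch below, assuming the tag set
is created with ->reserved_tags > 0 as this series proposes; the function
name is made up and none of this is code from the series:

#include <linux/blk-mq.h>

/*
 * Illustrative only: allocate a request from the reserved pool so an
 * internal (NCQ) command can never lose the race against normal I/O tag
 * allocation.  Assumes the tag set was set up with ->reserved_tags > 0.
 */
static struct request *example_alloc_internal_rq(struct request_queue *q)
{
	return blk_mq_alloc_request(q, REQ_OP_DRV_IN,
				    BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
}

(The request would of course be dropped again with blk_mq_free_request()
once the internal command has completed.)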
>> So must the ATA_TAG_INTERNAL qc always be available for non-NCQ actions
>> like EH, and is that why you cannot reuse it for this internal NCQ
>> (queueable) command?
> Currently, ATA_TAG_INTERNAL is always used for non-NCQ commands. Seeing a
> qc with that tag means it is *not* NCQ.
>
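For anyone following along: in current kernels ATA_TAG_INTERNAL sits outside
the 0-31 NCQ tag range, so the NCQ/non-NCQ distinction really is just a tag
comparison. Roughly what the ata_tag_internal() helper in
include/linux/libata.h does, paraphrased here rather than quoted verbatim:

#include <linux/libata.h>

/* Paraphrase of the libata helper, not a verbatim copy: a qc carrying
 * ATA_TAG_INTERNAL can never be an NCQ command, since NCQ tags are 0-31. */
static inline bool example_tag_is_internal(unsigned int tag)
{
	return tag == ATA_TAG_INTERNAL;
}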
> I am trying to see if I can reuse the tag from one of the commands that
> completed with that weird good status/sense data available. The problem
> though is that this all needs to be done *before* calling
> qc->complete_fn(), which will free the tag. So we end up with two qcs
> that have the same tag: the second one (for the read log command)
> temporarily uses the tag and goes through the same completion path
> while the original command has not yet fully completed. It is a real
> mess.
>
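To make that ordering concrete, a purely illustrative and heavily simplified
sketch (not code from the series) of where the READ LOG DMA EXT fetch would
have to slot in; ata_qc_from_tag() and ata_qc_complete() are the real libata
interfaces, the rest is hand-waving:

#include <linux/libata.h>

/* Illustrative only: ordering for a qc that completed with sense data
 * available ("done_tag" is its tag). */
static void example_sense_fetch_order(struct ata_port *ap,
				      unsigned int done_tag)
{
	struct ata_queued_cmd *qc = ata_qc_from_tag(ap, done_tag);

	if (!qc)
		return;

	/* 1. The device has finished qc, but qc->complete_fn() has not run
	 *    yet, so its block layer tag is still allocated.
	 *
	 * 2. The READ LOG DMA EXT qc would have to be built and issued here,
	 *    borrowing qc's tag - the point where two qcs briefly share one
	 *    tag and one completion path.
	 */

	/* 3. Only then is the original command completed; this is where
	 *    qc->complete_fn() runs and the tag is finally released. */
	ata_qc_complete(qc);
}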
Reusing tags seems really messy, but then reserving a tag for an NCQ
command seems wasteful.
Thanks,
John