Message-ID: <c702f06e-b7da-92be-3c4f-5dd405600235@opensource.wdc.com>
Date:   Thu, 16 Jun 2022 11:47:03 +0900
From:   Damien Le Moal <damien.lemoal@...nsource.wdc.com>
To:     John Garry <john.garry@...wei.com>,
        Bart Van Assche <bvanassche@....org>, axboe@...nel.dk,
        jejb@...ux.ibm.com, martin.petersen@...cle.com, brking@...ibm.com,
        hare@...e.de, hch@....de
Cc:     linux-block@...r.kernel.org, linux-ide@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-scsi@...r.kernel.org,
        chenxiang66@...ilicon.com
Subject: Re: [PATCH RFC v2 03/18] scsi: core: Implement reserved command
 handling

On 6/15/22 16:35, John Garry wrote:
> On 15/06/2022 00:43, Damien Le Moal wrote:
>> On 6/15/22 03:20, Bart Van Assche wrote:
>>> On 6/13/22 00:01, Damien Le Moal wrote:
>>>> On 6/9/22 19:29, John Garry wrote:
>>>>> +    /*
>>>>> +     * This determines how many commands the HBA will set aside
>>>>> +     * for internal commands. This number will be added to
>>>>> +     * @can_queue to calcumate the maximum number of simultaneous
>>>>
>>>> s/calcumate/calculate
>>>>
>>>> But this is weird. For SATA, can_queue is 32. Even with reserved
>>>> commands, that number needs to stay the same: we cannot have more than
>>>> 32 tags. I think keeping can_queue as the max queue depth, with at most
>>>> nr_reserved_cmds of those tags reserved, is better.
>>>>
>>>>> +     * commands sent to the host.
>>>>> +     */
>>>>> +    int nr_reserved_cmds;
>>>
>>> +1 for Damien's request. I also prefer to keep can_queue as the maximum
>>> queue depth, whether or not nr_reserved_cmds has been set.
>>
>> For non-SATA drives, I still think that is a good idea. However, for
>> SATA, we always have the internal tag command, which is special. With
>> John's change, it would have to be reserved, but that means we are down
>> to a max QD of 31,
> 
> My intention is to keep the regular tag depth at 32 for SATA. We add an
> extra tag as a reserved tag. Indeed, this is called a 'tag', but it's
> really just a placeholder for what will be the ATA_TAG_INTERNAL request.
> 
> As for how we set scsi_host.can_queue: in this series we set .can_queue
> to the maximum number of regular tags, and the handling is as follows:
> 
> scsi_mq_setup_tags():
> tag_set->queue_depth = shost->can_queue + shost->nr_reserved_cmds
> tag_set->reserved_tags = shost->nr_reserved_cmds
> 
> So we honour the rule that blk_mq_tag_set.queue_depth is the total tag 
> depth, including reserved.
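
For reference, a minimal sketch of that accounting, using the SATA example
above (can_queue = 32) and assuming a single reserved internal command
(nr_reserved_cmds = 1); the concrete values are illustrative only:

	shost->can_queue        = 32;   /* regular commands */
	shost->nr_reserved_cmds = 1;    /* reserved/internal commands */

	/* what scsi_mq_setup_tags() then programs, per the scheme above */
	tag_set->queue_depth   = shost->can_queue + shost->nr_reserved_cmds;  /* 33 */
	tag_set->reserved_tags = shost->nr_reserved_cmds;                     /* 1  */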
> 
> Incidentally, I think Christoph prefers to keep .can_queue as the total
> max tags, including reserved:
> https://lore.kernel.org/linux-scsi/337339b7-6f4a-a25c-f11c-7f701b42d6a8@suse.de/ 
> 
> 
>> so going backward several years... That internal tag for ATA does not
>> need to be reserved, since internal commands are only ever issued when
>> the drive is idle and no other NCQ commands are ongoing.
> 
> So do you mean that ATA_TAG_INTERNAL qc is used for other commands apart 
> from internal commands?

No. It is used only for internal commands. What I meant to say is that
currently, internal commands are issued only on device scan, device
revalidation and error handling. All of these phases are done with the
device under EH, with the issuing path stopped and all commands
completed, so no regular commands can be issued; only internal, non-NCQ
ones using ATA_TAG_INTERNAL. So strictly speaking, we should not need to
reserve that internal tag at all.
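
For context, the relevant libata definitions look roughly like the below in
recent kernels (values quoted from memory, shown only to illustrate that the
internal tag already sits outside the 0..31 NCQ tag range):

	/* include/linux/libata.h, abridged */
	enum {
		ATA_MAX_QUEUE    = 32,              /* NCQ tags 0..31 */
		ATA_TAG_INTERNAL = ATA_MAX_QUEUE,   /* extra tag for internal, non-NCQ commands */
	};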

> 
>>
>> So the solution to all this is likely a little more complicated if we
>> want to keep the ATA max QD at 32.
>>
> 
> thanks,
> John


-- 
Damien Le Moal
Western Digital Research
