Message-ID: <20210317014253epcms2p1f45db6a281645282e1540e0070999d73@epcms2p1>
Date: Wed, 17 Mar 2021 10:42:53 +0900
From: Daejun Park <daejun7.park@...sung.com>
To: Can Guo <cang@...eaurora.org>,
Daejun Park <daejun7.park@...sung.com>
CC: Greg KH <gregkh@...uxfoundation.org>,
"avri.altman@....com" <avri.altman@....com>,
"jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"asutoshd@...eaurora.org" <asutoshd@...eaurora.org>,
"stanley.chu@...iatek.com" <stanley.chu@...iatek.com>,
"bvanassche@....org" <bvanassche@....org>,
"huobean@...il.com" <huobean@...il.com>,
ALIM AKHTAR <alim.akhtar@...sung.com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
JinHwan Park <jh.i.park@...sung.com>,
Javier Gonzalez <javier.gonz@...sung.com>,
SEUNGUK SHIN <seunguk.shin@...sung.com>,
Sung-Jun Park <sungjun07.park@...sung.com>,
Jinyoung CHOI <j-young.choi@...sung.com>,
BoRam Shin <boram.shin@...sung.com>
Subject: RE: Re: [PATCH v29 4/4] scsi: ufs: Add HPB 2.0 support
>On 2021-03-15 15:23, Can Guo wrote:
>> On 2021-03-15 15:07, Daejun Park wrote:
>>>>> This patch adds support for HPB 2.0.
>>>>>
>>>>> HPB 2.0 supports reads of varying sizes, from 4KB to 512KB.
>>>>> Reads of up to 32KB are issued as a single HPB read command.
>>>>> Reads of 36KB to 512KB are issued as a combination of a write
>>>>> buffer command and an HPB read command, so that more PPNs can be
>>>>> delivered. The write buffer command may not be issued immediately
>>>>> due to busy tags. To use HPB reads more aggressively, the driver
>>>>> can requeue the write buffer command. The requeue threshold is
>>>>> implemented as a timeout and can be modified via the
>>>>> requeue_timeout_ms entry in sysfs.
>>>>>
>>>>> Signed-off-by: Daejun Park <daejun7.park@...sung.com>
>>>>> ---
>>>>> +static struct attribute *hpb_dev_param_attrs[] = {
>>>>> + &dev_attr_requeue_timeout_ms.attr,
>>>>> + NULL,
>>>>> +};
>>>>> +
>>>>> +struct attribute_group ufs_sysfs_hpb_param_group = {
>>>>> + .name = "hpb_param_sysfs",
>>>>> + .attrs = hpb_dev_param_attrs,
>>>>> +};
>>>>> +
>>>>> +static int ufshpb_pre_req_mempool_init(struct ufshpb_lu *hpb)
>>>>> +{
>>>>> + struct ufshpb_req *pre_req = NULL;
>>>>> + int qd = hpb->sdev_ufs_lu->queue_depth / 2;
>>>>> + int i, j;
>>>>> +
>>>>> + INIT_LIST_HEAD(&hpb->lh_pre_req_free);
>>>>> +
>>>>> + hpb->pre_req = kcalloc(qd, sizeof(struct ufshpb_req), GFP_KERNEL);
>>>>> + hpb->throttle_pre_req = qd;
>>>>> + hpb->num_inflight_pre_req = 0;
>>>>> +
>>>>> + if (!hpb->pre_req)
>>>>> + goto release_mem;
>>>>> +
>>>>> + for (i = 0; i < qd; i++) {
>>>>> + pre_req = hpb->pre_req + i;
>>>>> + INIT_LIST_HEAD(&pre_req->list_req);
>>>>> + pre_req->req = NULL;
>>>>> + pre_req->bio = NULL;
>>>>
>>>> Why not prepare the bio here, the same way as wb.m_page? Wouldn't
>>>> that save more time in ufshpb_issue_pre_req()?
>>>
>>> It is a pre_req pool. So even if we prepare a bio at this point, it
>>> would only be for the first pre_req.
>>
>> I meant removing the bio_alloc() in ufshpb_issue_pre_req() and the
>> bio_put() in ufshpb_pre_req_compl_fn(). In the pre_req case, a bio
>> just holds one page. So, prepare 16 bios here (if the queue depth is
>> 32), use them along with wb.m_page, and call bio_reset() in
>> ufshpb_pre_req_compl_fn(). Would that work?
>>
>
>If it works, you can even call bio_add_pc_page() here. Later, in
>ufshpb_execute_pre_req(), you don't need to call
>ufshpb_pre_req_add_bio_page(); just call ufshpb_prep_entry() once
>instead - it saves many repeated steps for a pre_req, and you don't
>even need to call bio_reset() in this case, since nothing about a bio
>changes after it is bound to a specific page...
Hi, Can Guo
I tried the idea that you suggested, but it does not work properly.
This optimization can be revisited later as an enhancement.
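
For reference, this is roughly the shape of the preallocation I
experimented with - an untested sketch against the structures in this
patch, not the posted code; taking the request_queue from sdev_ufs_lu
and adding a single PAGE_SIZE segment are my assumptions:

static int ufshpb_pre_req_mempool_init(struct ufshpb_lu *hpb)
{
	struct request_queue *q = hpb->sdev_ufs_lu->request_queue;
	struct ufshpb_req *pre_req;
	int qd = hpb->sdev_ufs_lu->queue_depth / 2;
	int i;

	INIT_LIST_HEAD(&hpb->lh_pre_req_free);

	hpb->pre_req = kcalloc(qd, sizeof(struct ufshpb_req), GFP_KERNEL);
	if (!hpb->pre_req)
		return -ENOMEM;

	hpb->throttle_pre_req = qd;
	hpb->num_inflight_pre_req = 0;

	for (i = 0; i < qd; i++) {
		pre_req = hpb->pre_req + i;
		INIT_LIST_HEAD(&pre_req->list_req);
		pre_req->req = NULL;

		/* preallocate one bio and one data page per pre_req */
		pre_req->bio = bio_alloc(GFP_KERNEL, 1);
		if (!pre_req->bio)
			goto release_mem;

		pre_req->wb.m_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
		if (!pre_req->wb.m_page) {
			bio_put(pre_req->bio);
			goto release_mem;
		}

		/* bind the page once so issue time only fills the entries */
		bio_add_pc_page(q, pre_req->bio, pre_req->wb.m_page,
				PAGE_SIZE, 0);

		list_add_tail(&pre_req->list_req, &hpb->lh_pre_req_free);
	}

	return 0;

release_mem:
	while (--i >= 0) {
		bio_put(hpb->pre_req[i].bio);
		__free_page(hpb->pre_req[i].wb.m_page);
	}
	kfree(hpb->pre_req);
	return -ENOMEM;
}

With this, ufshpb_issue_pre_req() only takes a pre_req from the free
list and ufshpb_pre_req_compl_fn() returns it, but it did not work
properly in my tests, so I would rather keep the per-issue
bio_alloc()/bio_put() for now.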
Thanks
Daejun
>Can Guo.
>
>> Thanks,
>> Can Guo.
>>
>>> After it is used, the bio has to be prepared again at the issue phase.
>>>
>>> Thanks,
>>> Daejun
>>>
>>>>
>>>> Thanks,
>>>> Can Guo.
>>>>
>>>>> +
>>>>> + pre_req->wb.m_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>>>>> + if (!pre_req->wb.m_page) {
>>>>> + for (j = 0; j < i; j++)
>>>>> + __free_page(hpb->pre_req[j].wb.m_page);
>>>>> +
>>>>> + goto release_mem;
>>>>> + }
>>>>> + list_add_tail(&pre_req->list_req, &hpb->lh_pre_req_free);
>>>>> + }
>>>>> +
>>>>> + return 0;
>>>>> +release_mem:
>>>>> + kfree(hpb->pre_req);
>>>>> + return -ENOMEM;
>>>>> +}
>>>>> +
>>>>
>>>>
>>>>
>
>
>