Message-ID: <d6a4511fd85e6e47c5aef22e335bb253@codeaurora.org>
Date: Mon, 15 Mar 2021 15:23:28 +0800
From: Can Guo <cang@...eaurora.org>
To: daejun7.park@...sung.com
Cc: Greg KH <gregkh@...uxfoundation.org>, avri.altman@....com,
jejb@...ux.ibm.com, martin.petersen@...cle.com,
asutoshd@...eaurora.org, stanley.chu@...iatek.com,
bvanassche@....org, huobean@...il.com,
ALIM AKHTAR <alim.akhtar@...sung.com>,
linux-scsi@...r.kernel.org, linux-kernel@...r.kernel.org,
JinHwan Park <jh.i.park@...sung.com>,
Javier Gonzalez <javier.gonz@...sung.com>,
SEUNGUK SHIN <seunguk.shin@...sung.com>,
Sung-Jun Park <sungjun07.park@...sung.com>,
Jinyoung CHOI <j-young.choi@...sung.com>,
BoRam Shin <boram.shin@...sung.com>
Subject: Re: [PATCH v29 4/4] scsi: ufs: Add HPB 2.0 support
On 2021-03-15 15:07, Daejun Park wrote:
>>> This patch adds support for HPB 2.0.
>>>
>>> HPB 2.0 supports reads of varying sizes, from 4KB to 512KB.
>>> A read of up to 32KB is issued as a single HPB read command.
>>> A read of 36KB ~ 512KB is issued as a combination of a write buffer
>>> command and an HPB read command, so that more PPNs can be delivered.
>>> The write buffer command may not be issued immediately due to busy
>>> tags. To use HPB read more aggressively, the driver can requeue the
>>> write buffer command. The requeue threshold is implemented as a
>>> timeout and can be modified via the requeue_timeout_ms entry in sysfs.
>>>
>>> Signed-off-by: Daejun Park <daejun7.park@...sung.com>
>>> ---
>>> +static struct attribute *hpb_dev_param_attrs[] = {
>>> + &dev_attr_requeue_timeout_ms.attr,
>>> + NULL,
>>> +};
>>> +
>>> +struct attribute_group ufs_sysfs_hpb_param_group = {
>>> + .name = "hpb_param_sysfs",
>>> + .attrs = hpb_dev_param_attrs,
>>> +};
>>> +
>>> +static int ufshpb_pre_req_mempool_init(struct ufshpb_lu *hpb)
>>> +{
>>> + struct ufshpb_req *pre_req = NULL;
>>> + int qd = hpb->sdev_ufs_lu->queue_depth / 2;
>>> + int i, j;
>>> +
>>> + INIT_LIST_HEAD(&hpb->lh_pre_req_free);
>>> +
>>> + hpb->pre_req = kcalloc(qd, sizeof(struct ufshpb_req), GFP_KERNEL);
>>> + hpb->throttle_pre_req = qd;
>>> + hpb->num_inflight_pre_req = 0;
>>> +
>>> + if (!hpb->pre_req)
>>> + goto release_mem;
>>> +
>>> + for (i = 0; i < qd; i++) {
>>> + pre_req = hpb->pre_req + i;
>>> + INIT_LIST_HEAD(&pre_req->list_req);
>>> + pre_req->req = NULL;
>>> + pre_req->bio = NULL;
>>
>> Why not prepare the bio the same way as wb.m_page? Won't that save
>> more time in ufshpb_issue_pre_req()?
>
> It is a pre_req pool. So even if we prepare a bio at this time, it
> would only be for the first pre_req.
I meant removing the bio_alloc() in ufshpb_issue_pre_req() and the
bio_put() in ufshpb_pre_req_compl_fn(). A bio, in the pre_req case, just
holds a page. So, prepare 16 bios here (if the queue depth is 32), use
them along with wb.m_page, and call bio_reset() in
ufshpb_pre_req_compl_fn(). Would that work?
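
Something along these lines (just an untested sketch to illustrate the
idea; it reuses the pre_req->bio and wb.m_page fields from your patch
and assumes the current bio_alloc()/bio_reset() signatures; error
unwinding of earlier loop iterations is omitted for brevity):

	/* in ufshpb_pre_req_mempool_init(), inside the per-queue-depth loop */
	pre_req->bio = bio_alloc(GFP_KERNEL, 1);	/* one bvec, it only ever carries one page */
	if (!pre_req->bio)
		goto release_mem;

	pre_req->wb.m_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!pre_req->wb.m_page) {
		bio_put(pre_req->bio);
		goto release_mem;
	}

	/* in ufshpb_issue_pre_req(), instead of bio_alloc() */
	bio = pre_req->bio;

	/*
	 * in ufshpb_pre_req_compl_fn(), instead of bio_put(): make the
	 * bio reusable for the next pre_req
	 */
	bio_reset(pre_req->bio);

That keeps the per-IO cost of the pre_req path down to adding wb.m_page
to the bio and mapping the request.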
Thanks,
Can Guo.
> After it is used, the bio has to be prepared again at the issue phase.
>
> Thanks,
> Daejun
>
>>
>> Thanks,
>> Can Guo.
>>
>>> +
>>> + pre_req->wb.m_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>>> + if (!pre_req->wb.m_page) {
>>> + for (j = 0; j < i; j++)
>>> + __free_page(hpb->pre_req[j].wb.m_page);
>>> +
>>> + goto release_mem;
>>> + }
>>> + list_add_tail(&pre_req->list_req, &hpb->lh_pre_req_free);
>>> + }
>>> +
>>> + return 0;
>>> +release_mem:
>>> + kfree(hpb->pre_req);
>>> + return -ENOMEM;
>>> +}
>>> +
>>
>>
>>