Message-ID: <20210315070728epcms2p87136c86803afa85a441ead524130245c@epcms2p8>
Date: Mon, 15 Mar 2021 16:07:28 +0900
From: Daejun Park <daejun7.park@...sung.com>
To: Can Guo <cang@...eaurora.org>,
Daejun Park <daejun7.park@...sung.com>
CC: Greg KH <gregkh@...uxfoundation.org>,
"avri.altman@....com" <avri.altman@....com>,
"jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"asutoshd@...eaurora.org" <asutoshd@...eaurora.org>,
"stanley.chu@...iatek.com" <stanley.chu@...iatek.com>,
"bvanassche@....org" <bvanassche@....org>,
"huobean@...il.com" <huobean@...il.com>,
ALIM AKHTAR <alim.akhtar@...sung.com>,
"linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
JinHwan Park <jh.i.park@...sung.com>,
Javier Gonzalez <javier.gonz@...sung.com>,
SEUNGUK SHIN <seunguk.shin@...sung.com>,
Sung-Jun Park <sungjun07.park@...sung.com>,
Jinyoung CHOI <j-young.choi@...sung.com>,
BoRam Shin <boram.shin@...sung.com>
Subject: RE: Re: [PATCH v29 4/4] scsi: ufs: Add HPB 2.0 support
>> This patch adds support for HPB 2.0.
>>
>> HPB 2.0 supports reads of varying sizes, from 4KB to 512KB.
>> Reads of up to 32KB are served as a single HPB READ command.
>> Reads of 36KB to 512KB are served by a WRITE BUFFER command followed
>> by an HPB READ command, so that more PPNs can be delivered.
>> The WRITE BUFFER command may not be issued immediately when tags are
>> busy.
>> To use HPB reads more aggressively, the driver can requeue the WRITE
>> BUFFER command. The requeue threshold is implemented as a timeout and
>> can be modified via the requeue_timeout_ms entry in sysfs.
>>
>> Signed-off-by: Daejun Park <daejun7.park@...sung.com>
>> ---
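[Editor's note] The size-based dispatch described in the commit message can be sketched as follows. This is a hypothetical, simplified model for illustration only: the enum, function, and macro names below are invented and are not symbols from the actual driver.

```c
/* Illustrative model of the HPB 2.0 read dispatch policy described
 * above: reads up to 32KB go out as a single HPB READ; reads from
 * 36KB to 512KB use a WRITE BUFFER command to deliver extra PPNs,
 * followed by an HPB READ; anything larger falls back to a normal
 * SCSI READ. All identifiers here are hypothetical. */
enum hpb_read_mode {
	HPB_NORMAL_READ,	/* regular SCSI READ, no HPB */
	HPB_SINGLE_READ,	/* one HPB READ command */
	HPB_PRE_REQ_READ,	/* WRITE BUFFER + HPB READ */
};

#define HPB_SINGLE_MAX_LEN	(32 * 1024)	/* <= 32KB */
#define HPB_PRE_REQ_MAX_LEN	(512 * 1024)	/* <= 512KB */

static enum hpb_read_mode hpb_classify_read(unsigned int len)
{
	if (len <= HPB_SINGLE_MAX_LEN)
		return HPB_SINGLE_READ;
	if (len <= HPB_PRE_REQ_MAX_LEN)
		return HPB_PRE_REQ_READ;
	return HPB_NORMAL_READ;
}
```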
>> +static struct attribute *hpb_dev_param_attrs[] = {
>> + &dev_attr_requeue_timeout_ms.attr,
>> + NULL,
>> +};
>> +
>> +struct attribute_group ufs_sysfs_hpb_param_group = {
>> + .name = "hpb_param_sysfs",
>> + .attrs = hpb_dev_param_attrs,
>> +};
>> +
>> +static int ufshpb_pre_req_mempool_init(struct ufshpb_lu *hpb)
>> +{
>> + struct ufshpb_req *pre_req = NULL;
>> + int qd = hpb->sdev_ufs_lu->queue_depth / 2;
>> + int i, j;
>> +
>> + INIT_LIST_HEAD(&hpb->lh_pre_req_free);
>> +
>> + hpb->pre_req = kcalloc(qd, sizeof(struct ufshpb_req), GFP_KERNEL);
>> + hpb->throttle_pre_req = qd;
>> + hpb->num_inflight_pre_req = 0;
>> +
>> + if (!hpb->pre_req)
>> + goto release_mem;
>> +
>> + for (i = 0; i < qd; i++) {
>> + pre_req = hpb->pre_req + i;
>> + INIT_LIST_HEAD(&pre_req->list_req);
>> + pre_req->req = NULL;
>> + pre_req->bio = NULL;
>
>Why don't prepare bio as same as wb.m_page? Won't that save more time
>for ufshpb_issue_pre_req()?
This is a pre_req pool, so even if we prepared the bio here, it would
only cover the first use of each pre_req. After a pre_req has been used
and returned to the pool, its bio must be allocated again at issue time
anyway.

Thanks,
Daejun
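[Editor's note] The lifecycle Daejun describes can be modeled in a simplified, hypothetical form: the page (wb.m_page) is the long-lived resource allocated once at pool init, while the bio is attached only for the duration of a single issue/completion cycle. The struct and function names below are illustrative stand-ins, not driver code.

```c
#include <stdlib.h>

/* Hypothetical model of a pre_req pool slot: the page lives as long as
 * the pool, but the bio exists only between issue and completion, which
 * is why pre-allocating a bio at init time would help only the first
 * use of each slot. */
struct model_pre_req {
	void *m_page;	/* allocated once at pool init */
	void *bio;	/* allocated per issue, freed on completion */
};

static void model_issue(struct model_pre_req *p)
{
	/* stands in for the bio allocation done in the issue phase */
	p->bio = malloc(64);
}

static void model_complete(struct model_pre_req *p)
{
	/* bio is consumed; the slot goes back on the free list */
	free(p->bio);
	p->bio = NULL;
}
```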
>
>Thanks,
>Can Guo.
>
>> +
>> + pre_req->wb.m_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>> + if (!pre_req->wb.m_page) {
>> + for (j = 0; j < i; j++)
>> + __free_page(hpb->pre_req[j].wb.m_page);
>> +
>> + goto release_mem;
>> + }
>> + list_add_tail(&pre_req->list_req, &hpb->lh_pre_req_free);
>> + }
>> +
>> + return 0;
>> +release_mem:
>> + kfree(hpb->pre_req);
>> + return -ENOMEM;
>> +}
>> +