[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <SN6PR04MB46409E7CE538F158387615A6FC6F0@SN6PR04MB4640.namprd04.prod.outlook.com>
Date: Tue, 30 Jun 2020 06:39:27 +0000
From: Avri Altman <Avri.Altman@....com>
To: "daejun7.park@...sung.com" <daejun7.park@...sung.com>,
Bean Huo <huobean@...il.com>,
"jejb@...ux.ibm.com" <jejb@...ux.ibm.com>,
"martin.petersen@...cle.com" <martin.petersen@...cle.com>,
"asutoshd@...eaurora.org" <asutoshd@...eaurora.org>,
"stanley.chu@...iatek.com" <stanley.chu@...iatek.com>,
"cang@...eaurora.org" <cang@...eaurora.org>,
"bvanassche@....org" <bvanassche@....org>,
"tomas.winkler@...el.com" <tomas.winkler@...el.com>,
ALIM AKHTAR <alim.akhtar@...sung.com>
CC: "linux-scsi@...r.kernel.org" <linux-scsi@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Sang-yoon Oh <sangyoon.oh@...sung.com>,
Sung-Jun Park <sungjun07.park@...sung.com>,
yongmyung lee <ymhungry.lee@...sung.com>,
Jinyoung CHOI <j-young.choi@...sung.com>,
Adel Choi <adel.choi@...sung.com>,
BoRam Shin <boram.shin@...sung.com>
Subject: RE: [RFC PATCH v3 0/5] scsi: ufs: Add Host Performance Booster
Support
Hi,
>
> Hi Bean,
> > On Mon, 2020-06-29 at 15:15 +0900, Daejun Park wrote:
> > > > Seems you intentionally ignored to give you comments on my
> > > > suggestion.
> > > > let me provide the reason.
> > >
> > > Sorry! I replied to your comment (
> > > https://protect2.fireeye.com/url?k=be575021-e3854728-be56db6e-
> 0cc47a31cdf8-
> 6c7d0e1e42762b92&q=1&u=https%3A%2F%2Flkml.org%2Flkml%2F2020%2F6%
> 2F15%2F1492),
> > > but you didn't reply on that. I thought you agreed because you didn't
> > > send
> > > any more comments.
> > >
> > >
> > > > Before submitting your next version patch, please check your L2P
> > > > mapping HPB reqeust submission logical algorithem. I have did
> > >
> > > We are also reviewing the code that you submitted before.
> > > It seems to be a performance improvement as it sends a map request
> > > directly.
> > >
> > > > performance comparison testing on 4KB, there are about 13%
> > > > performance
> > > > drop. Also the hit count is lower. I don't know if this is related
> > > > to
> > >
> > > It is interesting that there is actually a performance improvement.
> > > Could you share the test environment, please? However, I think
> > > stability is
> > > important to HPB driver. We have tested our method with the real
> > > products and
> > > the HPB 1.0 driver is based on that.
> >
> > I just run fio benchmark tool with --rw=randread, --bs=4kb, --
> > size=8G/10G/64G/100G. and see what performance diff with the direct
> > submission approach.
>
> Thanks!
>
> > > After this patch, your approach can be done as an incremental patch?
> > > I would
> > > like to test the patch that you submitted and verify it.
> > >
> > > > your current work queue scheduling, since you didn't add the timer
> > > > for
> > > > each HPB request.
> > >
> >
> > Taking into consideration of the HPB 2.0, can we submit the HPB write
> > request to the SCSI layer? if not, it will be a direct submission way.
> > why not directly use direct way? or maybe you have a more advisable
> > approach to work around this. would you please share with us.
> > appreciate.
>
> I am considering a direct submission way for the next version.
> We will implement the write buffer command of HPB 2.0, after patching HPB
> 1.0.
>
> As for the direct submission of HPB releated command including HPB write
> buffer, I think we'd better discuss the right approach in depth before
> moving on to the next step.
I vote to stay with the current implementation because:
1) Bean is probably right about 2.0, but it's out of scope for now -
there is a long way to go before we'll need to worry about it
2) For now, we should focus on the functional flows.
Performance issues, should such issues indeed exists, can be dealt with later. And,
3) The current code base is running in production for more than 3 years now.
I am not so eager to dump a robust, well debugged code unless it absolutely necessary.
Thanks,
Avri
Powered by blists - more mailing lists