Message-ID: <Yv2euLFLjl8bEaeI@xsang-OptiPlex-9020>
Date: Thu, 18 Aug 2022 10:06:48 +0800
From: Oliver Sang <oliver.sang@...el.com>
To: John Garry <john.garry@...wei.com>
CC: Damien Le Moal <damien.lemoal@...nsource.wdc.com>,
Christoph Hellwig <hch@....de>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
<linux-ide@...r.kernel.org>, <lkp@...ts.01.org>, <lkp@...el.com>,
<ying.huang@...el.com>, <feng.tang@...el.com>,
<zhengjun.xing@...ux.intel.com>, <fengwei.yin@...el.com>
Subject: Re: [ata] 0568e61225: stress-ng.copy-file.ops_per_sec -15.0% regression
hi John,
On Wed, Aug 17, 2022 at 03:04:06PM +0100, John Garry wrote:
> On 17/08/2022 14:51, Oliver Sang wrote:
>
> Hi Oliver,
>
> > > v5.19 + 0568e61225 : 512/512
> > > v5.19 + 0568e61225 + 4cbfca5f77 : 512/512
> > > v5.19: 1280/32767
> > >
> > > They are what makes sense to me, at least.
> > >
> > > Oliver, can you confirm this? Thanks!
> > I confirm below two:
> > v5.19 + 0568e61225 : 512/512
> > v5.19: 1280/32767 (as last already reported)
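(side note on the numbers: I take the two values to be max_sectors_kb /
max_hw_sectors_kb from /sys/block/<disk>/queue for the disk under test.
my rough understanding of the capping that 0568e61225 adds in
ata_scsi_dev_config() -- a sketch reconstructed from the commit subject,
not the exact diff -- is something like:

	/* libata-scsi.c, ata_scsi_dev_config(): approximate hunk */
	dev->max_sectors = min(dev->max_sectors, sdev->host->max_sectors);

which would cap both values at the 512 KB SCSI host default of
SCSI_DEFAULT_MAX_SECTORS = 1024 sectors.)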
>
> ack
>
> >
> > but below failed to build:
> > v5.19 + 0568e61225 + 4cbfca5f77
> >
> > build_errors:
> > - "drivers/scsi/scsi_transport_sas.c:242:33: error: implicit declaration of function 'dma_opt_mapping_size'; did you mean 'dma_max_mapping_size'? [-Werror=implicit-function-declaration]"
> > - "drivers/scsi/scsi_transport_sas.c:241:24: error: 'struct Scsi_Host' has no member named 'opt_sectors'; did you mean 'max_sectors'?"
> >
> > not sure if I understand this correctly -
> > for this, I just cherry-picked 0568e61225 on top of v5.19,
> > then cherry-picked 4cbfca5f77 on top of that.
> > so my branch looks like:
> >
> > a11d8b97c3ecb8 v5.19 + 0568e61225 + 4cbfca5f77
> > 1b59440cf71f99 v5.19 + 0568e61225
> > 3d7cb6b04c3f31 (tag: v5.19,
> >
> > did I do the right thing?
>
> Sorry, but I was not really interested in 4cbfca5f77, and I see where that
> build error is coming from, but don't be concerned with it. However, for the
> avoidance of doubt, if you have results for vanilla v6.0-rc1 then that would
> be appreciated.
for v6.0-rc1, it's still 512/512
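
on the earlier build failure: my understanding is that 4cbfca5f77 alone
is not enough, since it uses dma_opt_mapping_size() and the new
Scsi_Host::opt_sectors field, which come from earlier patches in your
series. a rough reconstruction (not the exact diff) of what the failing
lines in drivers/scsi/scsi_transport_sas.c look like with the full
series applied:

	/* sas_host_setup(): approximate hunk; needs dma_opt_mapping_size()
	 * and shost->opt_sectors from the rest of the series
	 */
	struct device *dma_dev = shost->dma_dev;

	shost->opt_sectors = min_t(unsigned int, shost->max_sectors,
				   dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);

so cherry-picking it on top of v5.19 + 0568e61225 only cannot build.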
>
> I will also send a separate patch for testing if you don't mind.
sure! we are very glad that we could help.
>
> thanks,
> John
>