Message-ID: <20211014094259.00004ac5.zbestahu@163.com>
Date: Thu, 14 Oct 2021 09:42:59 +0800
From: Yue Hu <zbestahu@....com>
To: Gao Xiang <hsiangkao@...ux.alibaba.com>
Cc: Yue Hu <zbestahu@...il.com>, xiang@...nel.org, chao@...nel.org,
linux-erofs@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
huyue2@...ong.com, zhangwen@...ong.com
Subject: Re: [PATCH] erofs: fix the per-CPU buffer decompression for small
output size
On Thu, 14 Oct 2021 00:03:02 +0800
Gao Xiang <hsiangkao@...ux.alibaba.com> wrote:
> On Wed, Oct 13, 2021 at 09:10:05PM +0800, Yue Hu wrote:
> > Hi Xiang,
> >
> > On Wed, 13 Oct 2021 19:51:55 +0800
> > Gao Xiang <hsiangkao@...ux.alibaba.com> wrote:
> >
> > > Hi Yue,
> > >
> > > On Wed, Oct 13, 2021 at 05:29:05PM +0800, Yue Hu wrote:
> > > > From: Yue Hu <huyue2@...ong.com>
> > > >
> > > > Note that z_erofs_lz4_decompress() returns a positive value if
> > > > decompression succeeds. However, copy_from_pcpubuf() is then
> > > > skipped because of the !ret check. Let's fix it.
> > > >
> > > > Signed-off-by: Yue Hu <huyue2@...ong.com>
> > >
> > > Thanks for catching this. This is a valid issue, but it has no real
> > > impact on current kernels, since such pclusters will in practice be
> > > !inplace_io and trigger the "if (nrpages_out == 1 && !rq->inplace_io) {"
> > > path above with the upstream strategies.
> > >
> > > Our customized lz4 implementation returns 0 on success instead, so
> > > our previous products are not affected either.
> >
> > Yes, I just found the issue while trying to implement a new feature for
> > tail-packing inline compressed data. No problem in my current version.
>
> Yeah, please help update the return value of z_erofs_lz4_decompress()
> and get rid of the unneeded fast path.
OK, will update in the next version.
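
For reference, here is a standalone userspace sketch of why returning 0 on
success makes the existing !ret check do the right thing. This is just a
model of the return-value convention, not the real fs/erofs code;
decompress_old()/decompress_new() are made-up names for illustration only:

#include <stdio.h>
#include <string.h>

/* current convention: return the decompressed size (> 0) on success */
static int decompress_old(const char *in, char *out, int outputsize)
{
	memcpy(out, in, outputsize);	/* pretend decompression */
	return outputsize;
}

/* proposed convention: return 0 on success, negative errno on failure */
static int decompress_new(const char *in, char *out, int outputsize)
{
	memcpy(out, in, outputsize);	/* pretend decompression */
	return 0;
}

int main(void)
{
	char src[] = "data", dst[8], final[8];
	int ret;

	/* caller pattern similar to the per-CPU buffer path */
	memset(final, 0, sizeof(final));
	ret = decompress_old(src, dst, sizeof(src));
	if (!ret)			/* never true on success: copy skipped */
		memcpy(final, dst, sizeof(src));
	printf("old: ret=%d final=\"%s\"\n", ret, final);

	memset(final, 0, sizeof(final));
	ret = decompress_new(src, dst, sizeof(src));
	if (!ret)			/* 0 on success: copy happens */
		memcpy(final, dst, sizeof(src));
	printf("new: ret=%d final=\"%s\"\n", ret, final);
	return 0;
}

With the old convention the copy back from the per-CPU buffer is never
reached on success; with the proposed one it is, and !ret keeps working.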
Thanks.
>
> Thanks,
> Gao Xiang
>
> >
> > Thanks.
> >
> > >
> > > For such cases, how about updating z_erofs_lz4_decompress() to return
> > > 0 on success rather than outputsize? I'll return 0 on success for
> > > LZMA as well.
> > >
> > > Thanks,
> > > Gao Xiang
> >