Message-ID: <3330004f-acac-81b4-e382-a17221a0a128@huawei.com>
Date: Wed, 19 Jul 2023 22:23:39 +0800
From: Zhihao Cheng <chengzhihao1@...wei.com>
To: Ard Biesheuvel <ardb@...nel.org>, Eric Biggers <ebiggers@...nel.org>
CC: <linux-crypto@...r.kernel.org>, Herbert Xu <herbert@...dor.apana.org.au>,
Kees Cook <keescook@...omium.org>, Haren Myneni <haren@...ibm.com>, Nick
Terrell <terrelln@...com>, Minchan Kim <minchan@...nel.org>, Sergey
Senozhatsky <senozhatsky@...omium.org>, Jens Axboe <axboe@...nel.dk>,
Giovanni Cabiddu <giovanni.cabiddu@...el.com>, Richard Weinberger
<richard@....at>, David Ahern <dsahern@...nel.org>, Eric Dumazet
<edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni
<pabeni@...hat.com>, Steffen Klassert <steffen.klassert@...unet.com>,
<linux-kernel@...r.kernel.org>, <linux-block@...r.kernel.org>,
<qat-linux@...el.com>, <linuxppc-dev@...ts.ozlabs.org>,
<linux-mtd@...ts.infradead.org>, <netdev@...r.kernel.org>
Subject: Re: [RFC PATCH 05/21] ubifs: Pass worst-case buffer size to
compression routines
On 2023/7/19 16:33, Ard Biesheuvel wrote:
> On Wed, 19 Jul 2023 at 00:38, Eric Biggers <ebiggers@...nel.org> wrote:
>>
>> On Tue, Jul 18, 2023 at 02:58:31PM +0200, Ard Biesheuvel wrote:
>>> Currently, the ubifs code allocates a worst case buffer size to
>>> recompress a data node, but does not pass the size of that buffer to the
>>> compression code. This means that the compression code will never use
I think you mean that 'out_len', which describes the length of 'buf',
is passed into ubifs_decompress, which affects the result of the
decompressor (e.g. lz4 uses the length to calculate the buffer end
position). So we should pass the real length of 'buf'.
Reviewed-by: Zhihao Cheng <chengzhihao1@...wei.com>
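
Purely to illustrate that point, a minimal user-space sketch with
liblz4 (hypothetical, not the UBIFS code path): LZ4_decompress_safe()
takes the destination capacity as its last argument and uses it to
compute the end of the output buffer, so passing a value smaller than
the real size of 'buf' can make decompression fail even though the
buffer itself is big enough:

/* Hypothetical user-space illustration, not UBIFS code.
 * Build with: gcc demo.c -llz4
 */
#include <lz4.h>
#include <stdio.h>

int main(void)
{
	const char in[] = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
	char compressed[128];
	char buf[sizeof(in)];	/* worst-case sized output buffer */
	int clen, dlen;

	clen = LZ4_compress_default(in, compressed, sizeof(in),
				    sizeof(compressed));
	if (clen <= 0)
		return 1;

	/*
	 * The last argument is the real capacity of 'buf'; lz4 uses it
	 * to compute the end of the output buffer.  Passing a smaller
	 * value than the buffer's actual size makes decompression fail
	 * spuriously even though the buffer is large enough.
	 */
	dlen = LZ4_decompress_safe(compressed, buf, clen, sizeof(buf));
	printf("decompressed %d bytes\n", dlen);
	return dlen < 0;
}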
>>> the additional space, and might fail spuriously due to lack of space.
>>>
>>> So let's multiply out_len by WORST_COMPR_FACTOR after allocating the
>>> buffer. Doing so is guaranteed not to overflow, given that the preceding
>>> kmalloc_array() call would have failed otherwise.
>>>
>>> Signed-off-by: Ard Biesheuvel <ardb@...nel.org>
>>> ---
>>> fs/ubifs/journal.c | 2 ++
>>> 1 file changed, 2 insertions(+)
>>>
>>> diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
>>> index dc52ac0f4a345f30..4e5961878f336033 100644
>>> --- a/fs/ubifs/journal.c
>>> +++ b/fs/ubifs/journal.c
>>> @@ -1493,6 +1493,8 @@ static int truncate_data_node(const struct ubifs_info *c, const struct inode *in
>>> if (!buf)
>>> return -ENOMEM;
>>>
>>> + out_len *= WORST_COMPR_FACTOR;
>>> +
>>> dlen = le32_to_cpu(dn->ch.len) - UBIFS_DATA_NODE_SZ;
>>> data_size = dn_size - UBIFS_DATA_NODE_SZ;
>>> compr_type = le16_to_cpu(dn->compr_type);
>>
>> This looks like another case where data that would be expanded by compression
>> should just be stored uncompressed instead.
>>
>> In fact, it seems that UBIFS does that already. ubifs_compress() has this:
>>
>> /*
>> * If the data compressed only slightly, it is better to leave it
>> * uncompressed to improve read speed.
>> */
>> if (in_len - *out_len < UBIFS_MIN_COMPRESS_DIFF)
>> goto no_compr;
>>
>> So it's unclear why the WORST_COMPR_FACTOR thing is needed at all.
>>
>
> It is not. The buffer is used for decompression in the truncation
> path, so none of this logic even matters. Even if the subsequent
> recompression of the truncated data node could result in expansion
> beyond the uncompressed size of the original data (which seems
> impossible to me), increasing the size of this buffer would not help
> as it is the input buffer for the compression not the output buffer.
>
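
To illustrate Ard's point, here is a hedged user-space sketch (liblz4,
hypothetical names and sizes, not the actual truncate_data_node()
code): the worst-case buffer is the output of the decompression step
and only the input of the recompression step, so enlarging it does
nothing for the compressor's output capacity:

/* Hypothetical user-space sketch of the truncation flow; not the
 * actual truncate_data_node() code.  Build with: gcc sketch.c -llz4
 */
#include <lz4.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NODE_SIZE 4096	/* stand-in for a data node's uncompressed size */

int main(void)
{
	char data[NODE_SIZE];
	char compressed[LZ4_COMPRESSBOUND(NODE_SIZE)];
	char *buf;
	int clen, dlen, new_len = 100;	/* truncate down to 100 bytes */

	memset(data, 'x', sizeof(data));
	clen = LZ4_compress_default(data, compressed, sizeof(data),
				    sizeof(compressed));
	if (clen <= 0)
		return 1;

	/* The worst-case buffer is the *output* of decompression ... */
	buf = malloc(NODE_SIZE);
	if (!buf)
		return 1;
	dlen = LZ4_decompress_safe(compressed, buf, clen, NODE_SIZE);
	if (dlen < 0)
		return 1;

	/*
	 * ... and only the *input* of the recompression of the
	 * truncated data.  The compressor's output capacity is bounded
	 * by the destination buffer, not by 'buf', so making 'buf'
	 * larger than the uncompressed size cannot help here.
	 */
	clen = LZ4_compress_default(buf, compressed, new_len,
				    sizeof(compressed));
	printf("recompressed %d truncated bytes into %d bytes\n",
	       new_len, clen);
	free(buf);
	return clen <= 0;
}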