Message-ID: <d763d340-a982-c56f-26b5-5c7045301c5b@oppo.com>
Date:   Wed, 17 Mar 2021 11:54:15 +0800
From:   Huang Jianan <huangjianan@...o.com>
To:     Chao Yu <yuchao0@...wei.com>, linux-erofs@...ts.ozlabs.org
Cc:     linux-kernel@...r.kernel.org, guoweichao@...o.com,
        zhangshiming@...o.com
Subject: Re: [PATCH v6 2/2] erofs: decompress in endio if possible


On 2021/3/16 16:26, Chao Yu wrote:
> Hi Jianan,
>
> On 2021/3/16 11:15, Huang Jianan via Linux-erofs wrote:
>> z_erofs_decompressqueue_endio may not be executed in the atomic
>> context, for example, when dm-verity is turned on. In this scenario,
>> data can be decompressed directly to get rid of additional kworker
>> scheduling overhead. Also, it makes no sense to apply synchronous
>> decompression for such case.
>
> It looks like this patch does more than one thing:
> - combine dm-verity and erofs workqueue
> - change policy of decompression in context of thread
>
> Normally, we do one thing in one patch; that way we benefit when
> backporting patches and bisecting a problematic patch with minimum
> granularity, and it also helps reviewers focus on a single piece of
> code logic that follows the patch's goal.
>
> So IMO, it would be better to separate this patch into two.
>
Thanks for the suggestion, I will send a new patch set.
> One more thing is could you explain a little bit more about why we need
> to change the policy of decompression in the context of a thread? For
> better performance?
>
Sync decompression was introduced to get rid of the additional kworker
scheduling overhead. But there is no such overhead if we decompress
directly in z_erofs_decompressqueue_endio. Therefore, it should be better
to turn off sync decompression to avoid the current thread waiting in
z_erofs_runqueue.

> BTW, code looks clean to me. :)
>
> Thanks,
>
>>
>> Signed-off-by: Huang Jianan <huangjianan@...o.com>
>> Signed-off-by: Guo Weichao <guoweichao@...o.com>
>> Reviewed-by: Gao Xiang <hsiangkao@...hat.com>
>> ---
>>   fs/erofs/internal.h |  2 ++
>>   fs/erofs/super.c    |  1 +
>>   fs/erofs/zdata.c    | 15 +++++++++++++--
>>   3 files changed, 16 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
>> index 67a7ec945686..fbc4040715be 100644
>> --- a/fs/erofs/internal.h
>> +++ b/fs/erofs/internal.h
>> @@ -50,6 +50,8 @@ struct erofs_fs_context {
>>   #ifdef CONFIG_EROFS_FS_ZIP
>>       /* current strategy of how to use managed cache */
>>       unsigned char cache_strategy;
>> +    /* strategy of sync decompression (false - auto, true - force on) */
>> +    bool readahead_sync_decompress;
>>
>>      /* threshold for decompression synchronously */
>>       unsigned int max_sync_decompress_pages;
>> diff --git a/fs/erofs/super.c b/fs/erofs/super.c
>> index d5a6b9b888a5..0445d09b6331 100644
>> --- a/fs/erofs/super.c
>> +++ b/fs/erofs/super.c
>> @@ -200,6 +200,7 @@ static void erofs_default_options(struct erofs_fs_context *ctx)
>>   #ifdef CONFIG_EROFS_FS_ZIP
>>       ctx->cache_strategy = EROFS_ZIP_CACHE_READAROUND;
>>       ctx->max_sync_decompress_pages = 3;
>> +    ctx->readahead_sync_decompress = false;
>>   #endif
>>   #ifdef CONFIG_EROFS_FS_XATTR
>>       set_opt(ctx, XATTR_USER);
>> diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
>> index 6cb356c4217b..25a0c4890d0a 100644
>> --- a/fs/erofs/zdata.c
>> +++ b/fs/erofs/zdata.c
>> @@ -706,9 +706,12 @@ static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe,
>>       goto out;
>>   }
>>
>> +static void z_erofs_decompressqueue_work(struct work_struct *work);
>>  static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,
>>                         bool sync, int bios)
>>   {
>> +    struct erofs_sb_info *const sbi = EROFS_SB(io->sb);
>> +
>>       /* wake up the caller thread for sync decompression */
>>       if (sync) {
>>           unsigned long flags;
>> @@ -720,8 +723,15 @@ static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,
>>           return;
>>       }
>>
>> -    if (!atomic_add_return(bios, &io->pending_bios))
>> +    if (atomic_add_return(bios, &io->pending_bios))
>> +        return;
>> +    /* Use workqueue and sync decompression for atomic contexts only */
>> +    if (in_atomic() || irqs_disabled()) {
>>           queue_work(z_erofs_workqueue, &io->u.work);
>> +        sbi->ctx.readahead_sync_decompress = true;
>> +        return;
>> +    }
>> +    z_erofs_decompressqueue_work(&io->u.work);
>>   }
>>
>>  static bool z_erofs_page_is_invalidated(struct page *page)
>> @@ -1333,7 +1343,8 @@ static void z_erofs_readahead(struct readahead_control *rac)
>>       struct erofs_sb_info *const sbi = EROFS_I_SB(inode);
>>
>>      unsigned int nr_pages = readahead_count(rac);
>> -    bool sync = (nr_pages <= sbi->ctx.max_sync_decompress_pages);
>> +    bool sync = (sbi->ctx.readahead_sync_decompress &&
>> +            nr_pages <= sbi->ctx.max_sync_decompress_pages);
>>      struct z_erofs_decompress_frontend f = DECOMPRESS_FRONTEND_INIT(inode);
>>       struct page *page, *head = NULL;
>>       LIST_HEAD(pagepool);
>>
