Message-ID: <853796e3-fd44-4fc2-8fd2-5810342a6ebe@linux.alibaba.com>
Date: Sat, 22 Nov 2025 20:24:40 +0800
From: Gao Xiang <hsiangkao@...ux.alibaba.com>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Yuwen Chen <ywen.chen@...mail.com>, akpm@...ux-foundation.org,
 bgeffon@...gle.com, licayy@...look.com, linux-block@...r.kernel.org,
 linux-kernel@...r.kernel.org, linux-mm@...ck.org, minchan@...nel.org,
 richardycc@...gle.com
Subject: Re: [RFC PATCHv5 0/6] zram: introduce writeback bio batching



On 2025/11/22 18:07, Sergey Senozhatsky wrote:
> On (25/11/21 20:21), Gao Xiang wrote:
>>>>> I think the page-fault latency of a written-back page is
>>>>> expected to be higher; that's a trade-off that we agree on.
>>>>> Off the top of my head, I don't think we can do anything
>>>>> about it.
>>>>>
>>>>> Is a loop device always used as the writeback target?
>>>>
>>>> On the Android platform, currently only the loop device is
>>>> supported as the writeback backend, possibly for security
>>>> reasons. I noticed that EROFS has implemented
>>>> CONFIG_EROFS_FS_BACKED_BY_FILE to reduce this latency; I think
>>>> ZRAM might be able to do the same.
>>>
>>> I see.  Do you use S/W or H/W compression?
>>
>> No, I'm pretty sure it's impossible for zram to perform file
>> I/O without another thread context (e.g. a workqueue),
>> especially for write I/O; this is unlike erofs:
>>
>> EROFS can do this because it is itself a filesystem: it is a
>> separate fs which only reads (and never writes) its backing
>> files on erofs and/or other fses, much like vfs/overlayfs
>> read_iter() going directly into the backing fses without
>> nested contexts. (Even if loop is used, loop creates its own
>> thread contexts with workqueues, which is safe; see the
>> sketch below.)
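>>
>> (A minimal sketch of that deferral pattern, assuming nothing
>> about loop's actual implementation; every name below is
>> illustrative:)
>>
>>   /* Bounce each bio to a driver-owned workqueue so that the
>>    * backing-file I/O never runs in the submitter's task
>>    * context. */
>>   struct vbd_cmd {
>>           struct work_struct work;
>>           struct bio *bio;
>>   };
>>
>>   static struct workqueue_struct *vbd_wq; /* hypothetical */
>>
>>   static void vbd_do_file_io(struct work_struct *work)
>>   {
>>           struct vbd_cmd *cmd = container_of(work, struct vbd_cmd, work);
>>
>>           /* Worker task context: current->journal_info is
>>            * NULL here, so the backing fs starts clean. */
>>           vbd_rw_backing_file(cmd->bio); /* hypothetical helper */
>>           bio_endio(cmd->bio);
>>           kfree(cmd);
>>   }
>>
>>   static void vbd_submit_bio(struct bio *bio)
>>   {
>>           struct vbd_cmd *cmd = kmalloc(sizeof(*cmd), GFP_NOIO);
>>
>>           if (!cmd) {
>>                   bio_io_error(bio);
>>                   return;
>>           }
>>           cmd->bio = bio;
>>           INIT_WORK(&cmd->work, vbd_do_file_io);
>>           queue_work(vbd_wq, &cmd->work);
>>   }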
>>
>> On the other hand, zram/loop acts as a virtual block device,
>> which is rather different: you could format the zram device
>> with ext4 while it writes back to another ext4/btrfs, like
>> this:
>>
>>    zram(ext4) -> backing ext4/btrfs
>>
>> It's unsafe (in addition to the GFP_NOIO allocation
>> restriction) because zram cannot control the contexts that
>> those existing ext4/btrfs instances set up:
>>
>>   - One concrete example: if the upper ext4 on zram sets
>> current->journal_info = xxx and then calls submit_bio() into
>> zram, the backing ext4 gets confused, since it assumes
>> current->journal_info == NULL on entry. So a virtual block
>> device needs another thread context to isolate these two
>> uncontrolled contexts, as the sketch below illustrates.
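>>
>> (To make the hazard concrete, a hypothetical call chain;
>> ext4_journal_start() and current->journal_info are real, but
>> the synchronous zram write-out in step 3 is an assumption for
>> illustration:)
>>
>>   /* 1. The upper ext4 (on /dev/zram0) opens a transaction;
>>    *    jbd2 stashes the handle in current->journal_info. */
>>   handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, credits);
>>
>>   /* 2. Writeback in the same task then submits a bio to
>>    *    zram: */
>>   submit_bio(bio);
>>
>>   /* 3. If zram wrote its backing file synchronously from
>>    *    here, the backing ext4 would start its own transaction
>>    *    while current->journal_info still points at the
>>    *    *upper* fs's handle rather than NULL -- exactly the
>>    *    confusion described above. */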
>>
>> So I don't think it's feasible for block drivers to behave
>> like this, especially when mixing in writes to backing
>> filesystems.
> 
> Sorry, I don't completely understand your point, but a
> backing device is never expected to have any fs on it.  So
> from your email:

zram(ext4) means the zram device itself is formatted as ext4.

> 
>> zram(ext4) -> backing ext4/btrfs
> 
> This is not a valid configuration, as far as I'm concerned.
> Unless I'm missing your point.

Why is it not valid? zram can be used as a regular virtual
block device: you can format it with any filesystem and then
mount it.

Thanks,
Gao Xiang

