Message-ID: <02adb965-ad95-2b75-f48a-51a4b75ad88b@huaweicloud.com>
Date: Tue, 13 Aug 2024 10:21:34 +0800
From: Zhang Yi <yi.zhang@...weicloud.com>
To: yangerkun <yangerkun@...wei.com>, linux-xfs@...r.kernel.org,
 linux-fsdevel@...r.kernel.org
Cc: linux-kernel@...r.kernel.org, djwong@...nel.org, hch@...radead.org,
 brauner@...nel.org, david@...morbit.com, jack@...e.cz, willy@...radead.org,
 yi.zhang@...wei.com, chengzhihao1@...wei.com, yukuai3@...wei.com
Subject: Re: [PATCH v2 3/6] iomap: advance the ifs allocation if we have more
 than one block per folio

On 2024/8/12 20:47, yangerkun wrote:
> 
> 
> On 2024/8/12 20:11, Zhang Yi wrote:
>> From: Zhang Yi <yi.zhang@...wei.com>
>>
>> Currently we allocate the ifs when writing back dirty folios in
>> iomap_writepage_map() if i_blocks_per_folio is larger than one, so
>> after a buffered write to an entire folio, no ifs is attached until
>> writeback starts. If we partially truncate that folio in the meantime,
>> iomap_invalidate_folio() cannot clear the corresponding blocks' dirty
>> bits as expected. Fix this by advancing the ifs allocation to
>> __iomap_write_begin().
>>
>> Signed-off-by: Zhang Yi <yi.zhang@...wei.com>
>> ---
>>   fs/iomap/buffered-io.c | 17 ++++++++++++-----
>>   1 file changed, 12 insertions(+), 5 deletions(-)
>>
>> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
>> index 763deabe8331..79031b7517e5 100644
>> --- a/fs/iomap/buffered-io.c
>> +++ b/fs/iomap/buffered-io.c
>> @@ -699,6 +699,12 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
>>       size_t from = offset_in_folio(folio, pos), to = from + len;
>>       size_t poff, plen;
>>  
>> +    if (nr_blocks > 1) {
>> +        ifs = ifs_alloc(iter->inode, folio, iter->flags);
>> +        if ((iter->flags & IOMAP_NOWAIT) && !ifs)
>> +            return -EAGAIN;
>> +    }
>> +
>>       /*
>>        * If the write or zeroing completely overlaps the current folio, then
>>        * entire folio will be dirtied so there is no need for
> 
> The comment above needs to be updated too.

Will update as well, thanks for pointing this out.

Thanks,
Yi.

> 
> 
>> @@ -710,10 +716,6 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
>>           pos + len >= folio_pos(folio) + folio_size(folio))
>>           return 0;
>>  
>> -    ifs = ifs_alloc(iter->inode, folio, iter->flags);
>> -    if ((iter->flags & IOMAP_NOWAIT) && !ifs && nr_blocks > 1)
>> -        return -EAGAIN;
>> -
>>       if (folio_test_uptodate(folio))
>>           return 0;
>>  
>> @@ -1928,7 +1930,12 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>>       WARN_ON_ONCE(end_pos <= pos);
>>         if (i_blocks_per_folio(inode, folio) > 1) {
>> -        if (!ifs) {
>> +        /*
>> +         * This should not happen, since we always allocate the ifs in
>> +         * iomap_folio_mkwrite_iter() and, when there is more than one
>> +         * block per folio, in __iomap_write_begin().
>> +         */
>> +        if (WARN_ON_ONCE(!ifs)) {
>>               ifs = ifs_alloc(inode, folio, 0);
>>               iomap_set_range_dirty(folio, 0, end_pos - pos);
>>           }

