Message-ID: <acb76cdb-a54e-48e0-ba18-a2272d84f0ab@gmail.com>
Date: Mon, 10 Jun 2024 14:56:09 +0100
From: Usama Arif <usamaarif642@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: akpm@...ux-foundation.org, hannes@...xchg.org, david@...hat.com,
 ying.huang@...el.com, hughd@...gle.com, yosryahmed@...gle.com,
 nphamcs@...il.com, chengming.zhou@...ux.dev, linux-mm@...ck.org,
 linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [PATCH v3 1/2] mm: store zero pages to be swapped out in a bitmap


On 10/06/2024 14:07, Matthew Wilcox wrote:
> On Mon, Jun 10, 2024 at 01:15:59PM +0100, Usama Arif wrote:
>> +	if (is_folio_zero_filled(folio)) {
>> +		swap_zeromap_folio_set(folio);
>> +		folio_start_writeback(folio);
>> +		folio_unlock(folio);
>> +		folio_end_writeback(folio);
> What's the point?  As far as I can see, the only thing this is going to
> do is spend a lot of time messing with various counters only to end up
> with incrementing NR_WRITTEN, which is wrong because you didn't actually
> write it.
>

I am guessing what you are suggesting is to just do this?

     if (is_folio_zero_filled(folio)) {
         swap_zeromap_folio_set(folio);
         folio_unlock(folio);
         return 0;
     }

This is what I did initially while developing this, but when I started
looking into why zswap_store does folio_start_writeback, folio_unlock
and folio_end_writeback, I found:

https://elixir.bootlin.com/linux/v6.9.3/source/Documentation/filesystems/locking.rst#L336

"If no I/O is submitted, the filesystem must run end_page_writeback() 
against the page before returning from writepage."

and

https://elixir.bootlin.com/linux/v6.9.3/source/Documentation/filesystems/locking.rst#L349

"Note, failure to run either redirty_page_for_writepage() or the 
combination of
set_page_writeback()/end_page_writeback() on a page submitted to writepage
will leave the page itself marked clean but it will be tagged as dirty 
in the
radix tree.  This incoherency can lead to all sorts of hard-to-debug 
problems
in the filesystem like having dirty inodes at umount and losing written 
data.
"

If we have zswap enabled, the zero-filled pages (in fact, any page that gets
compressed) are saved in a zswap_entry and NR_WRITTEN is wrongly incremented
for them. So the behaviour for NR_WRITTEN does not change with this patch when
encountering zero pages with zswap enabled (even if it is wrong).

This patch just lifts the optimization out of zswap [1] into swap, so that it
always runs.

In order to fix NR_WRITTEN accounting for zswap, for this patch series, and
for any other case where no I/O is submitted but end_page_writeback is called
before returning from writepage, maybe we could add an argument to
__folio_end_writeback like below? There are a lot of callers of
folio_end_writeback, and the behaviour of zero pages with regards to
NR_WRITTEN doesn't change with or without this patch series when zswap is
enabled, so maybe we could keep this independent of this series? I would be
happy to submit this as a separate patch if it makes sense.


diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 81b2e4128d26..415037f511c2 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -3042,7 +3042,7 @@ static void wb_inode_writeback_end(struct bdi_writeback *wb)
         spin_unlock_irqrestore(&wb->work_lock, flags);
  }

-bool __folio_end_writeback(struct folio *folio)
+bool __folio_end_writeback(struct folio *folio, bool nr_written_increment)
  {
         long nr = folio_nr_pages(folio);
         struct address_space *mapping = folio_mapping(folio);
@@ -3078,7 +3078,8 @@ bool __folio_end_writeback(struct folio *folio)

         lruvec_stat_mod_folio(folio, NR_WRITEBACK, -nr);
         zone_stat_mod_folio(folio, NR_ZONE_WRITE_PENDING, -nr);
-       node_stat_mod_folio(folio, NR_WRITTEN, nr);
+       if (nr_written_increment)
+               node_stat_mod_folio(folio, NR_WRITTEN, nr);
         folio_memcg_unlock(folio);

         return ret;
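
The caller side would then look something like this (again just a sketch: it
assumes the new argument is also threaded through folio_end_writeback() in
mm/filemap.c, which the diff above doesn't show, and that all I/O-submitting
callers keep passing true):

        /* Zero-filled path with the hypothetical nr_written_increment flag. */
        if (is_folio_zero_filled(folio)) {
                swap_zeromap_folio_set(folio);
                folio_start_writeback(folio);
                folio_unlock(folio);
                /* No I/O was submitted, so don't account the folio in NR_WRITTEN. */
                folio_end_writeback(folio, false);
                return 0;
        }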


[1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/





