Message-ID: <YMojXxiyYTRaQvJs@casper.infradead.org>
Date:   Wed, 16 Jun 2021 17:14:23 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Christoph Hellwig <hch@....de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>, Al Viro <viro@...iv.linux.org.uk>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/6] mm/writeback: Move __set_page_dirty() to core mm

On Tue, Jun 15, 2021 at 05:23:37PM +0100, Matthew Wilcox (Oracle) wrote:
> -/*
> - * Mark the page dirty, and set it dirty in the page cache, and mark the inode
> - * dirty.
> - *
> - * If warn is true, then emit a warning if the page is not uptodate and has
> - * not been truncated.
> - *
> - * The caller must hold lock_page_memcg().
> - */

Checking against my folio tree, I found a bit of extra documentation
that I had added but which didn't make it into this submission.  Let me
know if it's useful, and if so I can submit it as a fixup patch:

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 73b937955cc1..2072787d9b44 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2466,7 +2466,11 @@ void account_page_cleaned(struct page *page, struct address_space *mapping,
  * If warn is true, then emit a warning if the page is not uptodate and has
  * not been truncated.
  *
- * The caller must hold lock_page_memcg().
+ * The caller must hold lock_page_memcg().  Most callers have the page
+ * locked.  A few have the page blocked from truncation through other
+ * means (eg zap_page_range() has it mapped and is holding the page table
+ * lock).  This can also be called from mark_buffer_dirty(), which I
+ * cannot prove is always protected against truncate.
  */
 void __set_page_dirty(struct page *page, struct address_space *mapping,
                             int warn)
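
For concreteness, the main caller pattern after this series looks
roughly like the below (a simplified sketch from my reading of
__set_page_dirty_nobuffers() in mm/page-writeback.c, not part of the
patch itself):

int __set_page_dirty_nobuffers(struct page *page)
{
	/* Taking lock_page_memcg() satisfies __set_page_dirty()'s rule */
	lock_page_memcg(page);
	if (!TestSetPageDirty(page)) {
		struct address_space *mapping = page_mapping(page);

		if (!mapping) {
			unlock_page_memcg(page);
			return 1;
		}
		__set_page_dirty(page, mapping, !PagePrivate(page));
		unlock_page_memcg(page);

		if (mapping->host)
			__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
		return 1;
	}
	unlock_page_memcg(page);
	return 0;
}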


... the added comment is a bit "notes to self", so perhaps someone can
clean it up.  In particular, someone who knows the buffer code better
than I do may be able to prove that mark_buffer_dirty() is always
protected against truncate.
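
For reference, the mark_buffer_dirty() path in question, again as a
simplified sketch from my reading of fs/buffer.c rather than anything
authoritative:

void mark_buffer_dirty(struct buffer_head *bh)
{
	WARN_ON_ONCE(!buffer_uptodate(bh));

	/* Optimise the already-dirty case without taking any locks */
	if (buffer_dirty(bh)) {
		smp_mb();
		if (buffer_dirty(bh))
			return;
	}

	if (!test_set_buffer_dirty(bh)) {
		struct page *page = bh->b_page;
		struct address_space *mapping = NULL;

		lock_page_memcg(page);
		if (!TestSetPageDirty(page)) {
			mapping = page_mapping(page);
			if (mapping)
				/* warn == 0: no not-uptodate warning */
				__set_page_dirty(page, mapping, 0);
		}
		unlock_page_memcg(page);
		if (mapping)
			__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
	}
}

Note that this path already passes warn == 0, so the not-uptodate
warning is suppressed for buffer-backed dirtying in any case.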
