Date:   Mon, 20 Sep 2021 15:35:08 -0700
From:   Yang Shi <shy828301@...il.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     Hugh Dickins <hughd@...gle.com>, cfijalkovich@...gle.com,
        song@...nel.org, Andrew Morton <akpm@...ux-foundation.org>,
        Hao Sun <sunhao.th@...il.com>, Linux MM <linux-mm@...ck.org>,
        Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Song Liu <songliubraving@...com>
Subject: Re: [PATCH] fs: buffer: check huge page size instead of single page
 for invalidatepage

On Mon, Sep 20, 2021 at 2:50 PM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Mon, Sep 20, 2021 at 02:23:41PM -0700, Yang Shi wrote:
> > On Sun, Sep 19, 2021 at 7:41 AM Matthew Wilcox <willy@...radead.org> wrote:
> > >
> > > On Fri, Sep 17, 2021 at 05:07:03PM -0700, Yang Shi wrote:
> > > > > The debugging showed the page passed to invalidatepage is a huge page
> > > > > and the length is the size of huge page instead of single page due to
> > > > > read only FS THP support.  But block_invalidatepage() would throw BUG if
> > > > > the size is greater than single page.
> > >
> > > Things have already gone wrong before we get to this point.  See
> > > do_dentry_open().  You aren't supposed to be able to get a writable file
> > > descriptor on a file which has had huge pages added to the page cache
> > > without the filesystem's knowledge.  That's the problem that needs to
> > > be fixed.
> >
> > I don't quite understand your point here. Do you mean do_dentry_open()
> > should fail for such cases instead of truncating the page cache?
>
> No, do_dentry_open() should have truncated the page cache when it was
> called and found that there were THPs in the cache.  Then khugepaged
> should see that someone has the file open for write and decline to create
> new THPs.  So it shouldn't be possible to get here with THPs in the cache.

AFAICT, it already does so.

In do_dentry_open():
        /*
         * XXX: Huge page cache doesn't support writing yet. Drop all page
         * cache for this file before processing writes.
         */
        if (f->f_mode & FMODE_WRITE) {
                /*
                 * Paired with smp_mb() in collapse_file() to ensure nr_thps
                 * is up to date and the update to i_writecount by
                 * get_write_access() is visible. Ensures subsequent insertion
                 * of THPs into the page cache will fail.
                 */
                smp_mb();
                if (filemap_nr_thps(inode->i_mapping))
                        truncate_pagecache(inode, 0);
        }


In khugepaged:
                filemap_nr_thps_inc(mapping);
                /*
                 * Paired with smp_mb() in do_dentry_open() to ensure
                 * i_writecount is up to date and the update to nr_thps is
                 * visible. Ensures the page cache will be truncated if the
                 * file is opened writable.
                 */
                smp_mb();
                if (inode_is_open_for_write(mapping->host)) {
                        result = SCAN_FAIL;
                        __mod_lruvec_page_state(new_page, NR_FILE_THPS, -nr);
                        filemap_nr_thps_dec(mapping);
                        goto xa_locked;
                }

But I'm not quite sure whether any race condition is left between the two paths.
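For my own understanding, the interlock looks like the usual store; full
barrier; load pattern on both sides: each side publishes its own state
(i_writecount vs. nr_thps) before checking the other's, so at least one of
the two must observe the other's update. A minimal user-space sketch of
just that ordering, using C11 atomics in place of smp_mb() and hypothetical
names standing in for get_write_access()/filemap_nr_thps() (this is only an
illustration of the intended ordering, not the kernel code):

        /* sketch only: models the barrier pairing, not the real paths */
        #include <stdatomic.h>
        #include <pthread.h>
        #include <stdio.h>

        static atomic_int i_writecount;   /* stands in for get_write_access() */
        static atomic_int nr_thps;        /* stands in for filemap_nr_thps()  */

        /* do_dentry_open() side: announce the writer, then look for THPs. */
        static void *opener(void *arg)
        {
                atomic_fetch_add_explicit(&i_writecount, 1, memory_order_relaxed);
                atomic_thread_fence(memory_order_seq_cst);      /* smp_mb() */
                if (atomic_load_explicit(&nr_thps, memory_order_relaxed))
                        printf("opener: THPs present, truncate page cache\n");
                return NULL;
        }

        /* collapse_file() side: announce the THP, then look for writers. */
        static void *collapser(void *arg)
        {
                atomic_fetch_add_explicit(&nr_thps, 1, memory_order_relaxed);
                atomic_thread_fence(memory_order_seq_cst);      /* smp_mb() */
                if (atomic_load_explicit(&i_writecount, memory_order_relaxed)) {
                        printf("collapser: open for write, SCAN_FAIL\n");
                        atomic_fetch_sub_explicit(&nr_thps, 1, memory_order_relaxed);
                }
                return NULL;
        }

        int main(void)
        {
                pthread_t a, b;
                pthread_create(&a, NULL, opener, NULL);
                pthread_create(&b, NULL, collapser, NULL);
                pthread_join(a, NULL);
                pthread_join(b, NULL);
                return 0;
        }

If I read the barriers right, either the opener sees nr_thps != 0 and
truncates, or khugepaged sees the file open for write and backs off, so the
question is really whether any window remains between the truncate in
do_dentry_open() and the actual writes hitting the page cache.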
