Message-ID: <20190212235743.GB95899@google.com>
Date:   Tue, 12 Feb 2019 16:57:43 -0700
From:   Yu Zhao <yuzhao@...gle.com>
To:     William Kucharski <william.kucharski@...cle.com>
Cc:     Matthew Wilcox <willy@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Amir Goldstein <amir73il@...il.com>,
        Dave Chinner <david@...morbit.com>,
        "Darrick J . Wong" <darrick.wong@...cle.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Souptick Joarder <jrdr.linux@...il.com>,
        Hugh Dickins <hughd@...gle.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/shmem: make find_get_pages_range() work for huge page

On Thu, Jan 10, 2019 at 04:43:57AM -0700, William Kucharski wrote:
> 
> 
> > On Jan 9, 2019, at 8:08 PM, Yu Zhao <yuzhao@...gle.com> wrote:
> > 
> > find_get_pages_range() and find_get_pages_range_tag() already
> > correctly increment the reference count on the head page when
> > they see a compound page, but they may still use the page index
> > from a tail page. The page index of a tail page is always zero,
> > so these functions don't work on huge shmem. This hasn't been a
> > problem because, AFAIK, nobody calls these functions on (huge)
> > shmem. Fix them anyway, just in case.
> > 
> > Signed-off-by: Yu Zhao <yuzhao@...gle.com>
> > ---
> > mm/filemap.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 81adec8ee02c..cf5fd773314a 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -1704,7 +1704,7 @@ unsigned find_get_pages_range(struct address_space *mapping, pgoff_t *start,
> > 
> > 		pages[ret] = page;
> > 		if (++ret == nr_pages) {
> > -			*start = page->index + 1;
> > +			*start = xas.xa_index + 1;
> > 			goto out;
> > 		}
> > 		continue;
> > @@ -1850,7 +1850,7 @@ unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
> > 
> > 		pages[ret] = page;
> > 		if (++ret == nr_pages) {
> > -			*index = page->index + 1;
> > +			*index = xas.xa_index + 1;
> > 			goto out;
> > 		}
> > 		continue;
> > -- 
> 
> While this works, it seems like this would be more readable for future maintainers were it to
> instead squirrel away the value for *start/*index when ret was zero on the first iteration through
> the loop.

I'm not sure how that would be more readable, and it seems
independent of the problem this patch fixes.
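
To make the failure mode concrete, here is a small userspace model of
it (hypothetical types and names, not kernel code; the numbers are
made up for illustration): a compound page's tail slots report index
0, so resuming a batched lookup from page->index + 1 jumps back near
the start of the mapping, while resuming from the iterator's own
position (what xas.xa_index gives us) continues just past the batch.

/*
 * Simplified userspace model of the resume-index bug; fake_page and
 * the constants below are illustrative only, not kernel structures.
 */
#include <stdio.h>

struct fake_page {
	unsigned long index;	/* tail pages carry index 0 */
};

int main(void)
{
	/* Model a 4-slot huge page cached at indices 8..11. */
	struct fake_page head = { .index = 8 };
	struct fake_page tail = { .index = 0 };

	/* The batch fills up while the iterator sits on slot 10,
	 * which happens to hold a tail of the compound page. */
	unsigned long xa_index = 10;
	struct fake_page *page = &tail;

	printf("head index %lu, tail index %lu\n", head.index, tail.index);
	printf("resume from page->index + 1: %lu (wraps back to the start)\n",
	       page->index + 1);
	printf("resume from iterator + 1:    %lu (continues past the batch)\n",
	       xa_index + 1);
	return 0;
}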

> Though xa_index is designed to hold the first index of the entry, it seems inappropriate to have
> these routines dereference elements of xas directly; I guess it depends on how opaque we want to keep
> xas and struct xa_state.

It seems to me it's perfectly fine to use the fields of xas directly,
and it's already being done this way throughout the file.
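
For context, the lookup loop in question has roughly the shape below
(a condensed, kernel-style sketch under my reading of mm/filemap.c,
not a verbatim copy of find_get_pages_range(); the function name is
made up, and the speculative reference counting on the head page is
trimmed). The last-slot bookkeeping is exactly where the patch
switches from page->index to the iterator's index:

/*
 * Condensed sketch of an XArray batch lookup. Not runnable on its
 * own (kernel code) and not a verbatim copy of find_get_pages_range().
 */
#include <linux/xarray.h>
#include <linux/pagemap.h>

static unsigned sketch_lookup(struct address_space *mapping, pgoff_t *start,
			      pgoff_t end, unsigned int nr_pages,
			      struct page **pages)
{
	XA_STATE(xas, &mapping->i_pages, *start);
	struct page *page;
	unsigned ret = 0;

	rcu_read_lock();
	xas_for_each(&xas, page, end) {
		if (xas_retry(&xas, page))
			continue;
		if (xa_is_value(page))	/* shadow/swap entry, skip it */
			continue;

		pages[ret] = page;
		if (++ret == nr_pages) {
			/*
			 * xas.xa_index is the slot the iterator stands
			 * on, which stays correct even when 'page' is a
			 * compound tail whose ->index is 0.
			 */
			*start = xas.xa_index + 1;
			break;
		}
	}
	rcu_read_unlock();
	return ret;
}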

> Does anyone else have a feeling one way or the other? I could be persuaded either way.
