Date:   Tue, 2 May 2023 06:30:21 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Lorenzo Stoakes <lstoakes@...il.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jason Gunthorpe <jgg@...pe.ca>, Jens Axboe <axboe@...nel.dk>,
        Matthew Wilcox <willy@...radead.org>,
        Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>,
        Leon Romanovsky <leon@...nel.org>,
        Christian Benvenuti <benve@...co.com>,
        Nelson Escobar <neescoba@...co.com>,
        Bernard Metzler <bmt@...ich.ibm.com>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Jiri Olsa <jolsa@...nel.org>,
        Namhyung Kim <namhyung@...nel.org>,
        Ian Rogers <irogers@...gle.com>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Bjorn Topel <bjorn@...nel.org>,
        Magnus Karlsson <magnus.karlsson@...el.com>,
        Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        "David S . Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Christian Brauner <brauner@...nel.org>,
        Richard Cochran <richardcochran@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        John Fastabend <john.fastabend@...il.com>,
        linux-fsdevel@...r.kernel.org, linux-perf-users@...r.kernel.org,
        netdev@...r.kernel.org, bpf@...r.kernel.org,
        Oleg Nesterov <oleg@...hat.com>,
        Jason Gunthorpe <jgg@...dia.com>,
        John Hubbard <jhubbard@...dia.com>, Jan Kara <jack@...e.cz>,
        "Kirill A . Shutemov" <kirill@...temov.name>,
        Pavel Begunkov <asml.silence@...il.com>,
        Mika Penttila <mpenttil@...hat.com>,
        David Hildenbrand <david@...hat.com>,
        Dave Chinner <david@...morbit.com>,
        Theodore Ts'o <tytso@....edu>, Peter Xu <peterx@...hat.com>
Subject: Re: [PATCH v6 3/3] mm/gup: disallow FOLL_LONGTERM GUP-fast writing
 to file-backed mappings

On Tue, May 02, 2023 at 02:08:10PM +0200, Peter Zijlstra wrote:
> On Tue, May 02, 2023 at 12:25:54PM +0100, Lorenzo Stoakes wrote:
> > On Tue, May 02, 2023 at 01:13:34PM +0200, Peter Zijlstra wrote:
> > > On Tue, May 02, 2023 at 12:11:49AM +0100, Lorenzo Stoakes wrote:
> > > > @@ -95,6 +96,77 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
> > > >  	return folio;
> > > >  }
> > > >
> > > > +#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
> > > > +static bool stabilise_mapping_rcu(struct folio *folio)
> > > > +{
> > > > +	struct address_space *mapping = READ_ONCE(folio->mapping);
> > > > +
> > > > +	rcu_read_lock();
> > > > +
> > > > +	return mapping == READ_ONCE(folio->mapping);
> > >
> > > This doesn't make sense; why bother reading the same thing twice?
> > 
> > The intent is to see whether the folio->mapping has been truncated from
> > underneath us, as per the futex code that Kirill referred to which does
> > something similar [1].
> 
> Yeah, but per that 3rd load you got nothing here. Also that futex code
> did the early load to deal with the !mapping case, but you're not doing
> that.
> 
> > > Who cares if the thing changes from before; what you care about is that
> > > the value you see has stable storage, this doesn't help with that.
> > >
> > > > +}
> > > > +
> > > > +static void unlock_rcu(void)
> > > > +{
> > > > +	rcu_read_unlock();
> > > > +}
> > > > +#else
> > > > +static bool stabilise_mapping_rcu(struct folio *)
> > > > +{
> > > > +	return true;
> > > > +}
> > > > +
> > > > +static void unlock_rcu(void)
> > > > +{
> > > > +}
> > > > +#endif
> > >
> > > Anyway, this all can go away. RCU can't progress while you have
> > > interrupts disabled anyway.
> > 
> > There seems to be other code in the kernel that assumes that this is not
> > the case,
> 
> Yeah, so Paul went back and forth on that a bit. It used to be true in
> the good old days when everything was simple. Then Paul made things
> complicated by separating out sched-RCU, bh-RCU and 'regular' RCU
> flavours.

Almost.  ;-)

The way I made things complicated was instead by creating preemptible RCU
for the real-time effort.  The original non-preemptible RCU was still
required for a number of use cases (for example, waiting for hardware
interrupt handlers), so it had to stay.  Separately, network-based DoS
attacks necessitated adding RCU bh.
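
For anyone following along, the read-side markers for those flavors look
like this (illustration only, nothing to do with the patch; the example
function is made up):

	#include <linux/rcupdate.h>

	/* One reader of each historical flavor, back to back. */
	static void rcu_flavor_readers_example(void)
	{
		rcu_read_lock();	/* preemptible ("regular") RCU */
		rcu_read_unlock();

		rcu_read_lock_bh();	/* RCU bh, added for the DoS case */
		rcu_read_unlock_bh();

		rcu_read_lock_sched();	/* RCU sched, e.g. preempt/irq-off regions */
		rcu_read_unlock_sched();
	}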

> At that point disabling IRQs would only (officially) inhibit sched and
> bh RCU flavours, but not the regular RCU.

Quite right.

> But then some years ago Linus convinced Paul that having all these
> separate RCU flavours with separate QS rules was a big pain in the
> backside and Paul munged them all together again.

What happened was that someone used one flavor of RCU reader and a
different flavor of RCU updater, creating an exploitable bug.  

http://www2.rdrop.com/~paulmck/RCU/cve.2019.01.23e.pdf
https://www.youtube.com/watch?v=hZX1aokdNiY

And Linus asked that this bug be ruled out, so...

> So now, anything that inhibits any of the RCU flavours inhibits them
> all. So disabling IRQs is sufficient.

...for v4.20 and later, exactly.
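
A minimal sketch of what the consolidation means for code like GUP-fast,
assuming v4.20+ semantics (the function name is invented for
illustration): an IRQs-off region is a full-fledged RCU reader, so a
concurrent synchronize_rcu() cannot return until that region ends.

	#include <linux/irqflags.h>
	#include <linux/rcupdate.h>

	static void irqs_off_is_rcu_reader_example(void)
	{
		unsigned long flags;

		local_irq_save(flags);
		/*
		 * On v4.20 and later, RCU-protected data dereferenced here
		 * stays around: updaters waiting via synchronize_rcu()
		 * will wait for this IRQs-off region to finish.
		 */
		local_irq_restore(flags);
	}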

							Thanx, Paul

> > i.e. the futex code, though not sure if that's being run with
> > IRQs disabled...
> 
> That futex code runs in preemptible context, per the lock_page() that
> can sleep, etc. :-)
> 
> > > > +/*
> > > > + * Used in the GUP-fast path to determine whether a FOLL_PIN | FOLL_LONGTERM |
> > > > + * FOLL_WRITE pin is permitted for a specific folio.
> > > > + *
> > > > + * This assumes the folio is stable and pinned.
> > > > + *
> > > > + * Writing to pinned file-backed dirty tracked folios is inherently problematic
> > > > + * (see comment describing the writeable_file_mapping_allowed() function). We
> > > > + * therefore try to avoid the most egregious case of a long-term mapping doing
> > > > + * so.
> > > > + *
> > > > + * This function cannot be as thorough as that one as the VMA is not available
> > > > + * in the fast path, so instead we whitelist known good cases.
> > > > + *
> > > > + * The folio is stable, but the mapping might not be. When truncating for
> > > > + * instance, a zap is performed which triggers TLB shootdown. IRQs are disabled
> > > > + * so we are safe from an IPI, but some architectures use an RCU lock for this
> > > > + * operation, so we acquire an RCU lock to ensure the mapping is stable.
> > > > + */
> > > > +static bool folio_longterm_write_pin_allowed(struct folio *folio)
> > > > +{
> > > > +	bool ret;
> > > > +
> > > > +	/* hugetlb mappings do not require dirty tracking. */
> > > > +	if (folio_test_hugetlb(folio))
> > > > +		return true;
> > > > +
> > >
> > > This:
> > >
> > > > +	if (stabilise_mapping_rcu(folio)) {
> > > > +		struct address_space *mapping = folio_mapping(folio);
> > >
> > > And this is 3rd read of folio->mapping, just for giggles?
> > 
> > I like to giggle :)
> > 
> > Actually this is to handle the various cases in which the mapping might not
> > be what we want (i.e. have PAGE_MAPPING_FLAGS set) which doesn't appear to
> > have a helper exposed for a check. Given previous review about duplication
> > I felt it best to reuse this even though it does access the mapping
> > again... yes, I felt weird about doing that.
> 
> Right, I had a peek inside folio_mapping(), but the point is that this
> 3rd load might see yet *another* value of mapping from the prior two
> loads, rendering them somewhat worthless.
> 
> > > > +
> > > > +		/*
> > > > +		 * Neither anonymous nor shmem-backed folios require
> > > > +		 * dirty tracking.
> > > > +		 */
> > > > +		ret = folio_test_anon(folio) ||
> > > > +			(mapping && shmem_mapping(mapping));
> > > > +	} else {
> > > > +		/* If the mapping is unstable, fallback to the slow path. */
> > > > +		ret = false;
> > > > +	}
> > > > +
> > > > +	unlock_rcu();
> > > > +
> > > > +	return ret;
> > >
> > > then becomes:
> > >
> > >
> > > 	if (folio_test_anon(folio))
> > > 		return true;
> > 
> > This relies on the mapping, so it belongs below the lockdep assert imo.
> 
> Oh, right you are.
> 
> > >
> > > 	/*
> > > 	 * Having IRQs disabled (as per GUP-fast) also inhibits RCU
> > > 	 * grace periods from making progress, IOW, they imply
> > > 	 * rcu_read_lock().
> > > 	 */
> > > 	lockdep_assert_irqs_disabled();
> > >
> > > 	/*
> > > 	 * Inodes and thus address_space are RCU freed and thus safe to
> > > 	 * access at this point.
> > > 	 */
> > > 	mapping = folio_mapping(folio);
> > > 	if (mapping && shmem_mapping(mapping))
> > > 		return true;
> > >
> > > 	return false;
> > >
> > > > +}
> > 
> > I'm more than happy to do this (I'd rather drop the RCU bits if possible)
> > but need to be sure it's safe.
> 
> GUP-fast as a whole relies on it :-)
