Message-ID: <20230503110145.74q5sm5psv7o7nrd@quack3>
Date: Wed, 3 May 2023 13:01:45 +0200
From: Jan Kara <jack@...e.cz>
To: Lorenzo Stoakes <lstoakes@...il.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Jason Gunthorpe <jgg@...pe.ca>, Jens Axboe <axboe@...nel.dk>,
Matthew Wilcox <willy@...radead.org>,
Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>,
Leon Romanovsky <leon@...nel.org>,
Christian Benvenuti <benve@...co.com>,
Nelson Escobar <neescoba@...co.com>,
Bernard Metzler <bmt@...ich.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...nel.org>,
Namhyung Kim <namhyung@...nel.org>,
Ian Rogers <irogers@...gle.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Bjorn Topel <bjorn@...nel.org>,
Magnus Karlsson <magnus.karlsson@...el.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Jonathan Lemon <jonathan.lemon@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Christian Brauner <brauner@...nel.org>,
Richard Cochran <richardcochran@...il.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
linux-fsdevel@...r.kernel.org, linux-perf-users@...r.kernel.org,
netdev@...r.kernel.org, bpf@...r.kernel.org,
Oleg Nesterov <oleg@...hat.com>,
Jason Gunthorpe <jgg@...dia.com>,
John Hubbard <jhubbard@...dia.com>, Jan Kara <jack@...e.cz>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Pavel Begunkov <asml.silence@...il.com>,
Mika Penttila <mpenttil@...hat.com>,
David Hildenbrand <david@...hat.com>,
Dave Chinner <david@...morbit.com>,
Theodore Ts'o <tytso@....edu>, Peter Xu <peterx@...hat.com>,
Matthew Rosato <mjrosato@...ux.ibm.com>,
"Paul E . McKenney" <paulmck@...nel.org>,
Christian Borntraeger <borntraeger@...ux.ibm.com>
Subject: Re: [PATCH v8 3/3] mm/gup: disallow FOLL_LONGTERM GUP-fast writing
to file-backed mappings
On Tue 02-05-23 23:51:35, Lorenzo Stoakes wrote:
> Writing to file-backed dirty-tracked mappings via GUP is inherently broken
> as we cannot rule out folios being cleaned and then a GUP user writing to
> them again and possibly marking them dirty unexpectedly.
>
> This is especially egregious for long-term mappings (as indicated by the
> use of the FOLL_LONGTERM flag), so we disallow this case in GUP-fast as
> we have already done in the slow path.
>
> We have access to less information in the fast path as we cannot examine
> the VMA containing the mapping; however, we can determine whether the
> folio is anonymous or belongs to a whitelisted filesystem - specifically
> hugetlb and shmem mappings.
>
> We take special care to ensure that both the folio and mapping are safe to
> access when performing these checks and document folio_fast_pin_allowed()
> accordingly.
>
> It's important to note that there are no APIs allowing users to specify
> FOLL_FAST_ONLY for a PUP-fast let alone with FOLL_LONGTERM, so we can
> always rely on the fact that if we fail to pin on the fast path, the code
> will fall back to the slow path which can perform the more thorough check.
>
> Suggested-by: David Hildenbrand <david@...hat.com>
> Suggested-by: Kirill A . Shutemov <kirill@...temov.name>
> Suggested-by: Peter Zijlstra <peterz@...radead.org>
> Signed-off-by: Lorenzo Stoakes <lstoakes@...il.com>
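To spell out the user-visible effect for anyone following along: the case
being restricted is a long-term writable pin taken via the fast path. A
minimal caller-side sketch (uaddr is a made-up user address, the rest are
the usual pin_user_pages helpers):

	struct page *page;
	int ret;

	/*
	 * Long-term writable pin, e.g. what an RDMA driver asks for. With
	 * this patch GUP-fast only satisfies it for anonymous, shmem and
	 * hugetlb folios; for other file-backed mappings it bails out and
	 * the slow-path check added earlier in the series disallows the
	 * pin instead.
	 */
	ret = pin_user_pages_fast(uaddr, 1, FOLL_WRITE | FOLL_LONGTERM, &page);
	if (ret == 1)
		unpin_user_page(page);
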
The patch looks good to me now. Feel free to add:
Reviewed-by: Jan Kara <jack@...e.cz>
Honza
> ---
> mm/gup.c | 102 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 102 insertions(+)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 0ea9ebec9547..1ab369b5d889 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -18,6 +18,7 @@
> #include <linux/migrate.h>
> #include <linux/mm_inline.h>
> #include <linux/sched/mm.h>
> +#include <linux/shmem_fs.h>
>
> #include <asm/mmu_context.h>
> #include <asm/tlbflush.h>
> @@ -95,6 +96,83 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
> 	return folio;
> }
>
> +/*
> + * Used in the GUP-fast path to determine whether a pin is permitted for a
> + * specific folio.
> + *
> + * This call assumes the caller has pinned the folio, that the lowest page table
> + * level still points to this folio, and that interrupts have been disabled.
> + *
> + * Writing to pinned file-backed dirty tracked folios is inherently problematic
> + * (see comment describing the writable_file_mapping_allowed() function). We
> + * therefore try to avoid the most egregious case of a long-term mapping doing
> + * so.
> + *
> + * This function cannot be as thorough as that one as the VMA is not available
> + * in the fast path, so instead we whitelist known good cases and if in doubt,
> + * fall back to the slow path.
> + */
> +static bool folio_fast_pin_allowed(struct folio *folio, unsigned int flags)
> +{
> +	struct address_space *mapping;
> +	unsigned long mapping_flags;
> +
> +	/*
> +	 * If we aren't pinning then no problematic write can occur. A long term
> +	 * pin is the most egregious case so this is the one we disallow.
> +	 */
> +	if ((flags & (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE)) !=
> +	    (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE))
> +		return true;
> +
> +	/* The folio is pinned, so we can safely access folio fields. */
> +
> +	/* Neither of these should be possible, but check to be sure. */
> +	if (unlikely(folio_test_slab(folio) || folio_test_swapcache(folio)))
> +		return false;
> +
> +	/* hugetlb mappings do not require dirty-tracking. */
> +	if (folio_test_hugetlb(folio))
> +		return true;
> +
> +	/*
> +	 * GUP-fast disables IRQs. When IRQS are disabled, RCU grace periods
> +	 * cannot proceed, which means no actions performed under RCU can
> +	 * proceed either.
> +	 *
> +	 * inodes and thus their mappings are freed under RCU, which means the
> +	 * mapping cannot be freed beneath us and thus we can safely dereference
> +	 * it.
> +	 */
> +	lockdep_assert_irqs_disabled();
> +
> +	/*
> +	 * However, there may be operations which _alter_ the mapping, so ensure
> +	 * we read it once and only once.
> +	 */
> +	mapping = READ_ONCE(folio->mapping);
> +
> +	/*
> +	 * The mapping may have been truncated, in any case we cannot determine
> +	 * if this mapping is safe - fall back to slow path to determine how to
> +	 * proceed.
> +	 */
> +	if (!mapping)
> +		return false;
> +
> +	/* Anonymous folios are fine, other non-file backed cases are not. */
> +	mapping_flags = (unsigned long)mapping & PAGE_MAPPING_FLAGS;
> +	if (mapping_flags)
> +		return mapping_flags == PAGE_MAPPING_ANON;
> +
> +	/*
> +	 * At this point, we know the mapping is non-null and points to an
> +	 * address_space object. The only remaining whitelisted file system is
> +	 * shmem.
> +	 */
> +	return shmem_mapping(mapping);
> +}
> +
> /**
> * try_grab_folio() - Attempt to get or pin a folio.
> * @page: pointer to page to be grabbed
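Side note for readers skimming the thread: folio->mapping doubles as a type
tag, so the tail of the new helper is effectively a three-way switch on the
low pointer bits. Roughly, in simplified form (the constants are the
existing PAGE_MAPPING_* definitions from page-flags.h):

	unsigned long m = (unsigned long)READ_ONCE(folio->mapping);

	if (m & PAGE_MAPPING_FLAGS)
		/* plain anon is allowed; KSM / non-LRU movable fall back */
		return (m & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON;

	/* a real struct address_space: only shmem is whitelisted */
	return shmem_mapping((struct address_space *)m);
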
> @@ -2464,6 +2542,11 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
> 			goto pte_unmap;
> 		}
>
> +		if (!folio_fast_pin_allowed(folio, flags)) {
> +			gup_put_folio(folio, 1, flags);
> +			goto pte_unmap;
> +		}
> +
> 		if (!pte_write(pte) && gup_must_unshare(NULL, flags, page)) {
> 			gup_put_folio(folio, 1, flags);
> 			goto pte_unmap;
> @@ -2656,6 +2739,11 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
> 		return 0;
> 	}
>
> +	if (!folio_fast_pin_allowed(folio, flags)) {
> +		gup_put_folio(folio, refs, flags);
> +		return 0;
> +	}
> +
> 	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
> 		gup_put_folio(folio, refs, flags);
> 		return 0;
> @@ -2722,6 +2810,10 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
> 		return 0;
> 	}
>
> +	if (!folio_fast_pin_allowed(folio, flags)) {
> +		gup_put_folio(folio, refs, flags);
> +		return 0;
> +	}
> 	if (!pmd_write(orig) && gup_must_unshare(NULL, flags, &folio->page)) {
> 		gup_put_folio(folio, refs, flags);
> 		return 0;
> @@ -2762,6 +2854,11 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
> 		return 0;
> 	}
>
> +	if (!folio_fast_pin_allowed(folio, flags)) {
> +		gup_put_folio(folio, refs, flags);
> +		return 0;
> +	}
> +
> 	if (!pud_write(orig) && gup_must_unshare(NULL, flags, &folio->page)) {
> 		gup_put_folio(folio, refs, flags);
> 		return 0;
> @@ -2797,6 +2894,11 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
> 		return 0;
> 	}
>
> +	if (!folio_fast_pin_allowed(folio, flags)) {
> +		gup_put_folio(folio, refs, flags);
> +		return 0;
> +	}
> +
> 	*nr += refs;
> 	folio_set_referenced(folio);
> 	return 1;
> --
> 2.40.1
>
--
Jan Kara <jack@...e.com>
SUSE Labs, CR