Message-ID: <20201102235831.GA52235@lx-t490>
Date: Tue, 3 Nov 2020 00:58:31 +0100
From: "Ahmed S. Darwish" <a.darwish@...utronix.de>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: linux-kernel@...r.kernel.org, Peter Xu <peterx@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Christoph Hellwig <hch@....de>,
Hugh Dickins <hughd@...gle.com>, Jan Kara <jack@...e.cz>,
Jann Horn <jannh@...gle.com>,
John Hubbard <jhubbard@...dia.com>,
Kirill Shutemov <kirill@...temov.name>,
Kirill Tkhai <ktkhai@...tuozzo.com>,
Leon Romanovsky <leonro@...dia.com>,
Linux-MM <linux-mm@...ck.org>, Michal Hocko <mhocko@...e.com>,
Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Siewior <bigeasy@...utronix.de>
Subject: Re: [PATCH v2 2/2] mm: prevent gup_fast from racing with COW during
fork
On Fri, Oct 30, 2020 at 11:46:21AM -0300, Jason Gunthorpe wrote:
...
> diff --git a/mm/memory.c b/mm/memory.c
> index c48f8df6e50268..294c2c3c4fe00d 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1171,6 +1171,12 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
>  		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
>  					0, src_vma, src_mm, addr, end);
>  		mmu_notifier_invalidate_range_start(&range);
> +		/*
> +		 * The read side doesn't spin, it goes to the mmap_lock, so the
> +		 * raw version is used to avoid disabling preemption here
> +		 */
> +		mmap_assert_write_locked(src_mm);
> +		raw_write_seqcount_t_begin(&src_mm->write_protect_seq);
>  	}
>
Please, s/raw_write_seqcount_t_begin()/raw_write_seqcount_begin()/g. For
a plain seqcount_t the two are equivalent, but the latter stays within the
seqlock.h API boundaries.
Let's also make the comment a bit clearer (IMHO, "lockdep" needs to be
mentioned somewhere):
		/*
		 * Disabling preemption is not needed for the write side, as
		 * the read side doesn't spin, but goes to the mmap_lock.
		 *
		 * Use the raw variant of the seqcount_t write API to avoid
		 * lockdep complaining about preemptibility.
		 */
		mmap_assert_write_locked(src_mm);
		raw_write_seqcount_t_begin(&src_mm->write_protect_seq);
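(Side note on why not disabling preemption is fine: the read side this
comment refers to never spins on the sequence counter. A reader that
observes a concurrent writer simply falls back to the mmap_lock slow path,
roughly like the sketch below. The fallback helper name is made up purely
for illustration and is not from this series.)

	unsigned int seq;

	/*
	 * Illustrative only: sample the count once, without spinning.
	 */
	seq = raw_read_seqcount(&mm->write_protect_seq);
	if (seq & 1)
		return fallback_with_mmap_lock(mm);	/* hypothetical slow path */

	/* ... lockless walk of the page tables ... */

	if (read_seqcount_retry(&mm->write_protect_seq, seq))
		return fallback_with_mmap_lock(mm);	/* no spinning: take mmap_lock */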
>  	ret = 0;
> @@ -1187,8 +1193,10 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
>  		}
>  	} while (dst_pgd++, src_pgd++, addr = next, addr != end);
> 
> -	if (is_cow)
> +	if (is_cow) {
> +		raw_write_seqcount_t_end(&src_mm->write_protect_seq);
ditto.
s/raw_write_seqcount_t_end()/raw_write_seqcount_end()/g
>  		mmu_notifier_invalidate_range_end(&range);
> +	}
>  	return ret;
>  }
>
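Putting the two substitutions together with the reworded comment, the write
side would then read roughly as below (untested sketch, context lines of the
quoted hunks elided):

	mmu_notifier_invalidate_range_start(&range);
	/*
	 * Disabling preemption is not needed for the write side, as
	 * the read side doesn't spin, but goes to the mmap_lock.
	 *
	 * Use the raw variant of the seqcount_t write API to avoid
	 * lockdep complaining about preemptibility.
	 */
	mmap_assert_write_locked(src_mm);
	raw_write_seqcount_begin(&src_mm->write_protect_seq);

	...

	if (is_cow) {
		raw_write_seqcount_end(&src_mm->write_protect_seq);
		mmu_notifier_invalidate_range_end(&range);
	}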
Thanks,
--
Ahmed S. Darwish
Linutronix GmbH