Message-ID: <20150105184914.GA8012@gmail.com>
Date: Mon, 5 Jan 2015 13:49:15 -0500
From: Jerome Glisse <j.glisse@...il.com>
To: Haggai Eran <haggaie@...lanox.com>
Cc: Mark Hairgrove <mhairgrove@...dia.com>,
Dave Airlie <airlied@...hat.com>,
Arvind Gopalakrishnan <arvindg@...dia.com>,
"joro@...tes.org" <joro@...tes.org>,
Greg Stoner <Greg.Stoner@....com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
Cameron Buschardt <cabuschardt@...dia.com>,
Rik van Riel <riel@...hat.com>,
Paul Blinzer <Paul.Blinzer@....com>,
Lucien Dunning <ldunning@...dia.com>,
Johannes Weiner <jweiner@...hat.com>,
Michael Mantor <Michael.Mantor@....com>,
Laurent Morichetti <Laurent.Morichetti@....com>,
Larry Woodman <lwoodman@...hat.com>,
John Hubbard <jhubbard@...dia.com>,
Brendan Conoboy <blc@...hat.com>,
John Bridgman <John.Bridgman@....com>,
Subhash Gutti <sgutti@...dia.com>,
Roland Dreier <roland@...estorage.com>,
Duncan Poole <dpoole@...dia.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Alexander Deucher <Alexander.Deucher@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Oded Gabbay <Oded.Gabbay@....com>,
Sherry Cheung <SCheung@...dia.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Shachar Raindel <raindel@...lanox.com>,
Liran Liss <liranl@...lanox.com>,
Jérôme Glisse <jglisse@...hat.com>,
Ben Sander <ben.sander@....com>,
Joe Donohue <jdonohue@...hat.com>,
Mel Gorman <mgorman@...e.de>, "H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 2/7] mmu_notifier: keep track of active invalidation
ranges v2
On Sun, Dec 28, 2014 at 08:46:42AM +0000, Haggai Eran wrote:
>
> On Dec 26, 2014 9:20 AM, Jerome Glisse <j.glisse@...il.com> wrote:
> >
> > On Thu, Dec 25, 2014 at 10:29:44AM +0200, Haggai Eran wrote:
> > > On 22/12/2014 18:48, j.glisse@...il.com wrote:
> > > > static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > > > - unsigned long start,
> > > > - unsigned long end,
> > > > - enum mmu_event event)
> > > > + struct mmu_notifier_range *range)
> > > > {
> > > > + /*
> > > > + * Initialize list no matter what in case a mmu_notifier register after
> > > > + * a range_start but before matching range_end.
> > > > + */
> > > > + INIT_LIST_HEAD(&range->list);
> > >
> > > I don't see how an mmu_notifier can register after a range_start but
> > > before the matching range_end. The mmu_notifier registration takes all
> > > the mm locks, and that should prevent any invalidation from running, right?
> >
> > File invalidation (like truncation) can lead to this case.
>
> I thought that the fact that mm_take_all_locks locked the i_mmap_mutex of
> every file would prevent this from happening, because the notifier is added
> when the mutex is locked, and the truncate operation also locks it. Am I
> missing something?
No, you're right again. I was convinced that mmu_notifier registration only
took the mmap semaphore in write mode for some reason, while in fact it also
calls mm_take_all_locks(). So yes, this protects registration from all
concurrent invalidations.
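
For reference, the registration path goes roughly like this (a condensed
sketch paraphrased from memory of kernel/mmu_notifier.c at the time, not the
verbatim upstream function):

        down_write(&mm->mmap_sem);
        ret = mm_take_all_locks(mm);    /* also takes every file's i_mmap_mutex */
        if (unlikely(ret))
                goto out;

        /*
         * Truncation holds the file's i_mmap_mutex across its matching
         * invalidate_range_start()/invalidate_range_end() pair, so with all
         * the locks held here no invalidation can be in flight: the notifier
         * is always added outside any start/end window.
         */
        spin_lock(&mm->mmu_notifier_mm->lock);
        hlist_add_head(&mn->hlist, &mm->mmu_notifier_mm->list);
        spin_unlock(&mm->mmu_notifier_mm->lock);

        mm_drop_all_locks(mm);
out:
        up_write(&mm->mmap_sem);
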
>
> >
> > >
> > > > if (mm_has_notifiers(mm))
> > > > - __mmu_notifier_invalidate_range_start(mm, start, end, event);
> > > > + __mmu_notifier_invalidate_range_start(mm, range);
> > > > }
> > >
> > > ...
> > >
> > > > void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > > > - unsigned long start,
> > > > - unsigned long end,
> > > > - enum mmu_event event)
> > > > + struct mmu_notifier_range *range)
> > > >
> > > > {
> > > > struct mmu_notifier *mn;
> > > > @@ -185,21 +183,36 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > > > id = srcu_read_lock(&srcu);
> > > > hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> > > > if (mn->ops->invalidate_range_start)
> > > > - mn->ops->invalidate_range_start(mn, mm, start,
> > > > - end, event);
> > > > + mn->ops->invalidate_range_start(mn, mm, range);
> > > > }
> > > > srcu_read_unlock(&srcu, id);
> > > > +
> > > > + /*
> > > > + * This must happen after the callback so that subsystem can block on
> > > > + * new invalidation range to synchronize itself.
> > > > + */
> > > > + spin_lock(&mm->mmu_notifier_mm->lock);
> > > > + list_add_tail(&range->list, &mm->mmu_notifier_mm->ranges);
> > > > + mm->mmu_notifier_mm->nranges++;
> > > > + spin_unlock(&mm->mmu_notifier_mm->lock);
> > > > }
> > > > EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start);
> > >
> > > Don't you have a race here because you add the range struct after the
> > > callback?
> > >
> > > -------------------------------------------------------------------------
> > > Thread A | Thread B
> > > -------------------------------------------------------------------------
> > > call mmu notifier callback |
> > > clear SPTE |
> > > | device page fault
> > > | mmu_notifier_range_is_valid returns true
> > > | install new SPTE
> > > add event struct to list |
> > > mm clears/modifies the PTE |
> > > -------------------------------------------------------------------------
> > >
> > > So we are left with different entries in the host page table and the
> > > secondary page table.
> > >
> > > I would think you'd want the event struct to be added to the list before
> > > the callback is run.
> > >
> >
> > Yes, you're right, but the comment I left triggers a memory that I did
> > that on purpose at one point, probably with a different synchronization
> > mechanism inside HMM. I will meditate on it a bit and see if I can
> > remember why I did it that way with respect to the previous design.
> >
> > In any case I will respin with that order modified. Can I add your
> > Reviewed-by after doing so?
>
> Sure, go ahead.
>
> Regards,
> Haggai
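
For reference, below is a minimal sketch of the respun
__mmu_notifier_invalidate_range_start() being discussed, assuming the only
change to the quoted hunk above is moving the list insertion ahead of the
callbacks (an illustration of the proposed ordering, not the actual respin):

void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
                                           struct mmu_notifier_range *range)
{
        struct mmu_notifier *mn;
        int id;

        /*
         * Publish the range before calling any listener so that, together
         * with the driver's own serialization in its fault path, a
         * concurrent device fault either sees the range (and waits/retries)
         * or has its freshly installed mapping torn down by the callbacks
         * below. This closes the stale-SPTE window described in the table
         * above.
         */
        spin_lock(&mm->mmu_notifier_mm->lock);
        list_add_tail(&range->list, &mm->mmu_notifier_mm->ranges);
        mm->mmu_notifier_mm->nranges++;
        spin_unlock(&mm->mmu_notifier_mm->lock);

        id = srcu_read_lock(&srcu);
        hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
                if (mn->ops->invalidate_range_start)
                        mn->ops->invalidate_range_start(mn, mm, range);
        }
        srcu_read_unlock(&srcu, id);
}
EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start);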