Message-ID: <20190820133418.GG29246@ziepe.ca>
Date: Tue, 20 Aug 2019 10:34:18 -0300
From: Jason Gunthorpe <jgg@...pe.ca>
To: Daniel Vetter <daniel.vetter@...ll.ch>
Cc: LKML <linux-kernel@...r.kernel.org>, Linux MM <linux-mm@...ck.org>,
DRI Development <dri-devel@...ts.freedesktop.org>,
Intel Graphics Development <intel-gfx@...ts.freedesktop.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
David Rientjes <rientjes@...gle.com>,
Christian König <christian.koenig@....com>,
Jérôme Glisse <jglisse@...hat.com>,
Daniel Vetter <daniel.vetter@...el.com>
Subject: Re: [PATCH 4/4] mm, notifier: Catch sleeping/blocking for !blockable
On Tue, Aug 20, 2019 at 10:19:02AM +0200, Daniel Vetter wrote:
> We need to make sure implementations don't cheat and don't have a
> possible schedule/blocking point deeply buried where review can't
> catch it.
>
> I'm not sure whether this is the best way to make sure all the
> might_sleep() callsites trigger, and it's a bit ugly in the code flow.
> But it gets the job done.
>
> Inspired by an i915 patch series which did exactly that, because the
> rules haven't been entirely clear to us.
>
> v2: Use the shiny new non_block_start/end annotations instead of
> abusing preempt_disable/enable.
>
> v3: Rebase on top of Glisse's arg rework.
>
> v4: Rebase on top of more Glisse rework.
>
> Cc: Jason Gunthorpe <jgg@...pe.ca>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: "Christian König" <christian.koenig@....com>
> Cc: Daniel Vetter <daniel.vetter@...ll.ch>
> Cc: "Jérôme Glisse" <jglisse@...hat.com>
> Cc: linux-mm@...ck.org
> Reviewed-by: Christian König <christian.koenig@....com>
> Reviewed-by: Jérôme Glisse <jglisse@...hat.com>
> Signed-off-by: Daniel Vetter <daniel.vetter@...el.com>
> ---
>  mm/mmu_notifier.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 538d3bb87f9b..856636d06ee0 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -181,7 +181,13 @@ int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
> id = srcu_read_lock(&srcu);
> hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->invalidate_range_start) {
> - int _ret = mn->ops->invalidate_range_start(mn, range);
> + int _ret;
> +
> + if (!mmu_notifier_range_blockable(range))
> + non_block_start();
> + _ret = mn->ops->invalidate_range_start(mn, range);
> + if (!mmu_notifier_range_blockable(range))
> + non_block_end();
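
To recap what the annotation is meant to catch: in the !blockable case
the callback must not sleep at all, so a notifier that needs a
sleeping lock has to trylock and bail out with -EAGAIN instead. A
sketch (the driver struct and lock are made up for illustration):

	static int my_invalidate_range_start(struct mmu_notifier *mn,
				const struct mmu_notifier_range *range)
	{
		struct my_dev *dev = container_of(mn, struct my_dev, mn);

		if (!mmu_notifier_range_blockable(range)) {
			/* Atomic context: trylock or fail, never sleep. */
			if (!mutex_trylock(&dev->lock))
				return -EAGAIN;
		} else {
			mutex_lock(&dev->lock);
		}

		/* ... tear down mappings in [range->start, range->end) ... */

		mutex_unlock(&dev->lock);
		return 0;
	}
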
If someone Acks all the sched changes then I can pick this up for
hmm.git, but I still think the existing preemption debugging is fine
for this use case.
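
For reference, as I read patches 1-2 the annotation boils down to a
per-task counter that might_sleep() checks; roughly (not the exact
hunks, and only under CONFIG_DEBUG_ATOMIC_SLEEP):

	/* task_struct grows an int non_block_count */
	#define non_block_start()	(current->non_block_count++)
	#define non_block_end()	(WARN_ON(current->non_block_count-- == 0))

Unlike preempt_disable()/enable() this is a pure debug annotation and
doesn't change scheduling behaviour.
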
Also, same comment as for the lockdep map: this needs to apply to the
non-blockable range_end case as well, along the lines of the sketch
below.
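
Untested, but I'd expect the same pattern in
__mmu_notifier_invalidate_range_end():

	if (mn->ops->invalidate_range_end) {
+		if (!mmu_notifier_range_blockable(range))
+			non_block_start();
		mn->ops->invalidate_range_end(mn, range);
+		if (!mmu_notifier_range_blockable(range))
+			non_block_end();
	}
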
Anyhow, since this series has conflicts with hmm.git it would be best
to flow the whole thing through that tree. If there are no remarks on
the first two patches I'll grab them in a few days.
Regards,
Jason