Message-ID: <20150611142313.GA26195@gmail.com>
Date: Thu, 11 Jun 2015 10:23:14 -0400
From: Jerome Glisse <j.glisse@...il.com>
To: Mark Hairgrove <mhairgrove@...dia.com>
Cc: "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"joro@...tes.org" <joro@...tes.org>, Mel Gorman <mgorman@...e.de>,
"H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Johannes Weiner <jweiner@...hat.com>,
Larry Woodman <lwoodman@...hat.com>,
Rik van Riel <riel@...hat.com>,
Dave Airlie <airlied@...hat.com>,
Brendan Conoboy <blc@...hat.com>,
Joe Donohue <jdonohue@...hat.com>,
Duncan Poole <dpoole@...dia.com>,
Sherry Cheung <SCheung@...dia.com>,
Subhash Gutti <sgutti@...dia.com>,
John Hubbard <jhubbard@...dia.com>,
Lucien Dunning <ldunning@...dia.com>,
Cameron Buschardt <cabuschardt@...dia.com>,
Arvind Gopalakrishnan <arvindg@...dia.com>,
Haggai Eran <haggaie@...lanox.com>,
Shachar Raindel <raindel@...lanox.com>,
Liran Liss <liranl@...lanox.com>,
Roland Dreier <roland@...estorage.com>,
Ben Sander <ben.sander@....com>,
Greg Stoner <Greg.Stoner@....com>,
John Bridgman <John.Bridgman@....com>,
Michael Mantor <Michael.Mantor@....com>,
Paul Blinzer <Paul.Blinzer@....com>,
Laurent Morichetti <Laurent.Morichetti@....com>,
Alexander Deucher <Alexander.Deucher@....com>,
Oded Gabbay <Oded.Gabbay@....com>,
Jérôme Glisse <jglisse@...hat.com>,
Jatin Kumar <jakumar@...dia.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>
Subject: Re: [PATCH 05/36] HMM: introduce heterogeneous memory management v3.
On Wed, Jun 10, 2015 at 06:15:08PM -0700, Mark Hairgrove wrote:
[...]
> > There is no race here: the mirror struct will only be freed once, as again
> > the list is a synchronization point. Whoever removes the mirror from the
> > list is responsible for dropping the list reference.
> >
> > In the fixed code the only thing that can happen twice is the ->release()
> > callback. Even that can be worked around to guarantee it is called only once.
> >
> > Anyway, I do not see any race here.
> >
> >
>
> The mirror lifetime is fine. The problem I see is with the device lifetime
> on a multi-core system. Imagine this sequence:
>
> - On CPU1 the mm associated with the mirror is going down
> - On CPU2 the driver unregisters the mirror then the device
>
> When this happens, the last device mutex_unlock on CPU1 is the only thing
> preventing the free of the device in CPU2. That doesn't work, as described
> in this thread: https://lkml.org/lkml/2013/12/2/997
>
> Here's the full sequence again with mutex_unlock split apart. Hopefully
> this shows the device_unregister problem more clearly:
>
> CPU1 (mm release) CPU2 (driver)
> ---------------------- ----------------------
> hmm_notifier_release
> down_write(&hmm->rwsem);
> hlist_del_init(&mirror->mlist);
> up_write(&hmm->rwsem);
>
> // CPU1 thread is preempted or
> // something
> hmm_mirror_unregister
> hmm_mirror_kill
> down_write(&hmm->rwsem);
> // mirror removed by CPU1 already
> // so hlist_unhashed returns 1
> up_write(&hmm->rwsem);
>
> hmm_mirror_unref(&mirror);
> // Mirror ref now 1
>
> // CPU2 thread is preempted or
> // something
> // CPU1 thread is scheduled
>
> hmm_mirror_unref(&mirror);
> // Mirror ref now 0, cleanup
> hmm_mirror_destroy(&mirror)
> mutex_lock(&device->mutex);
> list_del_init(&mirror->dlist);
> device->ops->release(mirror);
> kfree(mirror);
> // CPU2 thread is scheduled, now
> // both CPU1 and CPU2 are running
>
> hmm_device_unregister
> mutex_lock(&device->mutex);
> mutex_optimistic_spin()
> mutex_unlock(&device->mutex);
> [...]
> __mutex_unlock_common_slowpath
> // CPU2 releases lock
> atomic_set(&lock->count, 1);
> // Spinning CPU2 acquires now-
> // free lock
> // mutex_lock returns
> // Device list empty
> mutex_unlock(&device->mutex);
> return 0;
> kfree(hmm_device);
> // CPU1 still accessing
> // hmm_device->mutex in
> //__mutex_unlock_common_slowpath
Ok, I see the race you are afraid of, and it really is an unlikely one.
__mutex_unlock_common_slowpath() takes a spinlock right after allowing
others to take the mutex. In your scenario there is no contention on that
spinlock, so it is taken right away, and since there is no one on the
mutex wait list it goes directly to unlocking the spinlock and returns.
You can ignore the debug functions: if debugging is enabled, then
mutex_lock() would also need to take the spinlock, and thus you would
have proper synchronization between the two threads thanks to
mutex.wait_lock.
So basically, while CPU1 is executing:

    spin_lock(&mutex.wait_lock);
    if (!list_empty(&mutex.wait_list)) {
        // wait_list is empty, so this branch is not taken
    }
    spin_unlock(&mutex.wait_lock);
CPU2 would have to test the mirror list, mutex_unlock(), and return, all
before the spin_unlock() of CPU1. This is a tight race. I can add a
synchronize_rcu() to hmm_device_unregister() after the mutex_unlock(),
so that we also get a grace period before the device is potentially
freed, which should make that race even more unlikely.
Moreover, for something really bad to happen, the freed memory would need
to be reallocated right away by some other thread, which sounds really
unlikely unless CPU1 is the slowest of them all :)
Cheers,
Jérôme