Message-ID: <290d5967-9929-b0df-a3db-755c102f6599@redhat.com>
Date: Mon, 14 Dec 2020 10:05:28 -0500
From: Waiman Long <longman@...hat.com>
To: David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Uladzislau Rezki (Sony)" <urezki@...il.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/vmalloc: Fix unlock order in s_stop()

On 12/14/20 4:39 AM, David Hildenbrand wrote:
> On 13.12.20 19:08, Waiman Long wrote:
>> When multiple locks are acquired, they should be released in reverse
>> order. For s_start() and s_stop() in mm/vmalloc.c, that is not the
>> case.
>>
>> s_start: mutex_lock(&vmap_purge_lock); spin_lock(&vmap_area_lock);
>> s_stop : mutex_unlock(&vmap_purge_lock); spin_unlock(&vmap_area_lock);
>>
>> This unlock sequence, though allowed, is not optimal. If a waiter is
>> present, mutex_unlock() will need to go through the slowpath of waking
>> up the waiter with preemption disabled, since the spinlock is still
>> held at that point. Fix that by releasing the spinlock before the
>> mutex.
>>
>> Fixes: e36176be1c39 ("mm/vmalloc: rework vmap_area_lock")
> I'm not sure if this classifies as a "Fixes". As you correctly state,
> it "is not optimal". But yeah, releasing a spinlock after releasing a
> mutex already looks weird.
>
Yes, it may not technically be a real bug fix. However, the unlock order
just doesn't look right. That is why I sent out a patch to address it.
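
For reference, a minimal sketch of the two functions with the release
order reversed as the patch describes (illustrative only; the seq_file
body and sparse annotations are approximations, and the exact code in
mm/vmalloc.c may differ in detail):

static void *s_start(struct seq_file *m, loff_t *pos)
	__acquires(&vmap_purge_lock)
	__acquires(&vmap_area_lock)
{
	/* Acquire order: mutex first, then spinlock. */
	mutex_lock(&vmap_purge_lock);
	spin_lock(&vmap_area_lock);

	return seq_list_start(&vmap_area_list, *pos);
}

static void s_stop(struct seq_file *m, void *p)
	__releases(&vmap_area_lock)
	__releases(&vmap_purge_lock)
{
	/*
	 * Release in reverse order: drop the spinlock first so that
	 * preemption is enabled again before mutex_unlock() potentially
	 * takes the slowpath to wake up a waiter.
	 */
	spin_unlock(&vmap_area_lock);
	mutex_unlock(&vmap_purge_lock);
}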
Cheers,
Longman