Message-ID: <702c05c6-fd8b-e1de-21e7-4be5b206958a@huawei.com>
Date: Tue, 3 Aug 2021 18:50:28 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Muchun Song <songmuchun@...edance.com>,
Michal Hocko <mhocko@...e.com>, Roman Gushchin <guro@...com>
CC: Roman Gushchin <guro@...com>, Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Shakeel Butt <shakeelb@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Alex Shi <alexs@...nel.org>,
Wei Yang <richard.weiyang@...il.com>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Cgroups <cgroups@...r.kernel.org>
Subject: Re: [PATCH 2/5] mm, memcg: narrow the scope of percpu_charge_mutex
On 2021/8/3 17:33, Muchun Song wrote:
> On Tue, Aug 3, 2021 at 2:29 PM Miaohe Lin <linmiaohe@...wei.com> wrote:
>>
>> On 2021/8/3 11:40, Roman Gushchin wrote:
>>> On Sat, Jul 31, 2021 at 10:29:52AM +0800, Miaohe Lin wrote:
>>>> On 2021/7/30 14:50, Michal Hocko wrote:
>>>>> On Thu 29-07-21 20:06:45, Roman Gushchin wrote:
>>>>>> On Thu, Jul 29, 2021 at 08:57:52PM +0800, Miaohe Lin wrote:
>>>>>>> Since percpu_charge_mutex is only used inside drain_all_stock(), we can
>>>>>>> narrow its scope by moving the definition into that function.
>>>>>>>
>>>>>>> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
>>>>>>> ---
>>>>>>> mm/memcontrol.c | 2 +-
>>>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>>>>>> index 6580c2381a3e..a03e24e57cd9 100644
>>>>>>> --- a/mm/memcontrol.c
>>>>>>> +++ b/mm/memcontrol.c
>>>>>>> @@ -2050,7 +2050,6 @@ struct memcg_stock_pcp {
>>>>>>> #define FLUSHING_CACHED_CHARGE 0
>>>>>>> };
>>>>>>> static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
>>>>>>> -static DEFINE_MUTEX(percpu_charge_mutex);
>>>>>>>
>>>>>>> #ifdef CONFIG_MEMCG_KMEM
>>>>>>> static void drain_obj_stock(struct obj_stock *stock);
>>>>>>> @@ -2209,6 +2208,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>>>>>>> */
>>>>>>> static void drain_all_stock(struct mem_cgroup *root_memcg)
>>>>>>> {
>>>>>>> + static DEFINE_MUTEX(percpu_charge_mutex);
>>>>>>> int cpu, curcpu;
>>>>>>
>>>>>> It's considered good practice to protect data instead of code paths. After
>>>>>> the proposed change it becomes obvious that the opposite is done here: the mutex
>>>>>> is used to prevent simultaneous execution of the drain_all_stock() function.
>>>>>
>>>>> The purpose of the lock was indeed to orchestrate callers more than any
>>>>> data structure consistency.
>>>>>
>>>>>> Actually we don't need a mutex here: nobody ever sleeps on it. So I'd replace
>>>>>> it with a simple atomic variable or even a single bitfield. Then the change will
>>>>>> be better justified, IMO.
>>>>>
>>>>> Yes, the mutex can be replaced by an atomic in a follow-up patch.
>>>>>
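
(A minimal sketch of the "single bitfield" alternative Roman mentions above. The
flag name DRAINING_ALL_STOCK and the drain_state word are made up for illustration;
test_and_set_bit() and clear_bit_unlock() are existing kernel bitops, and
test_and_set_bit() is a fully ordered atomic RMW, so no extra barrier is needed
on entry:)

    /* Bit 0 of drain_state acts as the "drain in progress" flag. */
    #define DRAINING_ALL_STOCK	0
    static unsigned long drain_state;

    static void drain_all_stock(struct mem_cgroup *root_memcg)
    {
    	int cpu, curcpu;

    	/* Atomic RMW: a concurrent caller sees the bit already set and bails out. */
    	if (test_and_set_bit(DRAINING_ALL_STOCK, &drain_state))
    		return;
    	/* ... drain each CPU's stock, as in the existing code ... */
    	/* clear_bit_unlock() has release semantics, closing the critical section. */
    	clear_bit_unlock(DRAINING_ALL_STOCK, &drain_state);
    }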
>>>>
>>>> Thanks to both of you. That's a really good suggestion. Do you mean something like below?
>>>>
>>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>>> index 616d1a72ece3..508a96e80980 100644
>>>> --- a/mm/memcontrol.c
>>>> +++ b/mm/memcontrol.c
>>>> @@ -2208,11 +2208,11 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>>>> */
>>>> static void drain_all_stock(struct mem_cgroup *root_memcg)
>>>> {
>>>> - static DEFINE_MUTEX(percpu_charge_mutex);
>>>> int cpu, curcpu;
>>>> + static atomic_t drain_all_stocks = ATOMIC_INIT(-1);
>>>>
>>>> /* If someone's already draining, avoid adding running more workers. */
>>>> - if (!mutex_trylock(&percpu_charge_mutex))
>>>> + if (!atomic_inc_not_zero(&drain_all_stocks))
>>>> return;
>>>
>>> It should work, but why not a simple atomic_cmpxchg(&drain_all_stocks, 0, 1) and
>>> initialize it to 0? Maybe it's just my preference, but IMO (0, 1) is easier
>>> to understand than (-1, 0) here. Not a strong opinion though, up to you.
>>>
>>
>> I think this would improve readability. Do you mean something like below?
>>
>> Many thanks.
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index 616d1a72ece3..6210b1124929 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -2208,11 +2208,11 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>> */
>> static void drain_all_stock(struct mem_cgroup *root_memcg)
>> {
>> - static DEFINE_MUTEX(percpu_charge_mutex);
>> int cpu, curcpu;
>> + static atomic_t drainer = ATOMIC_INIT(0);
>>
>> /* If someone's already draining, avoid adding running more workers. */
>> - if (!mutex_trylock(&percpu_charge_mutex))
>> + if (atomic_cmpxchg(&drainer, 0, 1) != 0)
>
> I'd like to use atomic_cmpxchg_acquire() here.
>
>> return;
>> /*
>> * Notify other cpus that system-wide "drain" is running
>> @@ -2244,7 +2244,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
>> }
>> }
>> put_cpu();
>> - mutex_unlock(&percpu_charge_mutex);
>> + atomic_set(&drainer, 0);
>
> So use atomic_set_release() here to pair with
> atomic_cmpxchg_acquire().
I think this will work well. Many thanks!
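
(For reference, a sketch of how drain_all_stock() would read with Muchun's
suggestion folded into the diff above; atomic_cmpxchg_acquire() and
atomic_set_release() are existing kernel atomics, and the acquire/release pair
makes the flag bracket the whole drain:)

    static void drain_all_stock(struct mem_cgroup *root_memcg)
    {
    	static atomic_t drainer = ATOMIC_INIT(0);
    	int cpu, curcpu;

    	/* Acquire pairs with the release below: at most one drainer runs at a time. */
    	if (atomic_cmpxchg_acquire(&drainer, 0, 1) != 0)
    		return;
    	/* ... notify other CPUs and drain each per-CPU stock ... */
    	atomic_set_release(&drainer, 0);
    }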
>
> Thanks.
>
>> }
>>
>>> Thanks!
>>>
>>
>