Message-ID: <YQi6lOT6j2DtOGlT@carbon.dhcp.thefacebook.com>
Date:   Mon, 2 Aug 2021 20:40:04 -0700
From:   Roman Gushchin <guro@...com>
To:     Miaohe Lin <linmiaohe@...wei.com>
CC:     Michal Hocko <mhocko@...e.com>, <hannes@...xchg.org>,
        <vdavydov.dev@...il.com>, <akpm@...ux-foundation.org>,
        <shakeelb@...gle.com>, <willy@...radead.org>, <alexs@...nel.org>,
        <richard.weiyang@...il.com>, <songmuchun@...edance.com>,
        <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        <cgroups@...r.kernel.org>
Subject: Re: [PATCH 2/5] mm, memcg: narrow the scope of percpu_charge_mutex

On Sat, Jul 31, 2021 at 10:29:52AM +0800, Miaohe Lin wrote:
> On 2021/7/30 14:50, Michal Hocko wrote:
> > On Thu 29-07-21 20:06:45, Roman Gushchin wrote:
> >> On Thu, Jul 29, 2021 at 08:57:52PM +0800, Miaohe Lin wrote:
> >>> Since percpu_charge_mutex is only used inside drain_all_stock(), we can
> >>> narrow the scope of percpu_charge_mutex by moving it here.
> >>>
> >>> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
> >>> ---
> >>>  mm/memcontrol.c | 2 +-
> >>>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >>> index 6580c2381a3e..a03e24e57cd9 100644
> >>> --- a/mm/memcontrol.c
> >>> +++ b/mm/memcontrol.c
> >>> @@ -2050,7 +2050,6 @@ struct memcg_stock_pcp {
> >>>  #define FLUSHING_CACHED_CHARGE	0
> >>>  };
> >>>  static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
> >>> -static DEFINE_MUTEX(percpu_charge_mutex);
> >>>  
> >>>  #ifdef CONFIG_MEMCG_KMEM
> >>>  static void drain_obj_stock(struct obj_stock *stock);
> >>> @@ -2209,6 +2208,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
> >>>   */
> >>>  static void drain_all_stock(struct mem_cgroup *root_memcg)
> >>>  {
> >>> +	static DEFINE_MUTEX(percpu_charge_mutex);
> >>>  	int cpu, curcpu;
> >>
> >> It's considered good practice to protect data instead of code paths. After
> >> the proposed change it becomes obvious that the opposite is done here: the
> >> mutex is used to prevent simultaneous execution of the drain_all_stock()
> >> function.
> > 
> > The purpose of the lock was indeed to orchestrate callers more than any
> > data structure consistency.
> >  
> >> Actually we don't need a mutex here: nobody ever sleeps on it. So I'd replace
> >> it with a simple atomic variable or even a single bitfield. Then the change will
> >> be better justified, IMO.
> > 
> > Yes, mutex can be replaced by an atomic in a follow up patch.
> > 
> 
> Thanks to both of you; that's a really good suggestion. Do you mean something like below?
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 616d1a72ece3..508a96e80980 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2208,11 +2208,11 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>   */
>  static void drain_all_stock(struct mem_cgroup *root_memcg)
>  {
> -       static DEFINE_MUTEX(percpu_charge_mutex);
>         int cpu, curcpu;
> +       static atomic_t drain_all_stocks = ATOMIC_INIT(-1);
> 
>         /* If someone's already draining, avoid adding running more workers. */
> -       if (!mutex_trylock(&percpu_charge_mutex))
> +       if (!atomic_inc_not_zero(&drain_all_stocks))
>                 return;

It should work (the counter starts at -1, so the first caller increments it to 0
and proceeds, while any concurrent caller sees 0, fails the increment and bails
out), but why not a simple atomic_cmpxchg(&drain_all_stocks, 0, 1) and
initialize it to 0? Maybe it's just my preference, but IMO (0, 1) is easier
to understand than (-1, 0) here. Not a strong opinion though, up to you.
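
Something like the following, just to illustrate (an untested sketch; the
"drainer" name is mine, and note that mutex_unlock() implies release
ordering, so the unlock side should use atomic_set_release() or an
explicit barrier rather than a plain atomic_set()):

	static void drain_all_stock(struct mem_cgroup *root_memcg)
	{
		/* 0 == nobody draining, 1 == drain in progress */
		static atomic_t drainer = ATOMIC_INIT(0);
		int cpu, curcpu;

		/* If someone's already draining, avoid starting more workers. */
		if (atomic_cmpxchg(&drainer, 0, 1) != 0)
			return;

		/* ... the existing draining loop stays unchanged ... */

		/* Pairs with the cmpxchg above; lets the next drainer in. */
		atomic_set_release(&drainer, 0);
	}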

Thanks!
