Date:   Mon, 13 Sep 2021 18:03:24 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     yongw.pur@...il.com
Cc:     tj@...nel.org, peterz@...radead.org, wang.yong12@....com.cn,
        cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, yang.yang29@....com.cn
Subject: Re: [PATCH v1] vmpressure: wake up work only when there is
 registration event

On Mon 13-09-21 08:54:01, yongw.pur@...il.com wrote:
> From: wangyong <wang.yong12@....com.cn>
> 
> Use the global variable num_events to record the number of vmpressure
> events registered in the system, and wake up the work only when at
> least one event is registered.
> Usually no vmpressure event is registered, so this patch avoids waking
> up the work just to have it do nothing.

How much of an improvement does this bring?

> Refer to Michal Hocko's suggestion:
> https://lore.kernel.org/linux-mm/YR%2FNRJEhPKRQ1r22@dhcp22.suse.cz/

Let me also point out that we do have a means to implement conditional
branches with zero overhead. Have a look at static branches.
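
As a rough sketch only (assuming vmpressure_register_event() and
vmpressure_unregister_event() in mm/vmpressure.c remain the only
registration paths; the key name is made up for illustration), a static
branch variant could look like:

	#include <linux/jump_label.h>

	static DEFINE_STATIC_KEY_FALSE(vmpressure_has_events);

	/* in vmpressure_register_event(), once the event is hooked up */
	static_branch_inc(&vmpressure_has_events);

	/* in vmpressure_unregister_event(), once the event is removed */
	static_branch_dec(&vmpressure_has_events);

	/* in vmpressure(): the branch is patched out while no listener
	 * exists, so the common path pays no atomic read at all */
	if (!static_branch_unlikely(&vmpressure_has_events))
		return;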

> Tested-by: Zeal Robot <zealci@....com.cn>
> Signed-off-by: wangyong <wang.yong12@....com.cn>
> ---
>  mm/vmpressure.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> index 76518e4..dfac76b 100644
> --- a/mm/vmpressure.c
> +++ b/mm/vmpressure.c
> @@ -67,6 +67,11 @@ static const unsigned int vmpressure_level_critical = 95;
>   */
>  static const unsigned int vmpressure_level_critical_prio = ilog2(100 / 10);
>  
> +/*
> + * Count the number of vmpressure events registered in the system.
> + */
> +static atomic_t num_events = ATOMIC_INIT(0);
> +
>  static struct vmpressure *work_to_vmpressure(struct work_struct *work)
>  {
>  	return container_of(work, struct vmpressure, work);
> @@ -277,7 +282,7 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
>  		vmpr->tree_reclaimed += reclaimed;
>  		spin_unlock(&vmpr->sr_lock);
>  
> -		if (scanned < vmpressure_win)
> +		if (scanned < vmpressure_win || atomic_read(&num_events) == 0)
>  			return;
>  		schedule_work(&vmpr->work);
>  	} else {

This is a very odd place to put the check. It is past the locks being
taken, and schedule_work() on its own is unlikely to add much overhead.
I would have expected the check to be implemented at the very beginning
of vmpressure().
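
Sketch only, using the existing vmpressure() signature from
mm/vmpressure.c and the counter from this patch; the rest of the body
stays as it is:

	void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
			unsigned long scanned, unsigned long reclaimed)
	{
		/* bail out before any locking or accounting when nobody
		 * has registered a vmpressure event */
		if (atomic_read(&num_events) == 0)
			return;

		/* ... existing vmpressure() body unchanged ... */
	}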

-- 
Michal Hocko
SUSE Labs
