Date:   Mon, 5 Jun 2023 09:59:57 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Marcelo Tosatti <mtosatti@...hat.com>
Cc:     Christoph Lameter <cl@...ux.com>,
        Aaron Tomlin <atomlin@...mlin.com>,
        Frederic Weisbecker <frederic@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs

On Fri 02-06-23 15:58:00, Marcelo Tosatti wrote:
> The interruption caused by queueing work on nohz_full CPUs
> is undesirable for certain applications.

This is not a proper changelog, and I am not going to write one for
you this time. Please explain why this is really needed and why this
particular approach is desired. E.g. why not prevent userspace from
refreshing stats at all when interference is undesirable? Also, would
it make some sense to reduce the flushing to the cpumask of the calling
process? (Certainly a daring thought, but have you even considered it?)
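
To make that second idea concrete, an untested sketch (illustrative
only, not a reviewed implementation; the use of current->cpus_ptr as
the filter is my assumption) could look something like:

	cpus_read_lock();
	for_each_online_cpu(cpu) {
		/* Sketch: only queue work on CPUs the caller may run on. */
		if (!cpumask_test_cpu(cpu, current->cpus_ptr))
			continue;
		work = per_cpu_ptr(works, cpu);
		INIT_WORK(work, refresh_vm_stats);
		schedule_work_on(cpu, work);
	}
	for_each_online_cpu(cpu) {
		if (!cpumask_test_cpu(cpu, current->cpus_ptr))
			continue;
		flush_work(per_cpu_ptr(works, cpu));
	}
	cpus_read_unlock();

That would keep the interference local to the CPUs the caller actually
cares about, without hardcoding a nohz_full policy into vmstat_refresh.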

> Fix by not refreshing per-CPU stats of nohz_full CPUs. 
> 
> Signed-off-by: Marcelo Tosatti <mtosatti@...hat.com>
> 
> ---
> v2: opencode schedule_on_each_cpu (Michal Hocko)
> 
> Index: linux-vmstat-remote/mm/vmstat.c
> ===================================================================
> --- linux-vmstat-remote.orig/mm/vmstat.c
> +++ linux-vmstat-remote/mm/vmstat.c
> @@ -1881,8 +1881,13 @@ int vmstat_refresh(struct ctl_table *tab
>  		   void *buffer, size_t *lenp, loff_t *ppos)
>  {
>  	long val;
> -	int err;
>  	int i;
> +	int cpu;
> +	struct work_struct __percpu *works;
> +
> +	works = alloc_percpu(struct work_struct);
> +	if (!works)
> +		return -ENOMEM;
>  
>  	/*
>  	 * The regular update, every sysctl_stat_interval, may come later
> @@ -1896,9 +1901,24 @@ int vmstat_refresh(struct ctl_table *tab
>  	 * transiently negative values, report an error here if any of
>  	 * the stats is negative, so we know to go looking for imbalance.
>  	 */
> -	err = schedule_on_each_cpu(refresh_vm_stats);
> -	if (err)
> -		return err;
> +	cpus_read_lock();
> +	for_each_online_cpu(cpu) {
> +		struct work_struct *work;
> +
> +		if (cpu_is_isolated(cpu))
> +			continue;
> +		work = per_cpu_ptr(works, cpu);
> +		INIT_WORK(work, refresh_vm_stats);
> +		schedule_work_on(cpu, work);
> +	}
> +
> +	for_each_online_cpu(cpu) {
> +		if (cpu_is_isolated(cpu))
> +			continue;
> +		flush_work(per_cpu_ptr(works, cpu));
> +	}
> +	cpus_read_unlock();
> +	free_percpu(works);
>  	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
>  		/*
>  		 * Skip checking stats known to go negative occasionally.
> 

-- 
Michal Hocko
SUSE Labs
