Message-ID: <877dln640j.ffs@nanos.tec.linutronix.de>
Date:   Wed, 31 Mar 2021 11:55:56 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Mel Gorman <mgorman@...hsingularity.net>,
        Linux-MM <linux-mm@...ck.org>
Cc:     Linux-RT-Users <linux-rt-users@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Chuck Lever <chuck.lever@...cle.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Matthew Wilcox <willy@...radead.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [PATCH 2/6] mm/page_alloc: Convert per-cpu list protection to local_lock

On Mon, Mar 29 2021 at 13:06, Mel Gorman wrote:
> It is unclear what exactly local_irq_save/local_irq_restore protects
> in page_alloc.c. It conflates the protection of per-cpu page
> allocation structures with per-cpu vmstat deltas.
>
> This patch protects the PCP structure using local_lock, which for
> most configurations is identical to IRQ disabling/enabling. The
> scope of the lock is still wider than it should be, but it is
> narrowed in later patches. The per-cpu vmstat deltas are protected
> by preempt_disable/preempt_enable where necessary instead of
> relying on IRQ disable/enable.

Yes, this goes in the right direction, and I really appreciate the
scoped protection for clarity's sake.
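
For anyone following along, the conversion pattern looks roughly like
the sketch below. This is a minimal illustration, not a hunk from this
series; the names (pcp_locks, pcp_example) are made up for the example:

#include <linux/local_lock.h>

struct pcp_locks {
	local_lock_t lock;
};

static DEFINE_PER_CPU(struct pcp_locks, pcp_locks) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void pcp_example(void)
{
	unsigned long flags;

	/*
	 * On !PREEMPT_RT this maps to local_irq_save(), so the
	 * behaviour is unchanged. On PREEMPT_RT it becomes a per-CPU
	 * spinlock, keeping the section preemptible while still
	 * serializing access to the per-CPU page lists.
	 */
	local_lock_irqsave(&pcp_locks.lock, flags);
	/* ... manipulate the per-cpu pages (PCP) lists here ... */
	local_unlock_irqrestore(&pcp_locks.lock, flags);
}

The win is that the lock now names the data it protects, which a bare
local_irq_save() never does.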

>  #ifdef CONFIG_MEMORY_HOTREMOVE
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 8a8f1a26b231..01b74ff73549 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -887,6 +887,7 @@ void cpu_vm_stats_fold(int cpu)
>  
>  		pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
>  
> +		preempt_disable();

What's the reason for the preempt_disable() here? A comment would be
appreciated.

>  		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
>  			if (pzstats->vm_stat_diff[i]) {
>  				int v;
> @@ -908,6 +909,7 @@ void cpu_vm_stats_fold(int cpu)
>  				global_numa_diff[i] += v;
>  			}
>  #endif
> +		preempt_enable();
>  	}
>  
>  	for_each_online_pgdat(pgdat) {
> @@ -941,6 +943,7 @@ void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *pzstats)
>  {
>  	int i;
>  
> +	preempt_disable();

Same here.

>  	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
>  		if (pzstats->vm_stat_diff[i]) {
>  			int v = pzstats->vm_stat_diff[i];
> @@ -959,6 +962,7 @@ void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *pzstats)
>  			atomic_long_add(v, &vm_numa_stat[i]);
>  		}
>  #endif
> +	preempt_enable();

Thanks,

        tglx
