Date:   Mon, 4 Jun 2018 21:28:21 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Rik van Riel <riel@...riel.com>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 13/19] mm/migrate: Use xchg instead of spinlock

On Mon, Jun 04, 2018 at 03:30:22PM +0530, Srikar Dronamraju wrote:
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 8c0af0f..1c55956 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1874,11 +1874,9 @@ static bool numamigrate_update_ratelimit(pg_data_t *pgdat,
>  	 * all the time is being spent migrating!
>  	 */
>  	if (time_after(jiffies, pgdat->numabalancing_migrate_next_window)) {
> -		spin_lock(&pgdat->numabalancing_migrate_lock);
> -		pgdat->numabalancing_migrate_nr_pages = 0;
> -		pgdat->numabalancing_migrate_next_window = jiffies +
> -			msecs_to_jiffies(migrate_interval_millisecs);
> -		spin_unlock(&pgdat->numabalancing_migrate_lock);
> +		if (xchg(&pgdat->numabalancing_migrate_nr_pages, 0))
> +			pgdat->numabalancing_migrate_next_window = jiffies +
> +				msecs_to_jiffies(migrate_interval_millisecs);

Note that both the old and the new code are in fact wrong. That wants to
be something like:

	pgdat->numabalancing_migrate_next_window += interval;

Otherwise every window gets stretched by however late we noticed it had
expired, i.e. by 'jiffies - numabalancing_migrate_next_window'.

Also, that all wants READ_ONCE/WRITE_ONCE, irrespective of the
spinlock/xchg.
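
Something like the below, as a rough and untested sketch (keeping the
patch's xchg form):

	if (time_after(jiffies, READ_ONCE(pgdat->numabalancing_migrate_next_window))) {
		if (xchg(&pgdat->numabalancing_migrate_nr_pages, 0))
			/* advance from the previous deadline, not from 'now' */
			WRITE_ONCE(pgdat->numabalancing_migrate_next_window,
				   pgdat->numabalancing_migrate_next_window +
				   msecs_to_jiffies(migrate_interval_millisecs));
	}

	/* ... and the later unserialized read wants the same treatment: */
	if (READ_ONCE(pgdat->numabalancing_migrate_nr_pages) > ratelimit_pages)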

I suppose the problem here is that PPC has a very nasty test-and-set
spinlock with forward-progress issues, while xchg maps to a fairly
simple ll/sc that (hopefully) has some hardware fairness.

And pgdat being a rather coarse data structure (per node?), there could
be a lot of CPUs stomping on this one field.

So not really simpler, but better for PPC.
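
To illustrate the difference in shape (a purely illustrative
userspace-style sketch with GCC's __atomic builtins; take_pages() is a
made-up name, not the kernel code): the test-and-set lock puts every
CPU into an unbounded software retry loop, while the xchg is a single
atomic op per CPU:

	/* test-and-set spinlock: lose the race and you go around again,
	 * with no guarantee you ever win */
	static void tas_lock(volatile int *lock)
	{
		while (__atomic_exchange_n(lock, 1, __ATOMIC_ACQUIRE)) {
			while (__atomic_load_n(lock, __ATOMIC_RELAXED))
				;	/* wait until it looks free, then race again */
		}
	}

	static void tas_unlock(volatile int *lock)
	{
		__atomic_store_n(lock, 0, __ATOMIC_RELEASE);
	}

	/* the xchg side: one atomic op per CPU, no software retry loop at
	 * the call site; the ll/sc behind it is short and (hopefully)
	 * hardware-fair */
	static unsigned long take_pages(unsigned long *nr_pages)
	{
		return __atomic_exchange_n(nr_pages, 0, __ATOMIC_SEQ_CST);
	}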

>  	}
>  	if (pgdat->numabalancing_migrate_nr_pages > ratelimit_pages) {
>  		trace_mm_numa_migrate_ratelimit(current, pgdat->node_id,
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4526643..464a25c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6208,7 +6208,6 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat)
>  
>  	pgdat_resize_init(pgdat);
>  #ifdef CONFIG_NUMA_BALANCING
> -	spin_lock_init(&pgdat->numabalancing_migrate_lock);
>  	pgdat->numabalancing_migrate_nr_pages = 0;
>  	pgdat->active_node_migrate = 0;
>  	pgdat->numabalancing_migrate_next_window = jiffies;
> -- 
> 1.8.3.1
> 
