Message-ID: <2efadddc-ebc0-1cdb-5580-4a9ab5610e61@oracle.com>
Date:   Mon, 19 Nov 2018 12:32:49 -0500
From:   Steven Sistare <steven.sistare@...cle.com>
To:     Valentin Schneider <valentin.schneider@....com>, mingo@...hat.com,
        peterz@...radead.org
Cc:     subhra.mazumdar@...cle.com, dhaval.giani@...cle.com,
        daniel.m.jordan@...cle.com, pavel.tatashin@...rosoft.com,
        matt@...eblueprint.co.uk, umgwanakikbuti@...il.com,
        riel@...hat.com, jbacik@...com, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, quentin.perret@....com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 03/10] sched/topology: Provide cfs_overload_cpus bitmap

On 11/9/2018 12:38 PM, Valentin Schneider wrote:
> Hi Steve,
> 
> On 09/11/2018 12:50, Steve Sistare wrote:
> [...]
>> @@ -482,6 +484,10 @@ static void update_top_cache_domain(int cpu)
>>  	dirty_sched_domain_sysctl(cpu);
>>  	destroy_sched_domains(tmp);
>>  
>> +	sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
>> +	cfs_overload_cpus = (sd ? sd->shared->cfs_overload_cpus : NULL);
>> +	rcu_assign_pointer(rq->cfs_overload_cpus, cfs_overload_cpus);
>> +
> 
> Why not do this in update_top_cache_domain() where we also look for the
> highest SD_SHARE_PKG_RESOURCES and setup shortcut pointers?

My snippet needs rq, which is already referenced in cpu_attach_domain() but
not in update_top_cache_domain().  I could just as easily fetch it and do
this in update_top_cache_domain(); either way is fine with me.
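For concreteness, moving it there would look roughly like this (sketch only,
not a tested patch; it uses cpu_rq() to fetch the runqueue that
update_top_cache_domain() does not currently reference, and assumes the
struct sparsemask type from this series):

	static void update_top_cache_domain(int cpu)
	{
		struct sparsemask *cfs_overload_cpus;
		struct rq *rq = cpu_rq(cpu);	/* the reference my snippet needs */
		struct sched_domain *sd;

		/* ... existing sd_llc / sd_numa shortcut setup ... */

		sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
		cfs_overload_cpus = (sd ? sd->shared->cfs_overload_cpus : NULL);
		rcu_assign_pointer(rq->cfs_overload_cpus, cfs_overload_cpus);
	}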

>>  	update_top_cache_domain(cpu);
>>  }
>>  
>> @@ -1619,9 +1625,19 @@ static void __sdt_free(const struct cpumask *cpu_map)
>>  	}
>>  }
>>  
>> +#define ZALLOC_MASK(maskp, nelems, node)				  \
>> +	(!*(maskp) && !zalloc_sparsemask_node(maskp, nelems,		  \
>> +					      SPARSEMASK_DENSITY_DEFAULT, \
>> +					      GFP_KERNEL, node))	  \
>> +
>>  static int sd_llc_alloc(struct sched_domain *sd)
>>  {
>> -	/* Allocate sd->shared data here. Empty for now. */
>> +	struct sched_domain_shared *sds = sd->shared;
>> +	struct cpumask *span = sched_domain_span(sd);
>> +	int nid = cpu_to_node(cpumask_first(span));
>> +
>> +	if (ZALLOC_MASK(&sds->cfs_overload_cpus, nr_cpu_ids, nid))
> 
> Mmm so this is called once on every CPU, but the !*(maskp) check in the
> macro makes it so there is only one allocation per sd_llc_shared.
> 
> I wouldn't mind having that called out in a comment, or having the
> pointer check done explicitly outside of the macro.

OK, will add a comment.  I like the macro because the code is cleaner if/when 
multiple sets are created.
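Something along these lines at the macro definition (comment wording is
illustrative; it assumes zalloc_sparsemask_node() from this series returns
true on success, like zalloc_cpumask_var_node()):

	/*
	 * Allocate *maskp if not already allocated.  sd_llc_alloc() is called
	 * once per CPU, but all CPUs in an LLC share one sched_domain_shared,
	 * so the !*(maskp) check limits this to a single allocation per
	 * sd_llc_shared.  Evaluates true if an allocation was needed and
	 * failed.
	 */
	#define ZALLOC_MASK(maskp, nelems, node)				  \
		(!*(maskp) && !zalloc_sparsemask_node(maskp, nelems,		  \
						      SPARSEMASK_DENSITY_DEFAULT, \
						      GFP_KERNEL, node))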

- Steve
