Message-ID: <8de398fa-61c6-368c-33fc-a3fbfd25d881@linux.intel.com>
Date:   Tue, 5 Jan 2021 10:40:11 +0800
From:   "Jin, Yao" <yao.jin@...ux.intel.com>
To:     Jiri Olsa <jolsa@...hat.com>
Cc:     acme@...nel.org, jolsa@...nel.org, peterz@...radead.org,
        mingo@...hat.com, alexander.shishkin@...ux.intel.com,
        Linux-kernel@...r.kernel.org, ak@...ux.intel.com,
        kan.liang@...el.com, yao.jin@...el.com, ying.huang@...el.com
Subject: Re: [PATCH v3] perf stat: Fix wrong skipping for per-die aggregation

Hi Jiri,

On 1/4/2021 6:15 PM, Jiri Olsa wrote:
> On Fri, Dec 25, 2020 at 09:04:09AM +0800, Jin Yao wrote:
> 
> SNIP
> 
>>   void update_stats(struct stats *stats, u64 val)
>> @@ -275,16 +276,39 @@ void evlist__save_aggr_prev_raw_counts(struct evlist *evlist)
>>   
>>   static void zero_per_pkg(struct evsel *counter)
>>   {
>> -	if (counter->per_pkg_mask)
>> -		memset(counter->per_pkg_mask, 0, cpu__max_cpu());
>> +	struct hashmap_entry *entry;
>> +	size_t bkt;
>> +
>> +	if (counter->per_pkg_mask) {
>> +		hashmap__for_each_entry(counter->per_pkg_mask, entry, bkt) {
>> +			bool *used = (bool *)entry->value;
>> +
>> +			*used = false;
>> +		}
>> +	}
>> +}
>> +
>> +static size_t id_hash(const void *key, void *ctx __maybe_unused)
>> +{
>> +	int socket = (int64_t)key >> 32;
>> +
>> +	return socket;
>> +}
>> +
>> +static bool id_equal(const void *key1, const void *key2,
>> +		     void *ctx __maybe_unused)
>> +{
>> +	return (int64_t)key1 == (int64_t)key2;
>>   }
> 
> please use more descriptive names, pkg_id_hash/pkg_id_equal or such
> 

Corrected in v4.
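For reference, the renamed helpers would be along these lines (just a sketch with clearer names, not the exact v4 hunk):

static size_t pkg_id_hash(const void *__key, void *ctx __maybe_unused)
{
	uint64_t key = (uint64_t)__key;

	/* Hash on the socket id (upper 32 bits of the packed key). */
	return key >> 32;
}

static bool pkg_id_equal(const void *__key1, const void *__key2,
			 void *ctx __maybe_unused)
{
	uint64_t key1 = (uint64_t)__key1;
	uint64_t key2 = (uint64_t)__key2;

	/* Compare the full socket+die key. */
	return key1 == key2;
}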

>>   
>>   static int check_per_pkg(struct evsel *counter,
>>   			 struct perf_counts_values *vals, int cpu, bool *skip)
>>   {
>> -	unsigned long *mask = counter->per_pkg_mask;
>> +	struct hashmap *mask = counter->per_pkg_mask;
>>   	struct perf_cpu_map *cpus = evsel__cpus(counter);
>> -	int s;
>> +	int s, d, ret;
>> +	uint64_t key;
>> +	bool *used;
>>   
>>   	*skip = false;
>>   
>> @@ -295,7 +319,7 @@ static int check_per_pkg(struct evsel *counter,
>>   		return 0;
>>   
>>   	if (!mask) {
>> -		mask = zalloc(cpu__max_cpu());
>> +		mask = hashmap__new(id_hash, id_equal, NULL);
>>   		if (!mask)
>>   			return -ENOMEM;
>>   
>> @@ -317,7 +341,32 @@ static int check_per_pkg(struct evsel *counter,
>>   	if (s < 0)
>>   		return -1;
>>   
>> -	*skip = test_and_set_bit(s, mask) == 1;
>> +	/*
>> +	 * On multi-die system, 0 < die_id < 256. On no-die system, die_id = 0.
>> +	 * We use hashmap(socket, die) to check the used socket+die pair.
>> +	 */
>> +	d = cpu_map__get_die(cpus, cpu, NULL).die;
>> +	if (d < 0)
>> +		return -1;
>> +
>> +	key = (uint64_t)s << 32 | (d & 0xff);
>> +	if (hashmap__find(mask, (void *)key, (void **)&used)) {
>> +		if (*used)
>> +			*skip = true;
>> +		*used = true;
>> +	} else {
>> +		used = zalloc(sizeof(*used));
>> +		if (!used)
>> +			return -1;
> 
> hum, what's the point of having extra bool value? once the
> item is in the hashtab, we have the answer
> 
> I think you can add item to hashtab with '1' value and get
> rid of that bool allocation
> 
> zero_per_pkg will be just removing all items from hashtab
> 
> jirka
> 

Thanks for the suggestion! Yes, the extra bool allocation is unnecessary; once the (socket, die) key is in the hashmap, its presence alone already answers the question.
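Something like this untested sketch, assuming the perf copy of the libbpf hashmap API (hashmap__add()/hashmap__clear()) and relying on hashmap__add() returning -EEXIST for a duplicate key:

	/* In check_per_pkg(), after computing s (socket) and d (die): */
	key = (uint64_t)s << 32 | (d & 0xff);

	/* Store a dummy non-NULL value; only the key's presence matters. */
	ret = hashmap__add(mask, (void *)key, (void *)1);
	if (ret == -EEXIST) {
		/* This socket+die pair was already counted, so skip it. */
		*skip = true;
		return 0;
	}

	return ret;

Then zero_per_pkg() simply drops all entries instead of walking them:

static void zero_per_pkg(struct evsel *counter)
{
	if (counter->per_pkg_mask)
		hashmap__clear(counter->per_pkg_mask);
}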

I've just posted v4. Please take a look.

Thanks
Jin Yao
