Message-ID: <8d826b53a3fc453ba1c468aaf8eb2e75@huawei.com>
Date:   Tue, 6 Oct 2020 16:13:56 +0000
From:   Shiju Jose <shiju.jose@...wei.com>
To:     James Morse <james.morse@....com>
CC:     Borislav Petkov <bp@...en8.de>,
        "linux-edac@...r.kernel.org" <linux-edac@...r.kernel.org>,
        "linux-acpi@...r.kernel.org" <linux-acpi@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "tony.luck@...el.com" <tony.luck@...el.com>,
        "rjw@...ysocki.net" <rjw@...ysocki.net>,
        "lenb@...nel.org" <lenb@...nel.org>,
        Linuxarm <linuxarm@...wei.com>,
        Jonathan Cameron <jonathan.cameron@...wei.com>
Subject: RE: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on
 short time period

Hi James,

Thanks for the reply and the information shared.

>-----Original Message-----
>From: James Morse [mailto:james.morse@....com]
>Sent: 02 October 2020 18:33
>To: Shiju Jose <shiju.jose@...wei.com>
>Cc: Borislav Petkov <bp@...en8.de>; linux-edac@...r.kernel.org; linux-
>acpi@...r.kernel.org; linux-kernel@...r.kernel.org; tony.luck@...el.com;
>rjw@...ysocki.net; lenb@...nel.org; Linuxarm <linuxarm@...wei.com>
>Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on
>short time period
>
>Hi Shiju,
>
>On 02/10/2020 16:38, Shiju Jose wrote:
>>> -----Original Message-----
>>> From: Borislav Petkov [mailto:bp@...en8.de]
>>> Sent: 02 October 2020 13:44
>>> To: Shiju Jose <shiju.jose@...wei.com>
>>> Cc: linux-edac@...r.kernel.org; linux-acpi@...r.kernel.org; linux-
>>> kernel@...r.kernel.org; tony.luck@...el.com; rjw@...ysocki.net;
>>> james.morse@....com; lenb@...nel.org; Linuxarm
><linuxarm@...wei.com>
>>> Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count
>>> check on short time period
>>>
>>> On Fri, Oct 02, 2020 at 01:22:28PM +0100, Shiju Jose wrote:
>>>> Open questions based on the feedback from Boris:
>>>> 1. The ARM processor error types are cache/TLB/bus errors.
>>>>    [Reference: N2.4.4.1 ARM Processor Error Information, UEFI Spec v2.8]
>>>>    Should any of these error types be excluded from the error
>>>>    collection and CPU core isolation?
>
>Boris' earlier example was that bus errors have very little to do with the CPU.
>It may just be that this CPU is handling the IRQs for a faulty device, and is
>thus receiving the errors. irqbalance could change that at any time.
>
>I'd prefer we just stick with the caches for now.
>
[...]

>
>>> Open question from James with my reply to it:
>>>
>>> On Thu, Oct 01, 2020 at 06:16:03PM +0100, James Morse wrote:
>>>> If the corrected-count is available somewhere, can't this policy be
>>>> made in user-space?
>
>> The error count is present in struct cper_arm_err_info, but the fields
>> of this structure are not reported to user-space through trace events.
>
>> Presently only the fields of struct cper_sec_proc_arm are reported
>> to user-space, through the trace-arm-event.
>> Also, there can be multiple cper_arm_err_info records per cper_sec_proc_arm.
>> Thus I think this needs reporting through a new trace event?
>
>I think it would be more useful to feed this into edac, like ghes.c already
>does for memory errors. These would end up as corrected-error counts on
>devices for L3 or whatever.
>
>This saves tying your user-space component to the arm-specific CPER record
>format, or even to firmware-first, meaning it's useful to the widest number
>of people.
>
>
>> Also, the logical index of the CPU, I think, needs to be extracted from
>> the 'mpidr' field of struct cper_sec_proc_arm using the platform-dependent
>> kernel function get_logical_index().
>> Thus the cpu index also needs to be reported to user-space.
>
>I thought you were talking about caches. These structures have a 'level' for
>cache errors.
>
>Certainly you need a way of knowing which cache it is, and from that number
>you should also be able to work out which CPUs it is attached to.
>
>x86 already has a way of doing this:
>https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/x86/resctrl_ui.rst#n327
>
>arm64 doesn't have anything equivalent, but my current proposal for MPAM
>is here:
>https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git/commit/?h=mpam/snapshot/feb&id=ce3148bd39509ac8b12f5917f0f92ce014a5b22f
>
>I was hoping the PPTT table would grow something we could use as an ID, but
>I've not seen anything yet.

Please find below the pseudocode we have drafted for the kernel side, to make
sure we have correctly understood your suggestions.
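
For reference, these are the CPER structures the pseudocode below consumes,
abridged from include/linux/cper.h (field comments trimmed):

struct cper_sec_proc_arm {
	u32 validation_bits;
	u16 err_info_num;	/* number of cper_arm_err_info records */
	u16 context_info_num;
	u32 section_length;
	u8  affinity_level;
	u8  reserved[3];
	u64 mpidr;
	u64 midr;
	u32 running_state;
	u32 psci_state;
};

struct cper_arm_err_info {
	u8  version;
	u8  length;
	u16 validation_bits;
	u8  type;		/* cache/TLB/bus/micro-architectural */
	u16 multiple_error;
	u8  flags;
	u64 error_info;
	u64 virt_fault_addr;
	u64 physical_fault_addr;
};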

1. Create edac device and edac device sysfs entries for the online CPU caches.

drivers/edac/edac_device.c
struct edac_device_ctl_info *edac_device_add_cache(unsigned int id, u8 level, u8 type)
{
	...
	/* Check whether an edac entry for this cache is already present */
	edev_cache = find_edac_device_cache(id, level, type);
	if (edev_cache)
		return edev_cache;

	edev_cache = edac_device_alloc_ctrl_info(...);
	if (!edev_cache)
		return NULL;

	rc = edac_device_add_device(edev_cache);
	if (rc)
		goto exit;

	/* Store edev_cache for future use */
	...
	return edev_cache;

exit:
	...
	return NULL;
}
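
As an aside on the allocation elided above, one possible shape for the call,
using the existing edac_device_alloc_ctrl_info() signature (the "cpu_cache"
name, the block naming and the use of id as the device index are placeholders,
not settled):

	/* no private data, one instance, one block, no extra sysfs attributes */
	edev_cache = edac_device_alloc_ctrl_info(0, "cpu_cache", 1,
						 "cache", 1, 0,
						 NULL, 0, id);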

drivers/base/cacheinfo.c
int cache_create_edac_entries(u64 mpidr, u8 cache_level, u8 cache_type)
{
	...
	/* Get the cacheinfo for each online CPU */
	for_each_online_cpu(i) {
		struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(i);
		struct cacheinfo *ci;

		if (!cpu_ci || !cpu_ci->info_list)
			continue;
		...
		/* id/level/type live in the per-leaf struct cacheinfo */
		for (ci = cpu_ci->info_list;
		     ci < cpu_ci->info_list + cpu_ci->num_leaves; ci++) {
			/* Add the edac entry for this CPU cache */
			edev_cache = edac_device_add_cache(ci->id, ci->level, ci->type);
			if (!edev_cache)
				break;
		}
		...
	}
	...
}
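
The cacheinfo structures referenced above, abridged from
include/linux/cacheinfo.h:

struct cacheinfo {
	unsigned int		id;
	enum cache_type		type;
	unsigned int		level;
	...
};

struct cpu_cacheinfo {
	struct cacheinfo	*info_list;
	unsigned int		num_levels;
	unsigned int		num_leaves;
	...
};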
     
unsigned int cache_get_cache_id(u64 proc_id, u8 cache_level, u8 cache_type)
{
	unsigned int cache_id = 0;
	...
	/* Walk the online CPUs looking for a matching cache leaf */
	for_each_online_cpu(i) {
		struct cpu_cacheinfo *cpu_ci = get_cpu_cacheinfo(i);
		struct cacheinfo *ci;

		if (!cpu_ci || !cpu_ci->info_list)
			continue;

		id = CONV(proc_id);	/* need to check */
		for (ci = cpu_ci->info_list;
		     ci < cpu_ci->info_list + cpu_ci->num_leaves; ci++) {
			if (id == ci->id && cache_level == ci->level &&
			    cache_type == ci->type) {
				cache_id = ci->id;
				goto out;
			}
		}
	}
out:
	return cache_id;
}
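
On the CONV() placeholder above: the MPIDR-to-logical-index conversion we
mentioned earlier could presumably use arm64's get_logical_index() from
arch/arm64/include/asm/smp_plat.h, which is roughly:

	/* Returns the logical cpu index for a given MPIDR, or -EINVAL */
	static inline int get_logical_index(u64 mpidr)
	{
		int cpu;

		for (cpu = 0; cpu < nr_cpu_ids; cpu++)
			if (cpu_logical_map(cpu) == mpidr)
				return cpu;
		return -EINVAL;
	}

Whether that logical index can be compared directly against cacheinfo's id
still needs checking, as noted in the code.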

2. Store CPU CE count in the edac sysfs entry for the CPU cache.

drivers/edac/ghes_edac.c
void ghes_edac_report_cpu_error(int cache_id, u8 cache_level, u8 cache_type, u32 ce_count)
{
	...
	/* Check whether an edac entry for this cache is present; if not, add one */
	edev_cache = find_edac_device_cache(cache_id, cache_level, cache_type);
	if (!edev_cache) {
		/* Add the edac entry for the cache */
		edev_cache = edac_device_add_cache(cache_id, cache_level, cache_type);
		if (!edev_cache)
			return;
	}

	/* Store the ce_count in /sys/devices/system/edac/cpu/cpu<no>/L<N>cache/ce_count */
	edac_device_handle_ce_count(edev_cache, ce_count, ...);
}
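
For the trailing arguments elided above, we would presumably mirror the
existing edac_device_handle_ce() helper, which takes an instance number,
a block number and a message; the values here are illustrative only:

	edac_device_handle_ce_count(edev_cache, ce_count, 0, 0,
				    "CPU cache corrected error");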
 
drivers/acpi/apei/ghes.c
void ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata)
{
	...
	if (sec_sev != GHES_SEV_CORRECTED)
		return;
	mpidr = cper_sec_proc_arm->mpidr;
	for (i = 0; i < cper_sec_proc_arm->err_info_num; i++) {
		if (cper_arm_err_info->type != CPER_ARM_CACHE_ERROR)
			continue;
		ce_count = cper_arm_err_info->multiple_error + 1;
		cache_type = cper_arm_err_info->type;
		/* cache level is in bits <24:22> of error_info (UEFI spec) */
		cache_level = (cper_arm_err_info->error_info >> 22) & 0x7;
		cache_id = cache_get_cache_id(mpidr, cache_level, cache_type);
		if (!cache_id)
			continue;
		ghes_edac_report_cpu_error(cache_id, cache_level, cache_type, ce_count);
	}
	...
	return;
}
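
A small aside on the bit extraction above: with the kernel's bitfield helpers
(include/linux/bitfield.h) that line could also be written as, assuming the
<24:22> cache-level field from the UEFI spec:

	cache_level = FIELD_GET(GENMASK_ULL(24, 22), cper_arm_err_info->error_info);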

>
>
>>> You mean rasdaemon goes and offlines CPUs when certain thresholds are
>>> reached? Sure. It would be much more flexible too.
>
[...]
>
>
>Thanks,
>
>James

Thanks,
Shiju
