Message-ID: <20180418175201.GI4795@pd.tnic>
Date: Wed, 18 Apr 2018 19:52:01 +0200
From: Borislav Petkov <bp@...en8.de>
To: Alexandru Gagniuc <mr.nuke.me@...il.com>
Cc: linux-acpi@...r.kernel.org, linux-edac@...r.kernel.org,
rjw@...ysocki.net, lenb@...nel.org, tony.luck@...el.com,
tbaicar@...eaurora.org, will.deacon@....com, james.morse@....com,
shiju.jose@...wei.com, zjzhang@...eaurora.org,
gengdongjiu@...wei.com, linux-kernel@...r.kernel.org,
alex_gagniuc@...lteam.com, austin_bolen@...l.com,
shyam_iyer@...l.com, devel@...ica.org, mchehab@...nel.org,
robert.moore@...el.com, erik.schmauss@...el.com
Subject: Re: [RFC PATCH v2 2/4] acpi: apei: Split GHES handlers outside of
ghes_do_proc
On Mon, Apr 16, 2018 at 04:59:01PM -0500, Alexandru Gagniuc wrote:
> static void ghes_do_proc(struct ghes *ghes,
> 			 const struct acpi_hest_generic_status *estatus)
> {
> 	int sev, sec_sev;
> 	struct acpi_hest_generic_data *gdata;
> +	const struct ghes_handler *handler;
> 	guid_t *sec_type;
> 	guid_t *fru_id = &NULL_UUID_LE;
> 	char *fru_text = "";
> @@ -478,21 +537,10 @@ static void ghes_do_proc(struct ghes *ghes,
> 		if (gdata->validation_bits & CPER_SEC_VALID_FRU_TEXT)
> 			fru_text = gdata->fru_text;
>
> -		if (guid_equal(sec_type, &CPER_SEC_PLATFORM_MEM)) {
> -			struct cper_sec_mem_err *mem_err = acpi_hest_get_payload(gdata);
> -
> -			ghes_edac_report_mem_error(sev, mem_err);
> -
> -			arch_apei_report_mem_error(sev, mem_err);
> -			ghes_handle_memory_failure(gdata, sev);
> -		}
> -		else if (guid_equal(sec_type, &CPER_SEC_PCIE)) {
> -			ghes_handle_aer(gdata);
> -		}
> -		else if (guid_equal(sec_type, &CPER_SEC_PROC_ARM)) {
> -			struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
>
> -			log_arm_hw_error(err);
> +		handler = get_handler(sec_type);
I don't like this - it was better and more readable before because I could
follow which handler gets called. This change makes it less readable.
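
To make that concrete: with the patch, you now have to go look up a table
to see what runs for a given section type, roughly along the lines of the
sketch below. The names ghes_handlers/get_handler, the struct layout and
the ghes_handle_pcie() wrapper are my guesses from the quoted hunk, not
necessarily what the rest of the patch actually does:

/* Hypothetical shape of the table-driven dispatch. */
struct ghes_handler {
	const guid_t *sec_type;
	void (*handle)(struct acpi_hest_generic_data *gdata, int sev);
};

/* Assumed wrapper so the existing AER helper fits the table signature. */
static void ghes_handle_pcie(struct acpi_hest_generic_data *gdata, int sev)
{
	ghes_handle_aer(gdata);
}

static const struct ghes_handler ghes_handlers[] = {
	{ &CPER_SEC_PCIE, ghes_handle_pcie },
	/* ... one entry per known CPER section type ... */
};

static const struct ghes_handler *get_handler(const guid_t *sec_type)
{
	size_t i;

	for (i = 0; i < ARRAY_SIZE(ghes_handlers); i++)
		if (guid_equal(sec_type, ghes_handlers[i].sec_type))
			return &ghes_handlers[i];

	return NULL;
}

The if/else chain spells the handlers out right at the call site; the table
hides them behind an extra indirection for no real gain here.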
--
Regards/Gruss,
Boris.
Good mailing practices for 400: avoid top-posting and trim the reply.