Date: Tue, 4 Jun 2024 17:46:35 +0200
From: Borislav Petkov <bp@...en8.de>
To: Yazen Ghannam <yazen.ghannam@....com>
Cc: linux-edac@...r.kernel.org, linux-kernel@...r.kernel.org,
	tony.luck@...el.com, x86@...nel.org, avadhut.naik@....com,
	john.allen@....com
Subject: Re: [PATCH 8/9] x86/mce/amd: Enable interrupt vectors once per-CPU
 on SMCA systems

On Thu, May 23, 2024 at 10:56:40AM -0500, Yazen Ghannam wrote:
>  static bool thresholding_irq_en;
>  static DEFINE_PER_CPU_READ_MOSTLY(mce_banks_t, mce_thr_intr_banks);
>  static DEFINE_PER_CPU_READ_MOSTLY(mce_banks_t, mce_dfr_intr_banks);
> +static DEFINE_PER_CPU_READ_MOSTLY(bool, smca_thr_intr_enabled);
> +static DEFINE_PER_CPU_READ_MOSTLY(bool, smca_dfr_intr_enabled);

So before you add those, we already have:

static DEFINE_PER_CPU_READ_MOSTLY(struct smca_bank[MAX_NR_BANKS], smca_banks);
static DEFINE_PER_CPU_READ_MOSTLY(u8[N_SMCA_BANK_TYPES], smca_bank_counts);
static DEFINE_PER_CPU(struct threshold_bank **, threshold_banks);
static DEFINE_PER_CPU(u64, bank_map);
static DEFINE_PER_CPU(u64, smca_misc_banks_map);

Please think of a proper struct which collects all that info in the
smallest possible format and unify everything.

It is a mess currently.

> +/*
> + * Enable the APIC LVT interrupt vectors once per-CPU. This should be done before hardware is
> + * ready to send interrupts.
> + *
> + * Individual error sources are enabled later during per-bank init.
> + */
> +static void smca_enable_interrupt_vectors(struct cpuinfo_x86 *c)
> +{
> +	u8 thr_offset, dfr_offset;
> +	u64 mca_intr_cfg;
> +
> +	if (!mce_flags.smca || !mce_flags.succor)
> +		return;
> +
> +	if (c == &boot_cpu_data) {
> +		mce_threshold_vector		= amd_threshold_interrupt;
> +		deferred_error_int_vector	= amd_deferred_error_interrupt;
> +	}

Nah, this should be done differently: you define a function
cpu_mca_init() which you call from early_identify_cpu(). In it, you do
the proper checks and assign those two vectors above. That in
a pre-patch.

Then, the rest becomes per-CPU code which you simply run in
mce_amd_feature_init(), diligently, one thing after the other.

And then you don't need smca_{dfr,thr}_intr_enabled anymore because you
know the vectors are enabled once setup_APIC_eilvt() has run.

IOW, mce_amd_feature_init() does *all* per-CPU MCA init on AMD and it is
all concentrated in one place and not spread around.

I think this should be a much better cleanup.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette
