Message-ID: <a86b81d8-c842-b72a-16d7-f894a469eb91@canonical.com>
Date: Tue, 20 Apr 2021 16:34:16 +0100
From: Colin Ian King <colin.king@...onical.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Mark Rutland <mark.rutland@....com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>, x86@...nel.org,
"H . Peter Anvin" <hpa@...or.com>,
George Dunlap <george.dunlap@...citrix.com>,
kernel-janitors@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] perf/x86: Fix integer overflow when left shifting an
integer more than 32 bits
On 20/04/2021 16:31, Peter Zijlstra wrote:
> On Tue, Apr 20, 2021 at 05:03:03PM +0200, Peter Zijlstra wrote:
>> On Tue, Apr 20, 2021 at 03:29:07PM +0100, Colin King wrote:
>>> From: Colin Ian King <colin.king@...onical.com>
>>>
>>> The 64 bit value read from MSR_ARCH_PERFMON_FIXED_CTR_CTRL is being
>>> bit-wise masked with the value (0x03 << i*4). However, the shifted value
>>> is evaluated using 32 bit arithmetic, so it will overflow when i >= 8.
>>> Fix this by making 0x03 a ULL so that the shift is performed using
>>> 64 bit arithmetic.
>>>
>>> Addresses-Coverity: ("Unintentional integer overflow")
>>
>> Strange tag that, also inaccurate, wide shifts are UB and don't behave
>> consistently.
>>
>> As is, we've not had hardware with that many fixed counters, but yes,
>> worth fixing I suppose.
>
> Patch now reads:
>
> ---
> Subject: perf/x86: Allow for 8<num_fixed_counters<16
> From: Colin Ian King <colin.king@...onical.com>
> Date: Tue, 20 Apr 2021 15:29:07 +0100
>
> From: Colin Ian King <colin.king@...onical.com>
>
> The 64 bit value read from MSR_ARCH_PERFMON_FIXED_CTR_CTRL is being
> bit-wise masked with the value (0x03 << i*4). However, the shifted value
> is evaluated using 32 bit arithmetic, so the shift is undefined behaviour
> when i >= 8. Fix this by making 0x03 a ULL so that the shift is performed
> using 64 bit arithmetic.
>
> This makes the arithmetic internally consistent and prepares for the
> day when hardware provides 8<num_fixed_counters<16.
Yep, that's good. Thanks.
>
> Signed-off-by: Colin Ian King <colin.king@...onical.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> Link: https://lkml.kernel.org/r/20210420142907.382417-1-colin.king@canonical.com
> ---
> arch/x86/events/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -261,7 +261,7 @@ static bool check_hw_exists(void)
> for (i = 0; i < x86_pmu.num_counters_fixed; i++) {
> if (fixed_counter_disabled(i))
> continue;
> - if (val & (0x03 << i*4)) {
> + if (val & (0x03ULL << i*4)) {
> bios_fail = 1;
> val_fail = val;
> reg_fail = reg;
>
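For reference, here is a minimal standalone userspace sketch (not part of
the patch; the test value, variable names and main() are illustrative only)
of why the ULL suffix matters. It does not execute the undefined 32-bit
shift itself; instead it emulates what x86 typically does (shift count
truncated to 5 bits) to show how the old mask silently tests the wrong bits
once i >= 8:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	uint64_t val = 0x03ULL << 36;	/* pretend fixed counter 9 is enabled by the BIOS */
	int i = 9;			/* hypothetical fixed counter index >= 8 */

	/* New 64-bit mask: the shift is well defined for 0 <= i <= 15 */
	uint64_t mask64 = 0x03ULL << i*4;

	/*
	 * Old 32-bit mask: (0x03 << i*4) is undefined behaviour in C for
	 * i >= 8, so emulate the usual x86 outcome, where the shift count
	 * is reduced mod 32 (36 & 31 == 4) and bits 4-5 get tested instead
	 * of bits 36-37.
	 */
	uint32_t mask32 = 0x03u << ((i*4) & 31);

	printf("64-bit mask %#010" PRIx64 ": %s\n", mask64,
	       (val & mask64) ? "counter detected" : "counter missed");
	printf("32-bit mask %#010" PRIx32 ": %s\n", mask32,
	       (val & mask32) ? "counter detected" : "counter missed");

	return 0;
}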