Open Source and information security mailing list archives
Date:   Fri, 8 Jan 2021 20:55:27 +0800
From:   Lecopzer Chen <lecopzer.chen@...iatek.com>
To:     <sumit.garg@...aro.org>
CC:     <acme@...nel.org>, <alexander.shishkin@...ux.intel.com>,
        <alexandru.elisei@....com>, <catalin.marinas@....com>,
        <jolsa@...hat.com>, <lecopzer.chen@...iatek.com>,
        <linux-arm-kernel@...ts.infradead.org>,
        <linux-kernel@...r.kernel.org>,
        <linux-mediatek@...ts.infradead.org>, <mark.rutland@....com>,
        <matthias.bgg@...il.com>, <mingo@...hat.com>,
        <namhyung@...nel.org>, <oliver.sang@...el.com>,
        <peterz@...radead.org>, <will@...nel.org>, <yj.chiang@...iatek.com>
Subject: Re: [PATCH v2] arm64: perf: Fix access percpu variables in preemptible context

Hi Sumit,
 
Thanks for your reply.
 
> On Mon, 21 Dec 2020 at 21:53, Lecopzer Chen <lecopzer.chen@...iatek.com> wrote:
> >
> > commit 367c820ef08082 ("arm64: Enable perf events based hard lockup detector")
> > reinitializes the lockup detector after the arm64 PMU is initialized, which
> > opens a window for calling smp_processor_id() in preemptible context:
> > hardlockup_detector_perf_init() is normally called in the early init stage
> > on a single CPU, but here the lockup detector is reinitialized after the
> > init task has become migratable.
> >
> > Fix this by using the lockup detector reconfiguration path, which calls
> > softlockup_start_all() on each CPU and then calls watchdog_nmi_enable().
> > Because softlockup_start_all() uses an IPI function call, it guarantees
> > that watchdog_nmi_enable() runs bound to each CPU, which fixes the issue.
> 
> IMO, this just creates unnecessary dependency for hardlockup detector
> init via softlockup detector (see the alternative definition of
> lockup_detector_reconfigure()).


arm64's Kconfig selects HAVE_HARDLOCKUP_DETECTOR_PERF when NMI support is available:
	select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI

And in lib/Kconfig.debug, HARDLOCKUP_DETECTOR_PERF selects SOFTLOCKUP_DETECTOR automatically:
	config HARDLOCKUP_DETECTOR_PERF
		bool
		select SOFTLOCKUP_DETECTOR

So we don't need to select softlockup explicitly.
And as you said, this patch is indeed not a perfect solution
(hardlockup ends up depending on softlockup), but the key point is that
lockup_detector_init() seems to be designed only for use in the early init
stage, not for being called from a later, deferred initialization path.


> 
> >
> >     BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
> 
> How about just the below fix in order to make CONFIG_DEBUG_PREEMPT happy?
> 
> diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
> index 247bf0b1582c..db06ee28f48e 100644
> --- a/kernel/watchdog_hld.c
> +++ b/kernel/watchdog_hld.c
> @@ -165,7 +165,7 @@ static void watchdog_overflow_callback(struct
> perf_event *event,
> 
>  static int hardlockup_detector_event_create(void)
>  {
> -       unsigned int cpu = smp_processor_id();
> +       unsigned int cpu = raw_smp_processor_id();
>         struct perf_event_attr *wd_attr;
>         struct perf_event *evt;
> 

This won't solve the underlying issue, which is that arm64 calls this in
preemptible context. I was trying to find a solution that both keeps
CONFIG_DEBUG_PREEMPT happy and calls lockup_detector_init() in
non-preemptible context.

watchdog_nmi_probe() and the subsequent hardlockup_detector_event_create()
use this_cpu_read()/this_cpu_write(), so the real question is
'how do we safely call lockup_detector_init() from preemptible context?'

We can't simply wrap lockup_detector_init() in preempt_disable()/preempt_enable(),
because its call tree allocates with kmalloc(GFP_KERNEL), which may sleep and
would therefore trip the might_sleep() check.
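To make that constraint concrete, here is an illustrative kernel-side fragment
(a sketch only, not buildable standalone) of the naive fix and why it cannot work:

```c
/* Naive attempt -- does NOT work: */
preempt_disable();
lockup_detector_init();	/* ends up in hardlockup_detector_event_create() */
preempt_enable();

/*
 * Inside that call tree, the perf event is created via
 * perf_event_create_kernel_counter(), which allocates with GFP_KERNEL,
 * i.e. it may sleep. might_sleep() then warns because we are in atomic
 * context with preemption disabled.
 */
```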


v2 is the smallest change I could find that solves this.
But the drawback, again, is that hardlockup depends on softlockup.

Another option might be to execute lockup_detector_init() in a kernel
thread bound to a single CPU.
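A rough sketch of that bound-thread idea (illustrative only; the function
names defer_lockup_detector_init() and lockup_detector_init_thread() are
made up for this example):

```c
/* Sketch: run lockup_detector_init() on a thread pinned to one CPU. */
static int lockup_detector_init_thread(void *unused)
{
	/* Bound to CPU 0, so smp_processor_id() is stable here. */
	lockup_detector_init();
	return 0;
}

static void defer_lockup_detector_init(void)
{
	struct task_struct *t;

	t = kthread_create(lockup_detector_init_thread, NULL, "lockup_init");
	if (!IS_ERR(t)) {
		kthread_bind(t, 0);	/* must bind before first wakeup */
		wake_up_process(t);
	}
}
```

This avoids the softlockup dependency, at the cost of a one-shot kthread.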


BRs,
Lecopzer


> -Sumit
> 
> >     caller is debug_smp_processor_id+0x20/0x2c
> >     CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.10.0+ #276
> >     Hardware name: linux,dummy-virt (DT)
> >     Call trace:
> >       dump_backtrace+0x0/0x3c0
> >       show_stack+0x20/0x6c
> >       dump_stack+0x2f0/0x42c
> >       check_preemption_disabled+0x1cc/0x1dc
> >       debug_smp_processor_id+0x20/0x2c
> >       hardlockup_detector_event_create+0x34/0x18c
> >       hardlockup_detector_perf_init+0x2c/0x134
> >       watchdog_nmi_probe+0x18/0x24
> >       lockup_detector_init+0x44/0xa8
> >       armv8_pmu_driver_init+0x54/0x78
> >       do_one_initcall+0x184/0x43c
> >       kernel_init_freeable+0x368/0x380
> >       kernel_init+0x1c/0x1cc
> >       ret_from_fork+0x10/0x30
> 
