Message-ID: <20210105171054.7s2ggrlbsod7pigo@liuwe-devbox-debian-v2>
Date: Tue, 5 Jan 2021 17:10:54 +0000
From: Wei Liu <wei.liu@...nel.org>
To: Michael Kelley <mikelley@...rosoft.com>
Cc: Wei Liu <wei.liu@...nel.org>, Sasha Levin <sashal@...nel.org>,
vkuznets <vkuznets@...hat.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>, "x86@...nel.org" <x86@...nel.org>,
"hpa@...or.com" <hpa@...or.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"stable@...nel.org" <stable@...nel.org>,
KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>
Subject: Re: [PATCH] x86/hyper-v: guard against cpu mask changes in
hyperv_flush_tlb_others()
On Tue, Jan 05, 2021 at 04:59:10PM +0000, Michael Kelley wrote:
> From: Wei Liu <wei.liu@...nel.org> Sent: Monday, October 5, 2020 7:59 AM
> >
> > On Sat, Oct 03, 2020 at 05:40:15PM +0000, Michael Kelley wrote:
> > > From: Sasha Levin <sashal@...nel.org> Sent: Thursday, October 1, 2020 6:04 AM
> > > >
> > > > On Thu, Oct 01, 2020 at 11:53:59AM +0000, Wei Liu wrote:
> > > > >On Thu, Oct 01, 2020 at 11:40:04AM +0200, Vitaly Kuznetsov wrote:
> > > > >> Sasha Levin <sashal@...nel.org> writes:
> > > > >>
> > > > >> > cpumask can change underneath us, which is generally safe except when we
> > > > >> > call into hv_cpu_number_to_vp_number(): if cpumask ends up empty we pass
> > > > >> > num_possible_cpus() into hv_cpu_number_to_vp_number(), causing it to read
> > > > >> > garbage. As reported by KASAN:
> > > > >> >
> > > > >> > [ 83.504763] BUG: KASAN: slab-out-of-bounds in hyperv_flush_tlb_others (include/asm-generic/mshyperv.h:128 arch/x86/hyperv/mmu.c:112)
> > > > >> > [ 83.908636] Read of size 4 at addr ffff888267c01370 by task kworker/u8:2/106
> > > > >> > [ 84.196669] CPU: 0 PID: 106 Comm: kworker/u8:2 Tainted: G W 5.4.60 #1
> > > > >> > [ 84.196669] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090008 12/07/2018
> > > > >> > [ 84.196669] Workqueue: writeback wb_workfn (flush-8:0)
> > > > >> > [ 84.196669] Call Trace:
> > > > >> > [ 84.196669] dump_stack (lib/dump_stack.c:120)
> > > > >> > [ 84.196669] print_address_description.constprop.0 (mm/kasan/report.c:375)
> > > > >> > [ 84.196669] __kasan_report.cold (mm/kasan/report.c:507)
> > > > >> > [ 84.196669] kasan_report (arch/x86/include/asm/smap.h:71 mm/kasan/common.c:635)
> > > > >> > [ 84.196669] hyperv_flush_tlb_others (include/asm-generic/mshyperv.h:128 arch/x86/hyperv/mmu.c:112)
> > > > >> > [ 84.196669] flush_tlb_mm_range (arch/x86/include/asm/paravirt.h:68 arch/x86/mm/tlb.c:798)
> > > > >> > [ 84.196669] ptep_clear_flush (arch/x86/include/asm/tlbflush.h:586 mm/pgtable-generic.c:88)
> > > > >> >
> > > > >> > Fixes: 0e4c88f37693 ("x86/hyper-v: Use cheaper HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE} hypercalls when possible")
> > > > >> > Cc: Vitaly Kuznetsov <vkuznets@...hat.com>
> > > > >> > Cc: stable@...nel.org
> > > > >> > Signed-off-by: Sasha Levin <sashal@...nel.org>
> > > > >> > ---
> > > > >> > arch/x86/hyperv/mmu.c | 4 +++-
> > > > >> > 1 file changed, 3 insertions(+), 1 deletion(-)
> > > > >> >
> > > > >> > diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
> > > > >> > index 5208ba49c89a9..b1d6afc5fc4a3 100644
> > > > >> > --- a/arch/x86/hyperv/mmu.c
> > > > >> > +++ b/arch/x86/hyperv/mmu.c
> > > > >> > @@ -109,7 +109,9 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
> > > > >> > * must. We will also check all VP numbers when walking the
> > > > >> > * supplied CPU set to remain correct in all cases.
> > > > >> > */
> > > > >> > - if (hv_cpu_number_to_vp_number(cpumask_last(cpus)) >= 64)
> > > > >> > + int last = cpumask_last(cpus);
> > > > >> > +
> > > > >> > + if (last < num_possible_cpus() && hv_cpu_number_to_vp_number(last) >= 64)
> > > > >> > goto do_ex_hypercall;
> > > > >>
> > > > >> In case 'cpus' can end up being empty (I'm genuinely surprised it can)
> > > >
> > > > I was just as surprised as you and spent the good part of a day
> > > > debugging this. However, a:
> > > >
> > > > WARN_ON(cpumask_empty(cpus));
> > > >
> > > > triggers at that line of code even though we check for cpumask_empty()
> > > > at the entry of the function.
> > >
> > > What does the call stack look like when this triggers? I'm curious about
> > > the path where the 'cpus' could be changing while the flush call is in
> > > progress.
> > >
> > > I wonder if CPUs could ever be added to the mask? Removing CPUs can
> > > be handled with some care because an unnecessary flush doesn't hurt
> > > anything. But adding CPUs has serious correctness problems.
> > >
> >
> > The cpumask_empty check is done before disabling irq. Is it possible
> > the mask is modified by an interrupt?
> >
> > If there is a reliable way to trigger this bug, we may be able to test
> > the following patch.
> >
> > diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
> > index 5208ba49c89a..23fa08d24c1a 100644
> > --- a/arch/x86/hyperv/mmu.c
> > +++ b/arch/x86/hyperv/mmu.c
> > @@ -66,11 +66,13 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
> > if (!hv_hypercall_pg)
> > goto do_native;
> >
> > - if (cpumask_empty(cpus))
> > - return;
> > -
> > local_irq_save(flags);
> >
> > + if (cpumask_empty(cpus)) {
> > + local_irq_restore(flags);
> > + return;
> > + }
> > +
> > flush_pcpu = (struct hv_tlb_flush **)
> > this_cpu_ptr(hyperv_pcpu_input_arg);
>
> This thread died out 3 months ago without any patches being taken.
> I recently hit the problem again at random, though not in a
> reproducible way.
>
> I'd like to take Wei Liu's latest proposal to check for an empty
> cpumask *after* interrupts are disabled. I think this will almost
> certainly solve the problem, and in a cleaner way than Sasha's
> proposal. I'd also suggest adding a comment in the code to note
> the importance of the ordering.
>
Sure. Let me prepare a proper patch.
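
For reference, a rough sketch of what I have in mind -- the same
reordering as in the snippet above, plus a comment on the ordering as
you suggested. The comment wording and hunk offsets below are
illustrative only, not the final patch:

diff --git a/arch/x86/hyperv/mmu.c b/arch/x86/hyperv/mmu.c
--- a/arch/x86/hyperv/mmu.c
+++ b/arch/x86/hyperv/mmu.c
@@ -66,11 +66,18 @@ static void hyperv_flush_tlb_others(const struct cpumask *cpus,
 	if (!hv_hypercall_pg)
 		goto do_native;
 
-	if (cpumask_empty(cpus))
-		return;
-
 	local_irq_save(flags);
 
+	/*
+	 * Only check the mask after interrupts have been disabled: checking
+	 * it earlier leaves a window in which the mask can be emptied
+	 * before it is consumed below.
+	 */
+	if (cpumask_empty(cpus)) {
+		local_irq_restore(flags);
+		return;
+	}
+
 	flush_pcpu = (struct hv_tlb_flush **)
 			this_cpu_ptr(hyperv_pcpu_input_arg);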
Wei.