Message-ID: <20210822173226.ddekpq7jrjwhsguj@liuwe-devbox-debian-v2>
Date: Sun, 22 Aug 2021 17:32:26 +0000
From: Wei Liu <wei.liu@...nel.org>
To: David Mozes <david.mozes@...k.us>
Cc: Wei Liu <wei.liu@...nel.org>, David Moses <mosesster@...il.com>,
Michael Kelley <mikelley@...rosoft.com>,
תומר אבוטבול
<tomer432100@...il.com>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86/hyper-v: guard against cpu mask changes in
hyperv_flush_tlb_others()
On Sun, Aug 22, 2021 at 04:25:19PM +0000, David Mozes wrote:
> This is not visible since we need a very high load to reproduce it.
> We have tried a lot but cannot reach the required load.
> On our kernel, with less load, it is not reproducible either.
There isn't much upstream can do if there is no way to reproduce the
issue with an upstream kernel.
You can check all the code paths that may modify the cpumask and analyze
them. KCSAN may be useful too, but it is only available in 5.8 and
later.
Thanks,
Wei.
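
[A minimal, purely illustrative sketch of the kind of guard the subject
line describes; this is not the patch under discussion, and the function
name and body are placeholders. The point is only that consulting a single
snapshot of the caller-supplied mask removes the window in which a
concurrent writer can empty it between the cpumask_empty() check and the
later walk.]

#include <linux/cpumask.h>

/*
 * Illustrative only: consult nothing but a local snapshot of the
 * caller-supplied mask, so a concurrent update to the original mask
 * cannot make it empty between the emptiness check and the walk.
 * A real change would avoid an on-stack cpumask_t for large NR_CPUS
 * (e.g. use a preallocated or per-cpu mask instead).
 */
static void example_flush_tlb_others(const struct cpumask *cpus)
{
	cpumask_t snapshot;
	unsigned int cpu;

	cpumask_copy(&snapshot, cpus);

	if (cpumask_empty(&snapshot))
		return;			/* nothing to flush */

	for_each_cpu(cpu, &snapshot) {
		/* build the VP set / issue the hypercall from the snapshot */
	}
}

[KCSAN (CONFIG_KCSAN=y, available since v5.8 as noted above) would report
the racing writer directly instead of only the empty-mask symptom.]
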
>
> -----Original Message-----
> From: Wei Liu <wei.liu@...nel.org>
> Sent: Sunday, August 22, 2021 6:25 PM
> To: David Mozes <david.mozes@...k.us>
> Cc: David Moses <mosesster@...il.com>; Wei Liu <wei.liu@...nel.org>; Michael Kelley <mikelley@...rosoft.com>; תומר אבוטבול <tomer432100@...il.com>; linux-hyperv@...r.kernel.org; linux-kernel@...r.kernel.org
> Subject: Re: [PATCH] x86/hyper-v: guard against cpu mask changes in hyperv_flush_tlb_others()
>
> On Thu, Aug 19, 2021 at 07:55:06AM +0000, David Mozes wrote:
> > Hi Wei ,
> > I move the print cpumask to other two places after the treatment on the empty mask see below
> > And I got the folwing:
> >
> >
> > Aug 19 02:01:51 c-node05 kernel: [25936.562674] Hyper-V: ERROR_HYPERV2: cpu_last=
> > Aug 19 02:01:51 c-node05 kernel: [25936.562686] WARNING: CPU: 11 PID: 56432 at arch/x86/include/asm/mshyperv.h:301 hyperv_flush_tlb_others+0x23f/0x7b0
> >
> > So we got an empty mask at a different place in the code.
> > Let me know if you need further information from us.
> > How do you suggest we handle this situation?
> >
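
[A hypothetical illustration of the kind of debug print described in the
quoted report above, placed at a point where the mask is consulted; the
helper name is made up and the message text merely mirrors the log line,
so this is not the actual debug patch.]

#include <linux/bug.h>
#include <linux/cpumask.h>
#include <linux/printk.h>

/*
 * Hypothetical debug helper: report an unexpectedly empty mask at the
 * point where it is consulted.  With an empty mask, %*pbl prints
 * nothing after "cpu_last=", consistent with the log line above.
 */
static bool example_report_empty_mask(const struct cpumask *cpus)
{
	if (!cpumask_empty(cpus))
		return false;

	pr_err("Hyper-V: ERROR_HYPERV2: cpu_last=%*pbl\n",
	       cpumask_pr_args(cpus));
	WARN_ON_ONCE(1);
	return true;
}
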
>
> Please find a way to reproduce this issue with upstream kernels.
>
> Thanks,
> Wei.