Message-ID: <20210323001303.GA3092649@gmail.com>
Date: Tue, 23 Mar 2021 01:13:03 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Michael Kelley <mikelley@...rosoft.com>
Cc: Xu Yihang <xuyihang@...wei.com>, KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"bp@...en8.de" <bp@...en8.de>, "x86@...nel.org" <x86@...nel.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"johnny.chenyi@...wei.com" <johnny.chenyi@...wei.com>,
"heying24@...wei.com" <heying24@...wei.com>
Subject: Re: [PATCH -next] x86: Fix unused variable 'msr_val' warning
* Michael Kelley <mikelley@...rosoft.com> wrote:
> From: Ingo Molnar <mingo.kernel.org@...il.com> Sent: Monday, March 22, 2021 2:08 PM
> >
> > * Xu Yihang <xuyihang@...wei.com> wrote:
> >
> > > Fixes the following W=1 kernel build warning(s):
> > > arch/x86/hyperv/hv_spinlock.c:28:16: warning: variable 'msr_val' set but not used [-Wunused-but-set-variable]
> > >   unsigned long msr_val;
> > >
> > > As the Hypervisor Top-Level Functional Specification states in chapter
> > > 7.5 Virtual Processor Idle Sleep State, "A partition which possesses
> > > the AccessGuestIdleMsr privilege (refer to section 4.2.2) may trigger
> > > entry into the virtual processor idle sleep state through a read to the
> > > hypervisor-defined MSR HV_X64_MSR_GUEST_IDLE". That means only a read
> > > is necessary; msr_val is not used, so __maybe_unused should be added.
> > >
> > > Reference:
> > >
> > > https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs
> > >
> > > Reported-by: Hulk Robot <hulkci@...wei.com>
> > > Signed-off-by: Xu Yihang <xuyihang@...wei.com>
> > > ---
> > > arch/x86/hyperv/hv_spinlock.c | 2 +-
> > > 1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
> > > index f3270c1fc48c..67bc15c7752a 100644
> > > --- a/arch/x86/hyperv/hv_spinlock.c
> > > +++ b/arch/x86/hyperv/hv_spinlock.c
> > > @@ -25,7 +25,7 @@ static void hv_qlock_kick(int cpu)
> > >
> > >  static void hv_qlock_wait(u8 *byte, u8 val)
> > >  {
> > > -	unsigned long msr_val;
> > > +	unsigned long msr_val __maybe_unused;
> > >  	unsigned long flags;
> >
> > Please don't add new __maybe_unused annotations to the x86 tree -
> > improve the flow instead to help GCC recognize the initialization
> > sequence better.
> >
> > Thanks,
> >
> > Ingo
>
> Could you elaborate on the thinking here, or point to some written
> discussion? I'm just curious. In this particular case, it's not a problem
> with the flow or gcc detection. This code really does read an MSR and
> ignore the value that is read, so it's not clear how gcc would ever
> figure out that's OK.
Yeah, so the canonical way to signal that the msr_val isn't used would
be to rewrite this as:
	if (READ_ONCE(*byte) == val) {
		unsigned long msr_val;

		rdmsrl(HV_X64_MSR_GUEST_IDLE, msr_val);
		(void)msr_val;
	}
(Also see the patch below - untested.)
This makes it abundantly clear that the rdmsrl() msr_val return value
is not 'maybe' unused, but intentionally discarded.
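
For background on why a destination variable is needed at all: rdmsrl()
is a macro that assigns the MSR value to its second argument, roughly
like this (a simplified, non-paravirt sketch - see
arch/x86/include/asm/msr.h for the real definition):

	#define rdmsrl(msr, val) \
		((val) = native_read_msr((msr)))

So the read cannot simply drop its result - the macro needs an lvalue
to assign to, and the (void) cast then documents that the stored value
is deliberately ignored.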
Thanks,
Ingo

 arch/x86/hyperv/hv_spinlock.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
index f3270c1fc48c..7d948513ed42 100644
--- a/arch/x86/hyperv/hv_spinlock.c
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -25,7 +25,6 @@ static void hv_qlock_kick(int cpu)
 
 static void hv_qlock_wait(u8 *byte, u8 val)
 {
-	unsigned long msr_val;
 	unsigned long flags;
 
 	if (in_nmi())
@@ -48,8 +47,14 @@ static void hv_qlock_wait(u8 *byte, u8 val)
 
 	/*
 	 * Only issue the rdmsrl() when the lock state has not changed.
 	 */
-	if (READ_ONCE(*byte) == val)
+	if (READ_ONCE(*byte) == val) {
+		unsigned long msr_val;
+
 		rdmsrl(HV_X64_MSR_GUEST_IDLE, msr_val);
+
+		(void)msr_val;
+	}
+
 	local_irq_restore(flags);
 }
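
For a quick sanity check outside of a full kernel build, the difference
between the two versions can be reproduced with a stand-alone snippet
(illustrative only - fake_rdmsr() is a made-up stand-in for the MSR
read, not a kernel interface):

	/* repro.c: build with "gcc -c -Wall -Wunused-but-set-variable repro.c" */
	static unsigned long fake_rdmsr(void)
	{
		return 0;
	}

	void wait_old(void)
	{
		unsigned long msr_val;		/* warns: set but not used */

		msr_val = fake_rdmsr();
	}

	void wait_new(void)
	{
		unsigned long msr_val;

		msr_val = fake_rdmsr();
		(void)msr_val;			/* used via (void) cast, no warning */
	}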