Message-ID: <ZmB0s7UsfSe90kqr@google.com>
Date: Wed, 5 Jun 2024 07:22:43 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Reinette Chatre <reinette.chatre@...el.com>
Cc: isaku.yamahata@...el.com, pbonzini@...hat.com, erdemaktas@...gle.com,
vkuznets@...hat.com, vannapurve@...gle.com, jmattson@...gle.com,
mlevitsk@...hat.com, xiaoyao.li@...el.com, chao.gao@...el.com,
rick.p.edgecombe@...el.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V5 4/4] KVM: selftests: Add test for configure of x86 APIC
bus frequency
On Tue, Jun 04, 2024, Reinette Chatre wrote:
> > > +/*
> > > + * Pick one convenient value, 1.5GHz. No special meaning and different from
> > > + * the default value, 1GHz.
> >
> > I have no idea where the 1GHz comes from. KVM doesn't force a default TSC, KVM
> > uses the underlying CPU's frequency. Peeking further ahead, I don't understand
> > why this test sets KVM_SET_TSC_KHZ. That brings in a whole different set of
> > behavior, and that behavior is already verified by tsc_scaling_sync.c.
> >
> > I suspect/assume this test forces a frequency so that it can hardcode the math,
> > but (a) that's odd and (b) x86 selftests really should provide a udelay() so that
> > goofy stuff like this doesn't end up in random tests.
>
> I believe the "default 1GHz" actually refers to the default APIC bus frequency and
> the goal was indeed to (a) make the TSC frequency different from APIC bus frequency,
> and (b) make math easier.
>
> Yes, there is no need to use KVM_SET_TSC_KHZ. An implementation of udelay() would
> require calibration; to make this simple for KVM I think we can just use
> KVM_GET_TSC_KHZ. For now I continue to open code this (see later) since I did not
> notice similar patterns in existing tests that would need such a utility. I'd be
> happy to add a utility if the needed usage pattern is clear; the closest candidate
> I could find was xapic_ipi_test.c, which does not have a nop loop.
Please add a utility. ARM and RISC-V already implement udelay(), and this isn't
the first test that's wanted udelay() functionality. At the very least, it'd be
nice to have for debug, e.g. to be able to insert artificial delay if a test is
failing due to a suspected race. I.e. this is likely an "if you build it, they
will come" situation.
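E.g. a minimal guest-side sketch (completely untested; assumes the test plumbs
the host's KVM_GET_TSC_KHZ value into the guest, the global's name is purely
illustrative, and rdtsc() and cpu_relax() already exist in processor.h):

	static uint64_t guest_tsc_khz;

	static void udelay(uint64_t usec)
	{
		/* kHz == cycles per millisecond, so scale by usec / 1000. */
		uint64_t cycles = guest_tsc_khz * usec / 1000;
		uint64_t start = rdtsc();

		/* Spin until the requested number of TSC cycles has elapsed. */
		while (rdtsc() - start < cycles)
			cpu_relax();
	}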
> > Unless I'm misremembering, the timer still counts when the LVT entry is masked
> > so just mask the IRQ in the LVT. Or rather, keep the entry masked in the LVT.
>
> hmmm ... I do not think this is specific to the LVT entry but instead an attempt
> to ignore all maskable external interrupts that may interfere with the test.
I doubt it. And if that really is the motivation, it's a fool's errand. This
is _guest_ code. Disabling IRQs in the guest doesn't prevent host IRQs from being
delivered, it only blocks virtual IRQs. And KVM selftests guests should never
receive virtual IRQs unless the test itself explicitly sends them.
> The LVT entry is prevented from triggering because of the large configuration value.
Yes and no. A large configuration value _should_ avoid a timer IRQ, but the
entire point of this test is to verify KVM correctly emulates the timer. If this
test does its job and finds a KVM bug that causes the timer to prematurely expire,
then unmasking the LVT entry will generate an unexpected IRQ.
Of course, the test doesn't configure a legal vector so the IRQ will never be
delivered. We could fix that problem, but then a test failure would manifest as
a hard-to-triage unexpected event, compared to an explicit TEST_ASSERT() on the
timer value.
That said, I'm not totally opposed to letting the guest die if KVM unexpectedly
injects a timer IRQ, e.g. if all is well, it's a cheap way to provide a bit of
extra coverage. On the other hand, masking the interrupt is even simpler, and
the odds of a false pass are low.
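E.g. (untested, and going from memory of the selftests xAPIC helpers in apic.h)
keeping the entry masked boils down to something like:

	/* Mask the timer IRQ; the current count ticks regardless. */
	xapic_write_reg(APIC_LVTT, APIC_LVT_TIMER_ONESHOT | APIC_LVT_MASKED);
	xapic_write_reg(APIC_TMICT, ~0u);

	udelay(1000);

	/* Assert on the count directly instead of waiting for an IRQ. */
	GUEST_ASSERT(xapic_read_reg(APIC_TMCCT) < ~0u);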