Message-ID: <Zuh32evWMcs8hTAM@google.com>
Date: Mon, 16 Sep 2024 11:24:25 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [GIT PULL] KVM: x86 pull requests for 6.12

On Sun, Sep 15, 2024, Paolo Bonzini wrote:
> On Sat, Sep 14, 2024 at 4:54 PM Paolo Bonzini <pbonzini@...hat.com> wrote:
> >
> > On Sat, Sep 14, 2024 at 3:13 AM Sean Christopherson <seanjc@...gle.com> wrote:
> > > There's a trivial (and amusing) conflict with KVM s390 in the selftests pull
> > > request (we both added "config" to the .gitignore, within a few days of each
> > > other, after the goof had been around for a good year or more).
> > >
> > > Note, the pull requests are relative to v6.11-rc4.  I got a late start, and for
> > > some reason thought kvm/next would magically end up on rc4 or later.
> > >
> > > Note #2, I had a brainfart and put the testcase for verifying KVM's fastpath
> > > correctly exits to userspace when needed in selftests, whereas the actual KVM
> > > fix is in misc.  So if you run KVM selftests in the middle of pulling everything,
> > > expect the debug_regs test to fail.
> >
> > Pulled all, thanks. Due to a combination of recovering from the flu and
> > preparing to travel, I will probably not be able to run tests for a few
> > days, but everything should be okay for the merge window.
> 
> Hmm, I tried running tests in a slightly non-standard way (compiling
> the will-be-6.12 code on a 6.10 kernel and installing the module)
> because that's what I could do for now, and I'm getting system hangs
> in a few tests. The first ones that hung were
> 
> hyperv_ipi
> hyperv_tlb_flush

This one failing gives me hope that it's some weird combination of 6.10 and the
for-6.12 code.  Off the top of my head, I can't think of any relevant changes.

FWIW, I haven't been able to reproduce any failures with kvm/next+kvm-x86/next,
on AMD or Intel.
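
For reference, a minimal sketch of how the named tests can be built and run
from an in-tree checkout (this assumes the usual kselftests layout; the exact
commands are illustrative, not quoted from this thread):

  # from the top of the kernel tree
  make headers
  make -C tools/testing/selftests TARGETS=kvm
  ./tools/testing/selftests/kvm/x86_64/hyperv_ipi
  ./tools/testing/selftests/kvm/x86_64/hyperv_tlb_flush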

> xapic_ipi_test
> 
> And of course, this is on a machine that doesn't have a serial
> console... :( I think for now I'll push the non-x86 stuff to kvm/next
> and then either bisect or figure out how to run tests normally.
