Message-ID: <69fac460-ff29-ca76-d9a8-d2529cf02fa2@redhat.com>
Date:   Thu, 16 Jun 2022 12:37:13 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Sean Christopherson <seanjc@...gle.com>,
        Like Xu <like.xu.linux@...il.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH 1/2] KVM: vmx, pmu: accept 0 for absent MSRs when
 host-initiated

On 6/15/22 20:52, Sean Christopherson wrote:
> On Thu, Jun 02, 2022, Like Xu wrote:
>> I actually agree, and I understand the maintainers' and reviewers'
>> situation. No one wants to maintain flawed code, especially in this
>> community, where the majority of previous contributors have disappeared
>> after their code was merged. The heavy maintenance burden is already visible.

I don't think this is true.  I think it's relatively rare for 
contributors to disappear.

>> Thus we may have a maintainer/reviewer scalability issue. Due to a lack of
>> trust, competence, or mastery of the rules, most of the patches sent to the
>> list have no one to point out their flaws.
> 
> Then write tests and run the ones that already exist.  Relying purely on reviewers
> to detect flaws does not and cannot scale.  I agree that we currently have a
> scalability issue, but I have different views on how to improve things.
> 
>> I have privately received many complaints about the indifference of our
>> community, which is distressing.

You're welcome to expand on these complaints.  But I suspect that a lot 
of them would come from people who have been told "review other 
people's work", "write tests" and/or "you submitted a broken patch" before.

"Let's try to accept" is basically what I did for PEBS and LBR, both of 
which I merged mostly out of guilt after a little-more-than-cursory 
review.  It turns out that both of them were broken in ways that weren't 
subtle at all; as a result, other work already queued for 5.19 had to 
be bumped to 5.20.

Honestly I should have complained and un-merged them right after seeing 
the msr.flat failure.  Or perhaps I should have just said "write tests 
and then I'll consider the series", but I "tried to accept" and we can 
already see it was a failure.

>> Obviously, "try to accept" is not a 100% commitment and it will fail with high
>> probability, but such a stance (along with standard clarifications and requirements)
>> from reviewers and maintainers will make contributors more conscientious,
>> attract potential volunteers, and focus the efforts of our nominated reviewers.

If it "fails with high probability", all that happened was a waste of 
time for everyone involved, including the submitter, who has waited for 
weeks for a review only to be told "test X fails".

> I completely agree on needing better transparency for the lifecycle of patches
> going through the KVM tree.  First and foremost, there need to be formal, documented
> rules for the "official" kvm/* branches, e.g. everything in kvm/queue passes ABC
> tests, everything in kvm/next also passes XYZ tests.  That would also be a good
> place to document expectations, how things work, etc...

Agreed.  I think this is a more general problem with Linux development 
and I will propose this for the maintainers summit.

But again, the relationship between contributors and maintainers should 
be one of mutual benefit.  Rules help contributors, but contributors 
should themselves behave and not throw broken patches at maintainers.  
And again, what is the best way to tell maintainers that your patch is 
not broken?  Include a test.  It shows that you are paying attention.
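
To make this concrete: a test for this very series would be a couple of 
dozen lines.  Here is a rough sketch (not the actual series' test; the 
helper names come from tools/testing/selftests/kvm as of ~5.18, and the 
checks assume a guest CPUID with the vPMU hidden, which a real test 
would have to set up first):

/*
 * Hedged sketch only: check that a host-initiated write of 0 to an
 * absent PMU MSR is accepted, while a non-zero write still fails.
 * Assumes the vPMU is hidden from the guest CPUID.
 */
#include "test_util.h"
#include "kvm_util.h"
#include "processor.h"

#ifndef MSR_CORE_PERF_GLOBAL_CTRL
#define MSR_CORE_PERF_GLOBAL_CTRL 0x38f	/* IA32_PERF_GLOBAL_CTRL */
#endif

#define VCPU_ID 0

static void guest_code(void)
{
	/* Host-side checks only; the guest never runs. */
}

int main(void)
{
	struct kvm_vm *vm = vm_create_default(VCPU_ID, 0, guest_code);
	int r;

	/* _vcpu_set_msr() returns how many MSRs KVM_SET_MSRS wrote. */
	r = _vcpu_set_msr(vm, VCPU_ID, MSR_CORE_PERF_GLOBAL_CTRL, 0);
	TEST_ASSERT(r == 1, "host-initiated write of 0 should be accepted");

	/* Non-zero writes to an absent MSR must keep failing. */
	r = _vcpu_set_msr(vm, VCPU_ID, MSR_CORE_PERF_GLOBAL_CTRL, 1);
	TEST_ASSERT(r == 0, "host-initiated write of 1 should be rejected");

	kvm_vm_free(vm);
	return 0;
}

Even a sketch like this pins down the intended ABI more precisely than 
a commit message can.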

> I fully realize that writing tests is not glamorous, and that some of KVM's tooling
> and infrastructure is lacking,

I wouldn't say lacking.  Sure, it's complicated, but between selftests 
and kvm-unit-tests the tools *are* there.  selftests that allow you to 
test migration at an instruction boundary, for example, are not that 
hard to write and were very important for features such as nested state 
and AMX.  They're not perfect, but they go a long way towards giving 
confidence in the code; and it's easier to catch weird ioctl policies 
from reviewing comprehensive tests than from reviewing the actual KVM code.
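
For reference, the core of that pattern (condensed and loosely modeled 
on x86_64/state_test.c; helper names as of ~5.18, details elided) fits 
on one screen:

/*
 * Hedged sketch of "migrate at a sync point": run the guest to a
 * GUEST_SYNC(), save all vCPU state, destroy and rebuild the VM,
 * restore the state, and resume the guest exactly where it stopped.
 */
#include <fcntl.h>
#include <stdlib.h>
#include "test_util.h"
#include "kvm_util.h"
#include "processor.h"

#define VCPU_ID 5

static void guest_code(void)
{
	GUEST_SYNC(1);	/* a boundary the host can "migrate" at */
	GUEST_SYNC(2);
	GUEST_DONE();
}

int main(void)
{
	struct kvm_x86_state *state;
	struct kvm_vm *vm;
	struct ucall uc;

	vm = vm_create_default(VCPU_ID, 0, guest_code);

	for (;;) {
		vcpu_run(vm, VCPU_ID);
		if (get_ucall(vm, VCPU_ID, &uc) == UCALL_DONE)
			break;

		/* "Migrate" the vCPU across a VM teardown/rebuild. */
		state = vcpu_save_state(vm, VCPU_ID);
		kvm_vm_release(vm);
		kvm_vm_restart(vm, O_RDWR);
		vm_vcpu_add(vm, VCPU_ID);
		vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
		vcpu_load_state(vm, VCPU_ID, state);
		free(state);	/* newer trees have a dedicated cleanup helper */
	}

	kvm_vm_free(vm);
	return 0;
}

Features like nested state and AMX were validated with exactly this 
kind of round trip.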

We're not talking about something like SEV or TDX here; we're talking 
about very boring MSR emulation and only slightly less boring PMU 
passthrough.

Paolo
