Message-ID: <20240313-goat-of-inescapable-prowess-4f22ad@carbon>
Date: Wed, 13 Mar 2024 15:41:06 -0700
From: Matt Wilson <msw@...ux.com>
To: Vegard Nossum <vegard.nossum@...cle.com>
Cc: Matt Wilson <msw@...ux.com>, Jonathan Corbet <corbet@....net>,
	cve@...nel.org, linux-kernel@...r.kernel.org,
	linux-doc@...r.kernel.org, security@...nel.org,
	Kees Cook <keescook@...omium.org>,
	Konstantin Ryabitsev <konstantin@...uxfoundation.org>,
	Krzysztof Kozlowski <krzk@...nel.org>,
	Lukas Bulwahn <lukas.bulwahn@...il.com>,
	Sasha Levin <sashal@...nel.org>, Lee Jones <lee@...nel.org>,
	Pavel Machek <pavel@...x.de>, John Haxby <john.haxby@...cle.com>,
	Marcus Meissner <meissner@...e.de>,
	Vlastimil Babka <vbabka@...e.cz>,
	Roxana Bradescu <roxabee@...omium.org>,
	Solar Designer <solar@...nwall.com>, Matt Wilson <msw@...zon.com>
Subject: Re: [RFC PATCH 2/2] doc: distros: new document about assessing
 security vulnerabilities

On Wed, Mar 13, 2024 at 02:11:00PM +0100, Vegard Nossum wrote:
> 
> On 11/03/2024 18:59, Matt Wilson wrote:
> > There have been occurrences where CVSSv3.1 scores produced by a
> > software vendor are ignored when the score in the NVD is higher
> > (often 9.8 due to NIST's standard practice in producing CVSS scores
> > from "Incomplete Data" [1]). I don't know that harmonizing the
> > practice of producing CVSSv3.1 base scores across Linux vendors will
> > address the problem unless the scores made available in the NVD match.
> 
> That link actually says they would use 10.0 for CVEs without enough
> detail provided by the filer/CNA (as I understood it).

Indeed, the web page says that it would be 10.0 in cases where there
is no detail about the weakness. In practice, the score tends to come
out as 9.8 because the base score vector is more often
   CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
than
   CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H
(which would score 10.0).
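
For the curious, here is a minimal sketch of the CVSSv3.1 base score
equations from the FIRST specification, hard-coding the metric values
shared by the two vectors above (vector parsing and the other metric
levels are omitted); it reproduces the 9.8 and 10.0 results:

    # Minimal sketch of the CVSSv3.1 base score equations (FIRST
    # specification, section 7), modeling only the metrics used above.
    def roundup(x):
        # CVSSv3.1 Roundup: smallest one-decimal value >= x, done with
        # integer math as the specification prescribes.
        i = round(x * 100000)
        return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

    def base_score(scope_changed):
        # AV:N/AC:L/PR:N/UI:N and C:H/I:H/A:H, per the vectors above.
        av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85
        c = i = a = 0.56
        iss = 1 - (1 - c) * (1 - i) * (1 - a)
        if scope_changed:
            impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        else:
            impact = 6.42 * iss
        if impact <= 0:
            return 0.0
        exploitability = 8.22 * av * ac * pr * ui
        score = impact + exploitability
        if scope_changed:
            score = 1.08 * score
        return roundup(min(score, 10))

    print(base_score(scope_changed=False))  # S:U -> 9.8
    print(base_score(scope_changed=True))   # S:C -> 10.0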

What's the key difference between 9.8 and 10.0 in the CVSSv3.1 system?
Scope:Unchanged. In CVSSv4 such a weakness would likely be scored
   CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N
which comes out to a CVSS-B of 9.3 (Critical).

What does any of this information tell a practitioner about what
actions are warranted in light of the presence of a software weakness
in their environment? Not much, from my personal perspective.

> I wonder what their strategy would be for all of these new kernel CVEs
> -- should we expect to see 10.0 or 9.8 for all of them, do you know? I
> assume they do NOT have people to evaluate all these patches in detail.

At present, and since mid-February, NIST has not been enriching newly
allocated CVEs with CVSS base scores or other additional data. Their
website displays the following banner text:

    NIST is currently working to establish a consortium to address
    challenges in the NVD program and develop improved tools and
    methods. You will temporarily see delays in analysis efforts
    during this transition. We apologize for the inconvenience and ask
    for your patience as we work to improve the NVD program.

I expect the path forward will be a topic of discussion among
attendees at the upcoming CVE/FIRST VulnCon 2024 & Annual CNA Summit [1].

> > If the guide has something to say about CVSS, I (speaking only for
> > myself) would like for it to call out the hazards that the system
> > presents. I am not convinced that CVSS can be applied effectively in
> > the context of the kernel, and would rather this section call out all
> > the reasons why it's a fool's errand to try.
> 
> I also heard this concern privately from somebody else.
> 
> I am considering replacing the CVSS part with something else. To be
> honest, the part that really matters to reduce duplicated work for
> distros is the reachability analysis (including the necessary conditions
> to trigger the bug) and the potential outcomes of triggering the bug.
> Once you have those, scoring for impact, risk, etc. can be done fairly
> easily (at least more easily) in different systems and taking
> distro-specific constraints (configuration, mitigations, etc.) into account.

Distros are not the only downstream consumers of Linux with this
need. Arguably the need is even greater for some consumer electronics
applications that may not have the same over-the-air update
capabilities as something like an Android phone. This is a frequent,
and increasingly common, topic of discussion at Embedded Linux
conferences. See, for example, [2, 3].

I think that one coarse-grained "reachability" analysis is CONFIG_*
based matching [4, 5], though that's not necessarily directly reusable
across distros or other downstream users of Linux (as their Kconfigs
aren't necessarily the same). But perhaps some community-maintained
tooling to automate that analysis would be useful; a sketch of the
idea follows.
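
To illustrate what such tooling might look like (this is my own
illustrative Python, not the tooling from [4, 5]; the Makefile parsing
is deliberately naive and ignores composite objects, nested
conditionals, and ifdef'd code within files):

    # Heuristic sketch: does a kernel .config build the files a fix touches?
    import os
    import re

    def enabled_options(config_path):
        # Collect CONFIG_* options set to y or m in a kernel .config.
        opts = set()
        with open(config_path) as f:
            for line in f:
                m = re.match(r"(CONFIG_\w+)=(y|m)", line)
                if m:
                    opts.add(m.group(1))
        return opts

    def config_gate(source_tree, patched_file):
        # Find the CONFIG_* option guarding patched_file in its Makefile,
        # e.g. "obj-$(CONFIG_FOO) += foo.o". Returns None if we find no gate.
        stem = os.path.splitext(os.path.basename(patched_file))[0]
        makefile = os.path.join(source_tree,
                                os.path.dirname(patched_file), "Makefile")
        pattern = re.compile(r"\$\((CONFIG_\w+)\)\s*[+:]?=.*\b"
                             + re.escape(stem) + r"\.o\b")
        with open(makefile) as f:
            for line in f:
                m = pattern.search(line)
                if m:
                    return m.group(1)
        return None

    def possibly_built(source_tree, config_path, patched_files):
        # Yield files that are built, or that we cannot prove are not built.
        opts = enabled_options(config_path)
        for path in patched_files:
            gate = config_gate(source_tree, path)
            if gate is None or gate in opts:
                yield path, gate

    # Hypothetical input: a file touched by some fix.
    for path, gate in possibly_built("/usr/src/linux", "/usr/src/linux/.config",
                                     ["drivers/net/tun.c"]):
        print(f"{path} may be built (gated by {gate or 'no gate found'})")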

Many in the security community are rightly skeptical about
"reachability analysis" given the possibility of constructing "weird
machines" [6] from executable code that is present but not normally
reached. But if you can confidently attest that the weakness is not
present in a produced binary, you can safely say that the weakness is
not a factor, and poses no legitimate security risk.

Your current draft security assessment guide says:
> A distro may wish to start by checking whether the file(s) being
> patched are even compiled into their kernel; if not, congrats!
> You're not vulnerable and don't really need to carry out a more
> detailed analysis.

One research group [7] found, in a study of 127 router firmware
images, that 68% of all naive version-based CVE matches were false
positives that could be filtered out, mainly by determining that the
code containing a weakness was never compiled.
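
A crude version of that filter can be run against a completed build
tree, since Kbuild leaves an object file next to every source file it
actually compiled (assuming an in-tree build, not O=). A sketch, taking
a hypothetical CVE-to-patched-files mapping as input, however it was
obtained:

    # Sketch: filter version-based CVE matches by whether the patched
    # sources actually produced objects in a completed kernel build tree.
    import os

    def was_compiled(build_tree, source_file):
        # An in-tree Kbuild run leaves foo.o beside each compiled foo.c.
        obj = os.path.splitext(source_file)[0] + ".o"
        return os.path.exists(os.path.join(build_tree, obj))

    def filter_matches(build_tree, cve_to_files):
        # Keep only CVEs where at least one patched file was compiled.
        return {cve: files for cve, files in cve_to_files.items()
                if any(was_compiled(build_tree, f) for f in files)}

    # Hypothetical input: version-based matches and their patched files.
    matches = {
        "CVE-XXXX-0001": ["drivers/net/tun.c"],
        "CVE-XXXX-0002": ["drivers/scsi/megaraid.c"],
    }
    print(filter_matches("/usr/src/linux", matches))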

I think this low-hanging fruit is ripe for picking and deserves some
more content in a weakness assessment guide.

(P.S., "weakness" is an intentional word choice)

--msw

[1] https://www.first.org/conference/vulncon2024/
[2] https://elinux.org/images/0/0a/Open-Source-CVE-Monitoring-and-Management-V3.pdf
[3] https://www.timesys.com/security/evaluating-vulnerability-tools-embedded-linux-devices/
[4] https://ossjapan2022.sched.com/event/1D14m/config-based-cve-matching-for-linux-kernel-takuma-kawai-miraxia-edge-technology-corporation
[5] https://www.miraxia.com/en/engineers-blog/config-based-cve-matching-for-linux-kernel/
[6] https://ieeexplore.ieee.org/document/8226852
[7] https://arxiv.org/pdf/2209.05217.pdf
