Message-ID: <CALs-Hss8KNV9zkEHujVmzb7Rf8Zp9OGkDF9Fj8BCjQuFDYaHHA@mail.gmail.com>
Date:   Mon, 28 Aug 2023 10:18:24 -0700
From:   Evan Green <evan@...osinc.com>
To:     Conor Dooley <conor@...nel.org>
Cc:     Palmer Dabbelt <palmer@...osinc.com>,
        Anup Patel <apatel@...tanamicro.com>,
        Albert Ou <aou@...s.berkeley.edu>,
        Heiko Stuebner <heiko@...ech.de>,
        Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        Conor Dooley <conor.dooley@...rochip.com>,
        Palmer Dabbelt <palmer@...belt.com>,
        Bagas Sanjaya <bagasdotme@...il.com>,
        Paul Walmsley <paul.walmsley@...ive.com>,
        linux-riscv@...ts.infradead.org,
        Andrew Jones <ajones@...tanamicro.com>
Subject: Re: [PATCH v5] RISC-V: Show accurate per-hart isa in /proc/cpuinfo

On Mon, Aug 28, 2023 at 9:53 AM Conor Dooley <conor@...nel.org> wrote:
>
> On Mon, Aug 28, 2023 at 09:24:03AM -0700, Evan Green wrote:
> > On Sat, Aug 26, 2023 at 2:56 AM Conor Dooley <conor@...nel.org> wrote:
> > >
> > > On Sat, Aug 26, 2023 at 12:26:25AM +0100, Conor Dooley wrote:
> > > > On Fri, Aug 25, 2023 at 04:11:38PM -0700, Evan Green wrote:
> > > > > In /proc/cpuinfo, most of the information we show for each processor is
> > > > > specific to that hart: marchid, mvendorid, mimpid, processor, hart,
> > > > > compatible, and the mmu size. But the ISA string gets filtered through a
> > > > > lowest common denominator mask, so that if one CPU is missing an ISA
> > > > > extension, no CPUs will show it.
> > > > >
> > > > > Now that we track the ISA extensions for each hart, let's report ISA
> > > > > extension info accurately per-hart in /proc/cpuinfo. We cannot change
> > > > > the "isa:" line, as usermode may be relying on that line to show only
> > > > > the common set of extensions supported across all harts. Add a new "hart
> > > > > isa" line instead, which reports the true set of extensions for that
> > > > > hart.
> > > > >
> > > > > Signed-off-by: Evan Green <evan@...osinc.com>
> > > > > Reviewed-by: Andrew Jones <ajones@...tanamicro.com>
> > > >
> > > > > Reviewed-by: Conor Dooley <conor.dooley@...rochip.com>
> > > >
> > > > Can you drop this if you repost?
> >
> > Will do.
> >
> > > >
> > > > > +"isa" vs "hart isa" lines in /proc/cpuinfo
> > > > > +------------------------------------------
> > > > > +
> > > > > +The "isa" line in /proc/cpuinfo describes the lowest common denominator of
> > > > > +RISC-V ISA extensions recognized by the kernel and implemented on all harts. The
> > > > > +"hart isa" line, in contrast, describes the set of extensions recognized by the
> > > > > +kernel on the particular hart being described, even if those extensions may not
> > > > > +be present on all harts in the system.
> > > >
> > > > > In both cases, the presence of a feature
> > > > > +in these lines guarantees only that the hardware has the described capability.
> > > > > +Additional kernel support or policy control changes may be required before a
> > > > > +feature is fully usable by userspace programs.
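
(Purely as an illustration of the distinction, not text from the patch: on a
hypothetical two-hart system where only hart 1 implements Zbb, the relevant
/proc/cpuinfo entries would look roughly like

processor   : 0
hart        : 0
isa         : rv64imafdc
hart isa    : rv64imafdc

processor   : 1
hart        : 1
isa         : rv64imafdc
hart isa    : rv64imafdc_zbb

with "isa" staying the lowest common denominator across harts and "hart isa"
describing only the hart at hand. Exact spacing and surrounding fields are
from memory, not from the patch.)
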
> > > >
> > > > I do not think that "in both cases" matches the expectations of
> > > > userspace for the existing line. It's too late at night for me to think
> > > > properly, but I think our existing implementation does work like you
> > > > have documented for FD/V. I think I previously mentioned that it could
> > > > misreport things for vector during the review of the vector series but
> > > > forgot about it until now.
> > >
> > > I went and checked, and yes, it does currently do that for vector. I
> > > don't think that is what userspace would expect; the Google
> > > cpu_features project, for example, would draw incorrect conclusions.
> >
> > I'm lost, could you explain a little more?
>
> There (may be/)are userspace programs that will interpret the appearance
> of extensions in cpuinfo as meaning they can be used without performing
> any further checks.
>
> > My goal was to say that
> > there's no blanket guarantee that the feature is 100% ready to go for
> > userspace just because it's seen here.
>
> Right. I was agreeing that this is true, but it is also not how some
> userspace programs have interpreted things. Consider a platform & kernel
> that support the V extension but vector has not been enabled by default
> or by early userspace. If someone cats cpuinfo, they'll see v there, but
> it won't be usable. That Google cpu_features project (or a punter) may
> then assume they can use it, as that's been the case so far in general*.
>
> The caveat producing the * is that the same problem actually exists
> for F/D too, AFAICT, but it's likely that nobody really encountered it
> as they didn't build non-FP userspaces & then try to use FP in some
> userspace applications.
>
> > For some extensions, it may in
> > fact end up meaning just that (hence the "additional ... may be
> > required" rather than "is required").
>
> > This is true for FD (maybe,
> > depending on history?),
>
> AFAICT, it's not true for FD. The FPU config option not being set, or
> either of F and D being missing will lead to unusable extensions
> appearing.

Ah ok.

>
> > or extensions whose minimal/zero kernel
> > support was added unconditionally at the same time as the parsing for
> > them. But it's not true solely by virtue of appearing in /proc/cpuinfo. In
> > other words, I'm trying to establish the floor of what /proc/cpuinfo
> > guarantees, without fully specifying the ceiling.
>
> > Are you saying that
> > we need to spell out the guarantees for each extension?
>
> No, I don't want that!
>
> > Or are you
> > saying the floor I've defined in general is incorrect or insufficient?
>
> I think the floor that you have defined is probably misleading to users.
> It's also the floor that has existed for quite a while, so this might be
> a case of the userspace devs messing up due to an absence of any
> explanation of what to do here.
> Things will get abhorrently messy if we try to do what these userspace
> programs expect, and I don't think we should go there. We just need to
> bear in mind that the behaviour we have & the behaviour that you are
> documenting flies in the face of what some userspace expects.

Thanks, I think I understand now. You're saying the floor I'm defining
might surprise some users, who were expecting the floor to be "fully
enabled and ready to party". Given there was no documentation about it
before, and this documentation is consistent with what we actually do
(and there seems to be consensus this is a maintainable position to
hold), can we just tell those users they're holding it wrong?

>
> > I'm also open to direct suggestions of wording if you've got something
> > in mind :)
>
> Someone mentioned it recently, but it really is starting to feel more
> and more like lscpu should grow support for hwprobe and funnel people
> into using that instead of /proc/cpuinfo when all they want is to see
> what their hardware can do.

Maybe for the fiddly microarchitectural bits, yeah. But I'd think our
newly proposed documentation for /proc/cpuinfo, keeping it closer to
what the hardware can do, would suit the lscpu folks' mission well. (In
ChromeOS at least, we didn't have lscpu, but snarfed /proc/cpuinfo
directly into feedback reports that consented to sending along system
info.) Really, I'd think it's the application/library writers who want
to know "am I ready to go right now" that we should be pushing to use
hwprobe, since we can define those bits to be as specific as we want
(e.g. V is on AND it's a full moon, so go for it).
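
For what it's worth, a minimal sketch of that hwprobe-style check, assuming a
kernel and headers that provide the riscv_hwprobe syscall,
RISCV_HWPROBE_KEY_IMA_EXT_0, and the RISCV_HWPROBE_IMA_V bit (details from
memory, so treat it as illustrative rather than authoritative):

/* Rough sketch only: ask the kernel whether V is usable right now.
 * Assumes headers new enough for <asm/hwprobe.h> and __NR_riscv_hwprobe;
 * error handling kept minimal. */
#include <asm/hwprobe.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct riscv_hwprobe pair = {
		.key = RISCV_HWPROBE_KEY_IMA_EXT_0,
	};

	/* Passing an empty CPU set asks about all online harts, i.e. the
	 * common answer, which is what an "am I ready right now" check
	 * wants (check the hwprobe docs if that recollection is off). */
	if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) != 0) {
		perror("riscv_hwprobe");
		return 1;
	}

	if (pair.value & RISCV_HWPROBE_IMA_V)
		printf("V is usable here and now\n");
	else
		printf("V is not usable (absent, or not enabled for userspace)\n");

	return 0;
}

The point being that the answer comes from the kernel's "usable right now"
view, rather than from whatever the hardware happens to advertise in cpuinfo.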

Depending on your thoughts on this, if there are changes requested on
this patch, let me know what they are.
-Evan
