Message-ID: <CAKMK7uHb2SRBnsNniV4SPHhe-XB+CsWkt0DjE9-vu1t_eJAWxg@mail.gmail.com>
Date:   Fri, 25 Jan 2019 08:43:25 +0100
From:   Daniel Vetter <daniel.vetter@...ll.ch>
To:     Olof Johansson <olof@...om.net>
Cc:     Dave Airlie <airlied@...il.com>,
        Oded Gabbay <oded.gabbay@...il.com>,
        Jerome Glisse <jglisse@...hat.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        LKML <linux-kernel@...r.kernel.org>, ogabbay@...ana.ai,
        Arnd Bergmann <arnd@...db.de>, fbarrat@...ux.ibm.com,
        Andrew Donnellan <andrew.donnellan@....ibm.com>
Subject: Re: [PATCH 00/15] Habana Labs kernel driver

On Fri, Jan 25, 2019 at 1:14 AM Olof Johansson <olof@...om.net> wrote:
>
> On Thu, Jan 24, 2019 at 2:23 AM Dave Airlie <airlied@...il.com> wrote:
> >
> > > I know I won't be able to convince you but I want to say that I think
> > > your arguments for full userspace open source are not really
> > > technical.
> >
> > There is more to keeping a kernel going than technical argument unfortunately.
> >
> > I guess the question for Greg, Olof etc. is: do we care about Linux the
> > kernel, or Linux the open source ecosystem? If the former, these sorts
> > of accelerator shim drivers are fine: useless to anyone who doesn't
> > have all the magic hidden userspace, and impossible to support for
> > anyone else. If the latter, we should leave the cost of maintenance to
> > the company benefiting from it and leave maintaining it out of tree.
>
> As mentioned in my reply to Daniel, I think we've got a history of
> being pragmatic and finding reasonable trade-offs of what can be open
> and what can be closed. For example, if truly care about open source
> ecosystem, drivers that require closed firmware should also be
> refused.

Firmware has traditionally been different since usually it's locked
down, doesn't do much wrt functionality (dumb fifo scheduling at best,
no real power management) and so could be reasonably shrugged off
as "it's part of the hw". If you care about the open graphics ecosystem,
i.e. your ability to port the stack to new cpu architectures, new
window systems (e.g. android -> xorg, or xorg -> android, or something
entirely new like wayland), or a new, more efficient client interface
(vulkan is a fairly new fad), then having closed firmware is not going
to be a problem. A closed compiler, closed runtime, closed anything else
otoh is a serious practical pain.

Unfortunately hw vendors seem to have realized that we (overall
community of customers, distro, upstream) are not insisting on open
firmware, so they're moving a lot of "valuable sauce" (no really, it's
not) into the firmware. PM governors, cpu scheduling algorithms, that
kind of stuff. We're not pleased, and there's lots of people doing the
behind the scenes work to fix it. One practical problem is that even
though we've demonstrated that r/e'ing a uc is no bigger a challenge
than anything else, there's usually this pesky issue with signatures.
So we can't force the vendors' hand like we can on the userspace side.
Otherwise nouveau would have completely open firmware even for the
latest chips (like it has for older ones).
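To make the signature issue concrete: the device's boot ROM only accepts firmware whose signature verifies against a key fused into the hardware, so even a byte-perfect r/e'd replacement gets rejected. A minimal sketch of that check (Python, with an HMAC standing in for the vendor's real asymmetric signing scheme; all names here are hypothetical):

```python
import hmac
import hashlib

VENDOR_KEY = b"secret-key-fused-into-silicon"  # only the vendor holds this

def sign(fw: bytes, key: bytes) -> bytes:
    """Vendor-side signing step (we never get to run this with the real key)."""
    return hmac.new(key, fw, hashlib.sha256).digest()

def boot_rom_accepts(fw: bytes, sig: bytes) -> bool:
    """Device-side check: reject any firmware not signed with the fused key."""
    return hmac.compare_digest(sign(fw, VENDOR_KEY), sig)

vendor_fw = b"\x90" * 64                    # the vendor's blob
open_fw = b"\x90" * 32 + b"\xcc" * 32       # an r/e'd replacement

vendor_sig = sign(vendor_fw, VENDOR_KEY)
print(boot_rom_accepts(vendor_fw, vendor_sig))  # True: vendor blob loads
print(boot_rom_accepts(open_fw, vendor_sig))    # False: replacement rejected
```

This is why r/e alone can't solve the firmware side: without the signing key, a functionally equivalent open firmware simply won't boot.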

> > A simple question: if I plug your accelerator into Power or ARM64,
> > where do I get the port of your userspace to use it?
>
> Does demanding complete open userspace get us closer to that goal in
> reality? By refusing to work with people to enable their hardware,
> they will still ship their platforms out of tree, using DKMS and all
> the other ways of getting kernel modules installed to talk to the
> hardware. And we'd be no closer.
>
> In the end, they'd open up their userspace when there's business
> reasons to do so. It's well-known how to work around refusal from us
> to merge drivers by now, so it's not much leverage in that area.

Correct. None of the hw vendors had a business reason to open source
anything, unfortunately. Yes, eventually customers started demanding
open source and threatening to buy the competition, but this only works
if you have multiple reasonably performant & conformant stacks for
different vendors. The only way to get these is to reverse engineer
them.

Now reverse-engineering is a major pain in itself (despite all the
great tooling gpu folks developed over the past 10 years to convert it
from a black art to a repeatable engineering exercise), but if you
additionally prefer the vendor's closed stack (which you do by allowing
them to get merged) the r/e'd stack has no chance. And there is no
other way to get your open source stack. I can't really go into all
the details of the past 15+ years of open source gpus, but without the
pressure of other r/e'd stacks and the pressure of having stacks for
competitors (all made possible through aggressive code sharing) we
would have 0 open source gfx stacks. All the ones we have either got
started with r/e first (and eventually the vendor jumped on board) or
survived through r/e and customer efforts (because the vendor planned
to abandon them). Another part of this is that we accept userspace only
when it's the common upstream (if there is one), to prevent vendors
closing down their stacks gradually.

So yeah I think by not clearly preferring open source over
stacks-with-blobs (how radically you do that is a bit of a balancing
act in the end, I think we've maxed out in drivers/gpu on what's
practically possible) you'll just make sure that there's never going
to be a serious open source stack.

> > I'm not the final arbiter on this sort of thing, but I'm definitely
> > going to make sure that anyone who lands this code is explicit in
> > ignoring any experience we've had in this area and in the future will
> > gladly accept "I told you so" :-)
>
> There's only one final arbiter on any inclusion to code to the kernel,
> but we tend to sort out most disagreements without going all the way
> there.
>
> I still think engaging has a better chance of success than rejecting
> the contributions, especially with clear expectations w.r.t. continued
> engagement and no second implementations over time. In all honesty,
> either approach might fail miserably.

This is maybe not clear, but we still work together with the blob
folks as much as possible; case in point: nvidia sponsored XDC this
year, and nvidia engineers have been regularly presenting there.
Collaboration happens around the driver interfaces, like loaders (in
userspace), buffer sharing, synchronization, negotiation of buffer
formats and all that stuff. Do as much engaging as possible, but if
you give preferential treatment to the closed stacks over the open
ones (and by default the vendor _always_ gives you a closed stack, or
as closed as possible; there's just no business case for them to open
up without a customer demanding it and competition providing it too),
you will end up with a closed stack for a very long time, maybe
forever.
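The buffer-sharing collaboration mentioned above ultimately reduces to passing file descriptors (dma-buf fds, in the real stack) between processes over a Unix socket with SCM_RIGHTS, so open and closed drivers can still exchange buffers through a common interface. A rough userspace sketch of just the fd-passing half (Python 3.9+ on Linux, with an ordinary temp file standing in for a real dma-buf):

```python
import os
import socket
import tempfile

# Two "processes" (here just two socket ends) sharing a buffer by fd.
exporter_sock, importer_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Stand-in for a dma-buf: any fd can ride over SCM_RIGHTS.
buf_fd, path = tempfile.mkstemp()
os.write(buf_fd, b"pixel data")

# Exporter side: send the fd itself, not the buffer contents.
socket.send_fds(exporter_sock, [b"buf"], [buf_fd])

# Importer side: receive a fresh fd referring to the same underlying buffer.
msg, fds, flags, addr = socket.recv_fds(importer_sock, 16, 1)
imported_fd = fds[0]
os.lseek(imported_fd, 0, os.SEEK_SET)
data = os.read(imported_fd, 32)
print(data)  # b'pixel data' - same buffer, zero copies

os.close(buf_fd)
os.close(imported_fd)
os.unlink(path)
```

The point of standardizing this kind of plumbing is exactly what's argued above: the interfaces stay common and open even when one side's driver stack is a blob.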

Even if you insist on an open stack it's going to take years, since
the only way to get there is lots of r/e, and you need to have at
least 2 stacks, since otherwise the customers can't walk away from the
negotiation table. So again from gfx experience: The only way to get
open stacks is solid competition by open stacks, and customers/distros
investing ridiculous amounts of money to r/e the chips and write these
open&cross vendor stacks. The business case for vendors to open source
their stacks is just not there. Not until they can't sell their chips
any other way anymore (nvidia will embrace open stacks as soon as
their margins evaporate, not a second earlier, like all the others
before them). Maybe at the next hallway track we need to go through a
few examples of what all happened and is still happening in the
background (here is maybe not the best place for that).
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
