Message-ID: <CADXzsihPteW5_gjg94AVd=Fa1fgAsdAR7rvz+4VK_fZMHUtjfw@mail.gmail.com>
Date:   Mon, 21 Aug 2023 19:18:28 -0700
From:   Raj J Putari <jmaharaj2013@...il.com>
To:     "Enrico Weigelt, metux IT consult" <info@...ux.net>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        dri-devel@...ts.freedesktop.org
Subject: Re: using gpu's to accelerate the linux kernel

nice read!

I was thinking of a kernel module that offloads some work to the GPU,
something like a gpuaccel.ko that wraps GPU calls for things like compiles
or other heavy computation. I looked up a few APIs, and it looks like
OpenCL and CUDA are aimed at 3D-style computation, so getting at the GPU's
compute internals would take some hacking; I'm not sure it's possible.

It would be awesome if we could offload some compilation work from cc and
c++ to the GPU, if the technology is available (maybe with AMD?).

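(For reference, a minimal userland sketch of what such a compute offload
looks like today with OpenCL, assuming an installed OpenCL runtime/ICD and
a GPU device; the kernel source string and buffer size are made up for
illustration. Note that this is all userspace library code, which is the
point made in the reply quoted below.)

/* a userland OpenCL sketch: double 1024 floats on the GPU
 * build with: gcc offload_sketch.c -lOpenCL
 * (error handling omitted to keep the sketch short) */
#define CL_TARGET_OPENCL_VERSION 120
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *buf) {\n"
    "    size_t i = get_global_id(0);\n"
    "    buf[i] = buf[i] * 2.0f;\n"
    "}\n";

int main(void)
{
    float data[1024];
    for (int i = 0; i < 1024; i++)
        data[i] = (float)i;

    cl_platform_id platform;
    cl_device_id dev;
    cl_int err;

    /* pick the first platform/GPU the ICD loader knows about */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    /* the compute kernel is compiled at runtime, entirely in userland */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, &err);
    clSetKernelArg(k, 0, sizeof(buf), &buf);

    size_t global = 1024;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[2] = %f\n", data[2]);   /* expect 4.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}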

On Mon, Aug 21, 2023 at 7:21 AM Enrico Weigelt, metux IT consult
<info@...ux.net> wrote:
>
> On 27.04.23 12:51, Raj J Putari wrote:
>
> > I'd write it, but I'm an amateur and I don't have time to read the kernel
> > source and experiment. We're talking about Nvidia and AMD video cards
> > assisting in processing heavy data.
>
> Obviously not with Nvidia (except for some old, already reverse-engineered
> GPUs), since Nvidia is doing all it can to hide the specs needed to write
> drivers from us.
>
> Forget about Nvidia. Never ever waste a single penny on that.
>
> > Let's say you're compiling a kernel: you could write optimizations into
> > the kernel through a CUDA module and offload CPU data directly to the
> > GPU using OpenCL or CUDA or whatever AMD supplies.
>
> CUDA, OpenCL, etc. are *userland* *library* APIs. They don't work inside
> the kernel. One would have to write something similar *inside* the kernel
> (which works very differently from userland). Also consider that the most
> complex parts (e.g. building the command streams) are done in userland
> (e.g. Mesa's pipe drivers); the kernel is only responsible for lower-level
> things like buffer management, modesetting, etc.
>
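(As a concrete illustration of where that boundary sits: the kernel side is
exposed through DRM render nodes and ioctls. A minimal sketch, assuming a
loaded DRM driver and a /dev/dri/renderD128 node, which may be named
differently on your system; it only queries the driver name via the generic
DRM_IOCTL_VERSION, since everything beyond that is driver-specific and
normally driven by Mesa.)

/* query the DRM driver behind a render node; buffer management, modesetting
 * and command submission all go through ioctls on nodes like this
 * (the header may be <libdrm/drm.h> instead of <drm/drm.h> on some distros) */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

int main(void)
{
    int fd = open("/dev/dri/renderD128", O_RDWR);   /* node name may differ */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char name[64] = { 0 };
    struct drm_version ver;
    memset(&ver, 0, sizeof(ver));
    ver.name = name;
    ver.name_len = sizeof(name) - 1;

    if (ioctl(fd, DRM_IOCTL_VERSION, &ver) == 0)
        printf("driver: %s %d.%d.%d\n", name, ver.version_major,
               ver.version_minor, ver.version_patchlevel);

    close(fd);
    return 0;
}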
>
> If you want to go that route, you'd have to create something like Mesa's
> Gallium inside the kernel. Besides this being a pretty huge task (you'd
> have to reimplement lots of drivers), you'd also have to find a way to get
> good performance when calling from userland (note that syscalls, even
> ioctls, are much more expensive than plain library function calls inside
> the same process). It probably comes down to using some bytecode (TGSI?)
> and loading it in a way somewhat similar to BPF.
>
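(The BPF comparison in concrete terms: today userland hands the kernel
verified bytecode through the bpf(2) syscall, roughly as in the sketch
below, which loads a trivial "r0 = 0; exit" program. A hypothetical
in-kernel GPU compute path could expose its bytecode loader in a similar
way; the sketch uses only the existing BPF UAPI and is an analogy, not a
proposal for what a GPU interface would look like.)

/* load a trivial eBPF program ("r0 = 0; exit") via the bpf(2) syscall;
 * this is the "hand the kernel some bytecode" model referred to above
 * (needs CAP_BPF/CAP_SYS_ADMIN or unprivileged BPF enabled) */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

int main(void)
{
    struct bpf_insn prog[] = {
        { .code = BPF_ALU64 | BPF_MOV | BPF_K, .dst_reg = BPF_REG_0 }, /* r0 = 0 */
        { .code = BPF_JMP | BPF_EXIT },                                /* exit  */
    };

    union bpf_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
    attr.insns     = (uint64_t)(uintptr_t)prog;
    attr.insn_cnt  = sizeof(prog) / sizeof(prog[0]);
    attr.license   = (uint64_t)(uintptr_t)"GPL";

    int fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
    if (fd < 0) {
        perror("BPF_PROG_LOAD");
        return 1;
    }
    printf("bytecode accepted by the kernel, prog fd = %d\n", fd);
    close(fd);
    return 0;
}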
>
> Assuming that's really up and running one day, it could indeed solve
> other problems, e.g. clean separation between containers and hosts
> (for now, containers still need the userland parts of the GPU drivers
> for the corresponding host hardware).
>
> But be warned: this is a huge endeavour, with *a lot* of work to do, and
> it is hard to get right.
>
>
> OTOH, I'm still sceptical whether there are many practical use cases for
> the kernel *itself* using GPUs. What exactly do you have in mind here?
>
>
> --mtx
>
> --
> ---
> Note: unencrypted e-mails can easily be intercepted and manipulated!
> For confidential communication, please send your GPG/PGP key.
> ---
> Enrico Weigelt, metux IT consult
> Free software and Linux embedded engineering
> info@...ux.net -- +49-151-27565287
