Message-ID: <20140716152528.GZ15237@phenom.ffwll.local>
Date: Wed, 16 Jul 2014 17:25:28 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Jerome Glisse <j.glisse@...il.com>
Cc: Daniel Vetter <daniel@...ll.ch>,
"Bridgman, John" <John.Bridgman@....com>,
"Lewycky, Andrew" <Andrew.Lewycky@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"Deucher, Alexander" <Alexander.Deucher@....com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>
Subject: Re: [PATCH 00/83] AMD HSA kernel driver
On Wed, Jul 16, 2014 at 10:52:56AM -0400, Jerome Glisse wrote:
> On Wed, Jul 16, 2014 at 10:27:42AM +0200, Daniel Vetter wrote:
> > On Tue, Jul 15, 2014 at 8:04 PM, Jerome Glisse <j.glisse@...il.com> wrote:
> > >> Yes although it can be skipped on most systems. We figured that topology
> > >> needed to cover everything that would be handled by a single OS image, so
> > >> in a NUMA system it would need to cover all the CPUs. I think that is still
> > >> the right scope, do you agree?
> > >
> > > I think it is a bad idea to duplicate the CPU information. I would rather
> > > have each device report its affinity against each CPU, and for CPUs just
> > > keep the existing kernel API that exposes this through sysfs, IIRC.
> >
> > It's all there already if we fix up the HSA dev-node model to expose
> > one dev node per underlying device instead of one for everything:
> > - CPUs already expose the full NUMA topology in sysfs
> > - PCI devices have a numa_node file in sysfs to expose which node
> >   they are attached to
> > - we can easily add similar attributes for platform devices on ARM
> >   SoCs without PCI.
> >
> > Then the only thing userspace needs to do is follow the device link in
> > the hsa instance node in sysfs, and we have all the information
> > exposed - iff we expose one HSA driver instance to userspace per
> > physical device (which is the normal Linux device driver model
> > anyway).
> >
> > I don't see a need to add anything HSA-specific here at all (well,
> > maybe some description of the cache architecture on the HSA device
> > itself; the spec seems to have provisions for that).
> > -Daniel
>
> What is HSA-specific is the userspace command queue, in the form of a
> common ring-buffer execution queue where everything shares a common
> packet format. So yes, I see a reason for an HSA class that provides
> common ioctls through one dev file per device. Note that I am not a fan
> of userspace command queues, given that the Linux ioctl overhead is
> small and having the kernel do the submission would allow a practically
> "infinite" number of userspace contexts, while right now the limit is
> DOORBELL_APERTURE_SIZE/PAGE_SIZE.
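
(To put rough numbers on that limit - illustrative values only, since the
actual aperture size is hardware-specific: with a 4 KiB PAGE_SIZE, an
8 MiB doorbell aperture caps the system at 8 MiB / 4 KiB = 2048 userspace
queues total.)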
>
> No, CPUs should not be included, and neither should the NUMA topology
> of devices. And yes, all NUMA topology should use the existing kernel
> interfaces. I do however understand that a second, GPU-specific
> topology might make sense, i.e. if you have specialized links between
> some discrete GPUs.
>
> So if Intel wants to join the HSA Foundation, fine, but unless you are
> ready to implement what is needed I do not see the value of forcing
> your wishes on another group that is trying to standardize something.
You're mixing up my replies ;-) This was really just a comment on the
proposed HSA interfaces for exposing the topology - we already have all
of this exposed in sysfs for CPUs and PCI devices, so exposing it again
through an HSA-specific interface doesn't make much sense imo.
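
For illustration, here is a minimal (untested) sketch of what userspace
would do; the PCI address below is made up, and numa_node reads -1 when
the kernel has no affinity information for the device:

  #include <stdio.h>

  int main(void)
  {
	/* Illustrative sysfs path - substitute the real device's address. */
	const char *path = "/sys/bus/pci/devices/0000:01:00.0/numa_node";
	FILE *f = fopen(path, "r");
	int node;

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%d", &node) == 1)
		printf("device is on NUMA node %d\n", node);
	fclose(f);
	return 0;
  }

Following the device link from the hsa instance node in sysfs then gets
userspace everything else without any new interface.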
What Intel does or does not do is completely irrelevant to my comment,
i.e. I've written the above with my drm hacker hat on, not with my Intel
hat on.
Cheers, Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch