Message-ID: <1354115609.1479.91.camel@tellur>
Date:	Wed, 28 Nov 2012 16:13:29 +0100
From:	Lucas Stach <dev@...xeye.de>
To:	Terje Bergström <tbergstrom@...dia.com>
Cc:	Dave Airlie <airlied@...il.com>,
	Thierry Reding <thierry.reding@...onic-design.de>,
	"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
	"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Arto Merilainen <amerilainen@...dia.com>
Subject: Re: [RFC v2 8/8] drm: tegra: Add gr2d device

On Wednesday, 28.11.2012 at 16:45 +0200, Terje Bergström wrote:
> On 28.11.2012 16:06, Lucas Stach wrote:
> > Why do even need/use dma-buf for this use case? This is all one DRM
> > device, even if we separate host1x and gr2d as implementation modules.
> 
> I didn't want to implement a dependency on drm gem objects in nvhost,
> but we have thought about doing that. dma-buf brings quite a lot of
> overhead, so implementing support for gem buffers would make the
> sequence a bit leaner.
> 
> nvhost already has infra to support multiple memory managers.
> 
To be honest I still don't grok all of this, but nonetheless I'll try
my best.

Anyway, shouldn't nvhost be something like an allocator used by host1x
clients, with the added ability to do relocs/binding of buffers into
client address spaces, refcounting of buffers, and import/export of
dma-bufs? In that case nvhost objects would just be used to back DRM
GEM objects. If using GEM objects in the DRM driver introduces any
cross-dependencies with nvhost, you should take a step back and ask
yourself whether the current design is the right way to go.
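To make this concrete, here is roughly what I have in mind - just a
minimal sketch, where struct nvhost_bo and the nvhost_bo_alloc()/
nvhost_bo_put() entry points are hypothetical stand-ins for whatever
allocator interface nvhost ends up exposing; only the drm_gem_* and
kzalloc/kfree calls are existing kernel interfaces:

#include <linux/err.h>
#include <linux/slab.h>
#include <drm/drmP.h>

/* hypothetical nvhost allocator interface, just for illustration */
struct nvhost_bo;
struct nvhost_bo *nvhost_bo_alloc(size_t size);
void nvhost_bo_put(struct nvhost_bo *bo);

/* DRM-side object: a GEM object backed by an nvhost allocation */
struct tegra_bo {
	struct drm_gem_object gem;	/* what DRM and userspace deal with */
	struct nvhost_bo *nvhost;	/* backing allocation: refcounted,
					 * relocatable/bindable into host1x
					 * client address spaces, dma-buf
					 * import/export capable */
};

static struct tegra_bo *tegra_bo_create(struct drm_device *drm, size_t size)
{
	struct tegra_bo *bo;
	int err;

	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
	if (!bo)
		return ERR_PTR(-ENOMEM);

	bo->nvhost = nvhost_bo_alloc(size);		/* hypothetical */
	if (IS_ERR(bo->nvhost)) {
		err = PTR_ERR(bo->nvhost);
		goto err_free;
	}

	err = drm_gem_object_init(drm, &bo->gem, size);
	if (err)
		goto err_put;

	return bo;

err_put:
	nvhost_bo_put(bo->nvhost);			/* hypothetical */
err_free:
	kfree(bo);
	return ERR_PTR(err);
}

That way the DRM driver only ever deals with GEM objects and the
allocator details stay behind the nvhost interface.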

> > So the standard way of doing this is:
> > 1. create gem object for pushbuffer
> > 2. create fake mmap offset for gem obj
> > 3. map pushbuf using the fake offset on the drm device
> > 4. at submit time zap the mapping
> > 
> > You need this logic anyway, as normally we don't rely on userspace to
> > sync gpu and cpu, but use the kernel to handle the concurrency issues.
> 
> Taking a step back - 2D streams are actually very short, on the order
> of <100 bytes. Just copying them to kernel space would actually be
> faster than doing MMU operations.
> 
Is this always the case because of the limited abilities of the gr2d
engine, or is it just your current driver flushing the stream very
often?
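
Just to make sure we are talking about the same scheme: the zap-on-submit
sequence quoted above looks roughly like this in a driver. This is only a
rough sketch - the pushbuf structure and function names are made up, while
drm_gem_create_mmap_offset() and unmap_mapping_range() are the helpers
other GEM drivers use today; step 3 is simply the normal mmap() on the DRM
device using the fake offset:

#include <drm/drmP.h>

struct pushbuf {
	struct drm_gem_object *gem;
	u64 mmap_offset;		/* fake offset handed to userspace */
};

static int pushbuf_create_mmap_offset(struct pushbuf *pb)
{
	int err;

	/* steps 1+2: the GEM object already exists, now create the fake
	 * offset userspace passes to mmap() on the DRM device */
	err = drm_gem_create_mmap_offset(pb->gem);
	if (err)
		return err;

	pb->mmap_offset = (u64)pb->gem->map_list.hash.key << PAGE_SHIFT;
	return 0;
}

static void pushbuf_zap_mapping(struct drm_device *drm, struct pushbuf *pb)
{
	/* step 4: at submit time zap the userspace mapping, so any later
	 * CPU access faults and the kernel can sort out the CPU/GPU
	 * concurrency before setting up a new mapping */
	unmap_mapping_range(drm->dev_mapping, pb->mmap_offset,
			    pb->gem->size, 1);
}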

> I think for Tegra20 and the non-IOMMU case, we just need to copy the
> command stream to a kernel buffer. In the Tegra30 IOMMU case references
> to user space buffers are fine, as tampering with the streams doesn't
> have any ill effects.
> 
In which way is it a good design choice to let the CPU happily alter
_any_ buffer the GPU is busy processing without getting the concurrency
right?
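
The Tegra20 copy path at least avoids that problem, because the engine
never reads memory userspace can still write to. Roughly - and this is
only a sketch, the function and the size limit are invented, only
copy_from_user()/kmalloc() are real interfaces:

#include <linux/types.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

#define GR2D_MAX_WORDS	256	/* invented limit; streams are <100 bytes anyway */

static u32 *gr2d_copy_stream(const void __user *commands, u32 num_words)
{
	u32 *cmds;

	if (num_words == 0 || num_words > GR2D_MAX_WORDS)
		return ERR_PTR(-EINVAL);

	cmds = kmalloc(num_words * sizeof(u32), GFP_KERNEL);
	if (!cmds)
		return ERR_PTR(-ENOMEM);

	/* from here on the engine only sees the kernel-owned copy;
	 * userspace scribbling over its own buffer has no effect */
	if (copy_from_user(cmds, commands, num_words * sizeof(u32))) {
		kfree(cmds);
		return ERR_PTR(-EFAULT);
	}

	return cmds;
}

The Tegra30 direct-reference path has no equivalent point where the
kernel takes ownership of the stream, which is exactly what worries me.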

Please keep in mind that the interfaces you are now trying to introduce
have to be supported for a virtually unlimited time. You might not be
able to scrub your mistakes later on without going through a lot of
hassle.

To avoid a lot of those mistakes it might be a good idea to look at how
other drivers use the DRM infrastructure and to only depart from those
proven schemes where it's really necessary or worthwhile.

Regards,
Lucas


