Date:	Sat, 10 Nov 2012 20:04:26 +0200
From:	Terje Bergström <tbergstrom@...dia.com>
To:	Thierry Reding <thierry.reding@...onic-design.de>
CC:	Rob Clark <robdclark@...il.com>, Dave Airlie <airlied@...hat.com>,
	"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
	"devicetree-discuss@...ts.ozlabs.org" 
	<devicetree-discuss@...ts.ozlabs.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>
Subject: Re: [PATCH 1/2] drm: Add NVIDIA Tegra20 support

On 09.11.2012 23:03, Thierry Reding wrote:
> On Fri, Nov 09, 2012 at 10:26:27AM -0600, Rob Clark wrote:
>> hmm, are these values defined by the hw? They look like register 
>> offsets into the DC block? 
> I don't think they are defined by the hardware. From what I gather these
> can arbitrarily be assigned by software. If things actually work the way
> I think they do, then eventually these values could be allocated by the
> host1x_register_client() function and stored within the host1x_client
> structure, so that each HW block can program them into the corresponding
> register.
These are host1x sync points. Sync points are used to synchronize work
between host1x, host1x client units (like DC, 2D, EPP, etc.) and the
CPU. The Tegra2 TRM now contains chapters for HOST1X, 2D and EPP, so it
has some more details.
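
As a rough, self-contained model of the concept (all names below are
made up for illustration; this is not the driver API): a sync point is
essentially a counter that a client unit increments when it finishes a
piece of work, and a waiter compares that counter against a threshold.

/* Minimal model of a sync point: a 32-bit counter that client units
 * increment and that waiters compare against a threshold.  All names
 * here are invented for illustration; this is not the real API. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct syncpt {
	uint32_t value;		/* current counter value */
};

/* Wrap-safe "has the counter reached the threshold?" check. */
static bool syncpt_is_expired(const struct syncpt *sp, uint32_t thresh)
{
	return (int32_t)(sp->value - thresh) >= 0;
}

/* A client unit (e.g. 2D) would push an increment at the end of its
 * command stream; here we just bump the counter directly. */
static void syncpt_incr(struct syncpt *sp)
{
	sp->value++;
}

int main(void)
{
	struct syncpt sp = { .value = 0 };
	uint32_t fence = sp.value + 1;	/* "signal after the next job" */

	syncpt_incr(&sp);		/* pretend the job completed */

	printf("fence reached: %s\n",
	       syncpt_is_expired(&sp, fence) ? "yes" : "no");
	return 0;
}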

The assignment of sync points is a software policy. Depending on the
programming model of the client unit, one or more sync points are used
for each. For example, each DC has one sync point assigned to vblank
and one for each DC window. For 2D, we'd have one sync point, and a
choice of using either the same or a different sync point for EPP.
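
Purely as an illustration of such a static map (the IDs below are
invented and do not match the real Tegra20 assignment, which is in the
TRM), it could look something like:

/* Hypothetical static sync point assignment -- the IDs below are
 * made up for illustration and do not match the real Tegra20 map. */
enum host1x_syncpt_id {
	SYNCPT_DISP0_VBLANK = 0,	/* DC A: vblank */
	SYNCPT_DISP0_WIN_A,		/* DC A: window completion */
	SYNCPT_DISP0_WIN_B,
	SYNCPT_DISP0_WIN_C,
	SYNCPT_DISP1_VBLANK,		/* DC B: vblank */
	SYNCPT_DISP1_WIN_A,
	SYNCPT_DISP1_WIN_B,
	SYNCPT_DISP1_WIN_C,
	SYNCPT_2D,			/* 2D engine */
	SYNCPT_EPP,			/* or reuse SYNCPT_2D if shared */
	SYNCPT_NB_RESERVED,		/* first ID free for dynamic use */
};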

We could either assign sync point registers by hard-coding them, or
assign them dynamically, one per client unit, with possibly an
additional one depending on the programming model. Sync points are a
scarce resource, so we have so far preferred static assignment to catch
overallocation as early as possible.
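
If we did go the dynamic route Thierry suggested (allocate an ID in
host1x_register_client() and stash it in struct host1x_client), the
allocator could still fail loudly on exhaustion. A stand-alone sketch
of that idea, assuming a pool of 32 sync points and with all structure
and function names invented for the example:

/* Sketch of dynamic sync point allocation with early failure on
 * overallocation.  Pool size and all names are assumptions for the
 * example, not the real host1x interface. */
#include <stdint.h>
#include <stdio.h>

#define NUM_SYNCPTS 32		/* assumed per-SoC pool size */

static uint32_t syncpt_bitmap;	/* bit N set => sync point N in use */

/* Returns an unused sync point ID, or -1 if the pool is exhausted. */
static int syncpt_alloc(void)
{
	for (int i = 0; i < NUM_SYNCPTS; i++) {
		if (!(syncpt_bitmap & (1u << i))) {
			syncpt_bitmap |= 1u << i;
			return i;
		}
	}
	return -1;	/* overallocation: fail client registration */
}

static void syncpt_free(int id)
{
	if (id >= 0 && id < NUM_SYNCPTS)
		syncpt_bitmap &= ~(1u << id);
}

int main(void)
{
	/* A host1x_register_client()-style path would call
	 * syncpt_alloc() once per client unit and store the ID in the
	 * client structure. */
	int dc_vblank = syncpt_alloc();
	int dc_win_a  = syncpt_alloc();
	int gr2d      = syncpt_alloc();

	printf("allocated: %d %d %d\n", dc_vblank, dc_win_a, gr2d);

	syncpt_free(gr2d);
	syncpt_free(dc_win_a);
	syncpt_free(dc_vblank);
	return 0;
}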

Terje
