Message-ID: <20081020101011.GA30037@elte.hu>
Date:	Mon, 20 Oct 2008 12:10:11 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Keith Packard <keithp@...thp.com>,
	Jesse Barnes <jbarnes@...tuousgeek.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Nick Piggin <nickpiggin@...oo.com.au>,
	Dave Airlie <airlied@...ux.ie>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	dri-devel@...ts.sf.net, Andrew Morton <akpm@...ux-foundation.org>,
	Yinghai Lu <yinghai@...nel.org>
Subject: Re: io resources and cached mappings (was: [git pull] drm patches
	for 2.6.27-rc1)


* Ingo Molnar <mingo@...e.hu> wrote:

> Very nice!
> 
> I think we need a somewhat different abstraction though.
> 
> Firstly, regarding drivers/gpu/drm/i915/io_reserve.h, that needs to 
> move to generic code.
> 
> Secondly, wouldn't the right abstraction be to attach this 
> functionality to 'struct resource'? [or at least create a second 
> struct that embeds struct resource]
> 
> This abstraction is definitely not a PCI thing, and not a 
> detached-from-everything thing - it's an IO resource thing. We could 
> make it a property of struct resource:
> 
> struct resource {
>         resource_size_t start;
>         resource_size_t end;
>         const char *name;
>         unsigned long flags;
>         struct resource *parent, *sibling, *child;
> +       void *mapping;
> };
> 
> The APIs would be:
> 
>   int   io_resource_init_mapping(struct resource *res);
>  void   io_resource_free_mapping(struct resource *res);
>  void * io_resource_map(struct resource *res, pfn_t pfn, unsigned long offset);
>  void   io_resource_unmap(struct resource *res, void *kaddr);
> 
> Note how simple and consistent it all gets: IO resources already know 
> their physical location and their size limits. Being able to cache an 
> ioremap in a mapping [and being able to use atomic kmaps on 32-bit] is 
> a relatively simple and natural extension to the concept.
> 
> I think that would be quite acceptable - and the APIs could just 
> transparently work on it. This would also allow the PCI code to 
> automatically unmap any cached mappings from resources, when the 
> driver deinitializes.
> 
> Linus, Jesse, what do you think?
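
(For illustration, driver usage of the proposed calls might look like 
the sketch below - hypothetical code following the signatures quoted 
above; example_poke and all four io_resource_*() calls are made up, 
none of this exists in the tree yet:)

	static int example_poke(struct resource *res, pfn_t pfn,
				unsigned long offset, u32 val)
	{
		void *kaddr;
		int ret;

		/* once, at driver init - sets up res->mapping: */
		ret = io_resource_init_mapping(res);
		if (ret)
			return ret;

		/* per access: map a piece of the resource, poke it, unmap: */
		kaddr = io_resource_map(res, pfn, offset);
		writel(val, kaddr);
		io_resource_unmap(res, kaddr);

		/* once, at driver teardown: */
		io_resource_free_mapping(res);
		return 0;
	}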

The downside would be that we'd attach a runtime property to the 
IORESOURCE_MEM resource tree - which is a fairly static thing right now, 
after the point where we finalize the resource tree (modulo 
device/bridge hotplug variances).

Another downside is that we might not want to map the whole thing, 
i.e. the structure of the IO memory space that drivers want to map 
might differ from how it looks in the resource tree.

The concept of introducing resource->mapping does not feel _that_ wrong 
though, and it has a couple of upsides: it could act as a natural 
mapping-type serializer, for example, and drivers wouldn't have to 
explicitly manage ioremap results - they could just use the resource 
descriptor directly and "read" and "write" to/from it. readl/writel 
could be extended to operate on the resource descriptor transparently, 
getting rid of a source of resource mismatches and overmapping. We 
could even safety-check IO space accesses this way.
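
(A hypothetical sketch of such resource-based accessors - 
resource_readl/resource_writel are made-up names, and this assumes 
res->mapping caches the ioremap result:)

	static inline u32 resource_readl(struct resource *res,
					 unsigned long offset)
	{
		/* the safety check mentioned above: */
		WARN_ON(offset + sizeof(u32) > resource_size(res));
		return readl((void __iomem *)res->mapping + offset);
	}

	static inline void resource_writel(u32 val, struct resource *res,
					   unsigned long offset)
	{
		WARN_ON(offset + sizeof(u32) > resource_size(res));
		writel(val, (void __iomem *)res->mapping + offset);
	}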

And we'd get rid of the complication that your APIs introduced: the 
need for a separate io_mapping type, etc.

Dunno, I might be missing some obvious downside that explains why this 
wasn't done like that until now.

	Ingo
