Date:	Fri, 15 Feb 2008 08:55:38 -0800
From:	Dave Hansen <haveblue@...ibm.com>
To:	Christoph Raisch <RAISCH@...ibm.com>
Cc:	apw <apw@...ibm.com>, Greg KH <greg@...ah.com>,
	Jan-Bernd Themann <THEMANN@...ibm.com>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	linuxppc-dev@...abs.org, netdev <netdev@...r.kernel.org>,
	ossthema@...ux.vnet.ibm.com, Badari Pulavarty <pbadari@...ibm.com>,
	Thomas Q Klein <TKLEIN@...ibm.com>, tklein@...ux.ibm.com
Subject: Re: [PATCH] drivers/base: export gpl (un)register_memory_notifier

On Fri, 2008-02-15 at 14:22 +0100, Christoph Raisch wrote:
> A translation from kernel to ehea_bmap space should be fast and
> predictable (ruling out hashes).
> If a driver doesn't know anything else about the mapping structure,
> the normal solution in the kernel for this type of problem is a
> multi-level lookup table like pgd->pud->pmd->pte.
> This doesn't sound right to be implemented in a device driver.
> 
> We didn't see from the existing code that such a mapping to a
> contiguous space already exists.  Maybe we've missed it.

I've been thinking about that, and I don't think you really *need* to
keep a comprehensive map like that.  

When the memory is in a particular configuration (a range of memory
present along with a unique set of holes), you get a unique ehea_bmap
configuration.  That layout is completely predictable.

So, if at any time you want to figure out what the ehea_bmap address for
a particular *Linux* virtual address is, you just need to pretend that
you're creating the entire ehea_bmap: use the same algorithm, figure out
where you would have placed things, and use that result.
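
In other words, something like this (untested sketch; section_present()
is a made-up stand-in for however the driver tests whether a section is
present):

static unsigned long ehea_index_for(unsigned long target_section)
{
	unsigned long section, ehea_idx = 0;

	/*
	 * Replay the bmap creation: walk the sections in order and
	 * count how many present ones come before the target.  That
	 * count is the slot the real bmap would have assigned.
	 */
	for (section = 0; section < target_section; section++)
		if (section_present(section))
			ehea_idx++;

	return ehea_idx;
}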

Now, that's going to be a slow, crappy linear search (but maybe not as
slow as recreating the silly thing).  So, you might eventually run into
some scalability problems with a lot of packets going around.  But, I'd
be curious whether you actually do in practice.

The other idea is that you create a mapping that is precisely 1:1 with
kernel memory.  Let's say you have two sections present, 0 and 100.  You
have a high_section_index of 100, and you vmalloc() a 101-entry array
(indices 0 through 100).

You need to create a *CONTIGUOUS* ehea map?  Create one like this:

EHEA_VADDR->Linux Section
0->0
1->0
2->0
3->0
...
100->100

It's contiguous.  Each entry points to a valid Linux memory address.
You can also determine in O(1) which EHEA address a given Linux address
maps to.  You just have a couple of duplicate entries.
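
A sketch of what I mean (untested; ehea_map and section_present() are
made-up names, not the real driver interfaces):

static unsigned long *ehea_map;	/* one entry per section index */

static int ehea_build_map(unsigned long high_section_index)
{
	unsigned long i;

	ehea_map = vmalloc((high_section_index + 1) * sizeof(*ehea_map));
	if (!ehea_map)
		return -ENOMEM;

	for (i = 0; i <= high_section_index; i++) {
		if (section_present(i))
			ehea_map[i] = i;	/* present sections map to themselves */
		else
			ehea_map[i] = 0;	/* holes duplicate a known-valid section */
	}
	return 0;
}

Since every present section maps to its own index, translating a valid
Linux address into EHEA space is the identity at section granularity --
no lookup loop at all.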

-- Dave

