Message-Id: <200802181100.12995.ossthema@de.ibm.com>
Date:	Mon, 18 Feb 2008 11:00:11 +0100
From:	Jan-Bernd Themann <ossthema@...ibm.com>
To:	linuxppc-dev@...abs.org
Cc:	Dave Hansen <haveblue@...ibm.com>,
	Christoph Raisch <RAISCH@...ibm.com>,
	Thomas Q Klein <TKLEIN@...ibm.com>,
	ossthema@...ux.vnet.ibm.com,
	Jan-Bernd Themann <THEMANN@...ibm.com>,
	Greg KH <greg@...ah.com>, apw <apw@...ibm.com>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Badari Pulavarty <pbadari@...ibm.com>,
	netdev <netdev@...r.kernel.org>, tklein@...ux.ibm.com
Subject: Re: [PATCH] drivers/base: export gpl (un)register_memory_notifier

Switching to a proper mail client...

Dave Hansen <haveblue@...ibm.com> wrote on 15.02.2008 17:55:38:

> I've been thinking about that, and I don't think you really *need* to
> keep a comprehensive map like that. 
> 
> When the memory is in a particular configuration (range of memory
> present along with unique set of holes) you get a unique ehea_bmap
> configuration.  That layout is completely predictable.
> 
> So, if at any time you want to figure out what the ehea_bmap address for
> a particular *Linux* virtual address is, you just need to pretend that
> you're creating the entire ehea_bmap, use the same algorithm and figure
> out how you would have placed things, and use that result.
> 
> Now, that's going to be a slow, crappy linear search (but maybe not as
> slow as recreating the silly thing).  So, you might eventually run into
> some scalability problems with a lot of packets going around.  But, I'd
> be curious if you do in practice.

Up to 14 address translations per packet (sg_list) might be required on
the transmit side; on the receive side it is only 1. Most packets need
only very few translations (1, sometimes a few more). However, at more
than 700,000 packets per second this approach does not seem reasonable
from a performance perspective when memory is fragmented as you
described.
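
Just to make the cost concrete, here is a rough sketch (not actual
driver code; ehea_section_present(), EHEA_SECTIONSHIFT and
EHEA_SECTIONSIZE are made-up names) of the replay-the-algorithm lookup
you describe, which is O(number of sections) per translation:

static u64 ehea_calc_vaddr(void *kaddr)
{
	unsigned long section = __pa(kaddr) >> EHEA_SECTIONSHIFT;
	unsigned long i;
	u64 vaddr = 0;

	/*
	 * Replay the deterministic bmap layout: count the present
	 * sections in front of this one to find its packed offset.
	 */
	for (i = 0; i < section; i++)
		if (ehea_section_present(i))
			vaddr += EHEA_SECTIONSIZE;

	return vaddr + (__pa(kaddr) & (EHEA_SECTIONSIZE - 1));
}

With up to 14 such walks per transmitted packet at more than 700,000
packets per second, the linear scan adds up quickly.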

> 
> The other idea is that you create a mapping that is precisely 1:1 with
> kernel memory.  Let's say you have two sections present, 0 and 100.  You
> have a high_section_index of 100, and you vmalloc() a 100 entry array.
> 
> You need to create a *CONTIGUOUS* ehea map?  Create one like this:
> 
> EHEA_VADDR->Linux Section
> 0->0
> 1->0
> 2->0
> 3->0
> ...
> 100->100
> 
> It's contiguous.  Each area points to a valid Linux memory address.
> It's also discernable in O(1) to what EHEA address a given Linux address
> is mapped.  You just have a couple of duplicate entries. 
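
If I understand the construction correctly, it would look roughly like
this (sketch only; highest_section_index, section_present() and
section_to_phys() are made-up helpers, vmalloc() from
<linux/vmalloc.h>):

static u64 *ehea_map;

static int ehea_build_map(void)
{
	unsigned long i;
	u64 last = 0;

	ehea_map = vmalloc((highest_section_index + 1) *
			   sizeof(*ehea_map));
	if (!ehea_map)
		return -ENOMEM;

	for (i = 0; i <= highest_section_index; i++) {
		if (section_present(i))
			last = section_to_phys(i);
		/*
		 * Holes duplicate the previous valid entry, so the
		 * EHEA range stays contiguous and lookup is a single
		 * array index.
		 */
		ehea_map[i] = last;
	}
	return 0;
}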

This has a serious issue with a constraint I mentioned in the previous mail:

"- MRs can have a maximum size of the memory available under linux"

It does not meet the requirement that the memory region must not be
larger than the memory available to that partition. The "create MR"
H_CALL will fail (we tried this and discussed it with FW development).


Regards,
Jan-Bernd & Christoph