Message-Id: <1203531242.15017.20.camel@nimitz.home.sr71.net>
Date:	Wed, 20 Feb 2008 10:14:02 -0800
From:	Dave Hansen <haveblue@...ibm.com>
To:	Jan-Bernd Themann <ossthema@...ibm.com>
Cc:	linuxppc-dev@...abs.org, Christoph Raisch <RAISCH@...ibm.com>,
	Thomas Q Klein <TKLEIN@...ibm.com>,
	ossthema@...ux.vnet.ibm.com,
	Jan-Bernd Themann <THEMANN@...ibm.com>,
	Greg KH <greg@...ah.com>, apw <apw@...ibm.com>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Badari Pulavarty <pbadari@...ibm.com>,
	netdev <netdev@...r.kernel.org>, tklein@...ux.ibm.com
Subject: Re: [PATCH] drivers/base: export gpl (un)register_memory_notifier

On Mon, 2008-02-18 at 11:00 +0100, Jan-Bernd Themann wrote:
> Dave Hansen <haveblue@...ibm.com> wrote on 15.02.2008 17:55:38:
> 
> > I've been thinking about that, and I don't think you really *need* to
> > keep a comprehensive map like that. 
> > 
> > When the memory is in a particular configuration (range of memory
> > present along with unique set of holes) you get a unique ehea_bmap
> > configuration.  That layout is completely predictable.
> > 
> > So, if at any time you want to figure out what the ehea_bmap address for
> > a particular *Linux* virtual address is, you just need to pretend that
> > you're creating the entire ehea_bmap, use the same algorithm and figure
> > out how you would have placed things, and use that result.
> > 
> > Now, that's going to be a slow, crappy linear search (but maybe not as
> > slow as recreating the silly thing).  So, you might eventually run into
> > some scalability problems with a lot of packets going around.  But, I'd
> > be curious if you do in practice.
> 
> Up to 14 address translations per packet (sg_list) might be required on 
> the transmit side. On the receive side it is only 1. Most packets require 
> only a few translations (1, or sometimes more). However, with more than 
> 700,000 packets per second this approach does not seem reasonable from a 
> performance perspective when memory is fragmented as you described.

OK, but let's see the data.  *SHOW* me that it's slow. If the algorithm
works, then perhaps we can simply speed it up with a little caching and
*MUCH* less memory overhead.

-- Dave

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
