Date:	Fri, 26 Jan 2007 11:58:18 -0800 (PST)
From:	Christoph Lameter <clameter@....com>
To:	Andrew Morton <akpm@...l.org>
cc:	Mel Gorman <mel@....ul.ie>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Christoph Lameter <clameter@...r.sgi.com>
Subject: Re: [PATCH 0/8] Create ZONE_MOVABLE to partition memory between
 movable and non-movable pages

On Fri, 26 Jan 2007, Andrew Morton wrote:

> As Mel points out, distros will ship with CONFIG_ZONE_DMA=y, so the number
> of machines which will actually benefit from this change is really small. 
> And the benefit to those few machines will also, I suspect, be small.
> 
> > > - We kicked around some quite different ways of implementing the same
> > >   things, but nothing came of it.  iirc, one was to remove the hard-coded
> > >   zones altogether and rework all the MM to operate in terms of
> > > 
> > > 	for (idx = 0; idx < NUMBER_OF_ZONES; idx++)
> > > 		...
> > 
> > Hmmm.. How would that be simpler?
> 
> Replace a sprinkle of open-coded ifdefs with a regular code sequence which
> everyone uses.  Pretty obvious, I'd thought.

We do use such loops in many places. However, things like array
initialization and special casing cannot use a loop, and I am not sure
what we could change there. The hard coding is necessary because each
zone currently has invariant characteristics that we need to consider.
Reducing the number of zones reduces the amount of special casing in
the VM that has to be handled at run time, and that special casing is
a potential source of trouble.
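
To make that concrete, here is a simplified sketch (not the exact code
from mm/page_alloc.c): run-time walks over the zones of a node can be
written as a generic loop, but compile-time constructs such as static
array initializers still have to name every zone and its config option
explicitly:

	#include <linux/mmzone.h>

	/* run time: a generic walk over the zones of a node */
	static void walk_node_zones(pg_data_t *pgdat)
	{
		int i;

		for (i = 0; i < MAX_NR_ZONES; i++) {
			struct zone *zone = &pgdat->node_zones[i];

			if (!populated_zone(zone))
				continue;
			/* ... operate on this zone ... */
		}
	}

	/* compile time: initializers still hard code each zone */
	static char * const zone_names[MAX_NR_ZONES] = {
	#ifdef CONFIG_ZONE_DMA
		"DMA",
	#endif
	#ifdef CONFIG_ZONE_DMA32
		"DMA32",
	#endif
		"Normal",
	#ifdef CONFIG_HIGHMEM
		"HighMem",
	#endif
	};

Every zone that is configured in adds another branch of that second
kind somewhere.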

> Plus it becomes straightforward to extend this from the present four zones
> to a complete 12 zones, which gives us the full set of
> ZONE_DMA20,ZONE_DMA21,...,ZONE_DMA32 for those funny devices.

I just hope we can handle the VM complexity of load balancing etc. that
this will introduce. Each zone also has management overhead and causes
additional cachelines to be touched on many VM operations. Much of that
management overhead becomes unnecessary if we reduce the number of
zones.
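
What "touching additional cachelines" means here, roughly: every
allocation walks the zonelist and reads per-zone state (watermarks,
free page counters) for each candidate zone. A simplified sketch of
that walk, with most details (cpusets, zone reclaim, fallback order)
left out and try_to_allocate_from() standing in for the real per-zone
allocation path:

	#include <linux/mmzone.h>
	#include <linux/gfp.h>

	/* stand-in for the real per-zone allocation (buffered_rmqueue etc.) */
	static struct page *try_to_allocate_from(struct zone *zone,
						 unsigned int order,
						 gfp_t gfp_mask);

	static struct page *walk_zonelist(struct zonelist *zonelist,
					  gfp_t gfp_mask, unsigned int order,
					  int alloc_flags)
	{
		struct zone **z;
		struct page *page;

		for (z = zonelist->zones; *z != NULL; z++) {
			struct zone *zone = *z;

			/*
			 * The watermark check reads this zone's counters,
			 * so every additional zone is another set of
			 * cachelines pulled in on the fast path.
			 */
			if (!zone_watermark_ok(zone, order, zone->pages_low,
					       zone_idx(zone), alloc_flags))
				continue;

			page = try_to_allocate_from(zone, order, gfp_mask);
			if (page)
				return page;
		}
		return NULL;
	}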

> If the only demonstrable benefit is a saving of a few k of text on a small
> number of machines then things are looking very grim, IMO.

The main benefit is a significant simplification of the VM, leading to
more robust and reliable operation and a reduction of the maintenance
headaches that come with the additional zones.

If we introduced the ability to allocate from a range of physical
addresses, the need for DMA zones would go away. That would give
device drivers flexibility in their DMA allocations and at the same
time get rid of special casing in the VM.
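
For illustration, such an interface might look roughly like the
following (purely hypothetical, nothing like it exists in the tree
today): the driver states its addressing limit directly instead of
depending on a dedicated zone.

	#include <linux/mm.h>
	#include <linux/gfp.h>

	/*
	 * Hypothetical: allocate pages whose physical addresses lie in
	 * [0, max_phys].  The VM could satisfy this from any suitable
	 * memory instead of from a fixed ZONE_DMA/ZONE_DMA32.
	 */
	struct page *alloc_pages_range(gfp_t gfp_mask, unsigned int order,
				       u64 max_phys);

	/* example: a device that can only address the low 1GB */
	static struct page *my_driver_alloc_buffer(void)
	{
		return alloc_pages_range(GFP_KERNEL, 0, (1ULL << 30) - 1);
	}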
