Date:	Thu, 26 Jul 2007 11:16:34 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Hugh Dickins <hugh@...itas.com>
cc:	Jens Axboe <jens.axboe@...cle.com>, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Adam Litke <agl@...ibm.com>,
	David Gibson <david@...son.dropbear.id.au>,
	Ken Chen <kenchen@...gle.com>, Bill Irwin <wli@...omorphy.com>
Subject: Re: [PATCH] Check for compound pages in set_page_dirty()

On Thu, 26 Jul 2007, Hugh Dickins wrote:

> > We would need to redirect all of the page state determinations and changes 
> > to the head page anyways. So the memory.c code would have to deal with two 
> > struct page pointers: One to the head where the state is kept and one to 
> > the tail page that contains the actual chunk of data we are interested in. 
> > The tail page pointer is only used for address determinations.
> > 
> > VM functions that manipulate the state of a page (like set_page_dirty) 
> > could rely on only getting page heads.
> 
> Maybe.  Sounds ugly.  "would": so your patches remain just an RFC?

The large blocksize patch currently does not support mmap. I just have 
some patches here that implement some of that using the approach that I 
described.

And without mmap support we never have to use references to tail pages 
anyways. 

We could avoid references to tail pages by not allowing the mapping of 4k 
subsections of larger pages, instead requiring that a compound page always 
be mapped in its entirety. That would keep the necessary changes to 
memory.c minimal but would cause trouble for applications that expect to 
be able to map 4k chunks.

If we want to support transparent use of 2M pages then we need to do this 
anyway, but at that point we can still have a single large "pte" (well, 
really a pmd that we treat as a pte).

If we e.g. require an order 3 page to be mapped in one go then we would 
have to install 8 ptes at once. If we allow mapping of 4k sections then we 
can have mm/memory.c deal with one pte at a time.

