Date:	Mon, 9 Jul 2007 17:59:47 -0700 (PDT)
From:	Christoph Lameter <clameter@....com>
To:	Nick Piggin <npiggin@...e.de>
cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	linux-fsdevel@...r.kernel.org
Subject: Re: [RFC] fsblock

On Tue, 10 Jul 2007, Nick Piggin wrote:

> > Hmmm.... I have not noticed that yet, but then I have not done much
> > work there.
> 
> Notice what?

The bad code for the buffer heads.

> > > - A real "nobh" mode. nobh was created I think mainly to avoid problems
> > >   with buffer_head memory consumption, especially on lowmem machines. It
> > >   is basically a hack (sorry), which requires special code in filesystems,
> > >   and duplication of quite a bit of tricky buffer layer code (and bugs).
> > >   It also doesn't work so well for buffers with non-trivial private data
> > >   (like most journalling ones). fsblock implements this with basically a
> > >   few lines of code, and it should work in situations like ext3.
> > 
> > Hmmm.... That means simply using the page struct is not working...
> 
> I don't understand you. jbd needs to attach private data to each bh, and
> that can stay around for longer than the life of the page in the pagecache.

Right. So just using the page struct alone won't work for these filesystems.
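
For anyone following along, the private state Nick means is jbd's
journal_head, which hangs off the buffer_head roughly like this (a
simplified sketch from memory, not the exact jbd source):

	struct journal_head {
		struct buffer_head *b_bh;	/* back pointer to the bh */
		int b_jcount;			/* reference count */
		unsigned b_jlist;		/* which journal list */
		transaction_t *b_transaction;	/* owning transaction */
	};

	/* jbd keeps it in the bh's private pointer: */
	#define bh2jh(bh)	((struct journal_head *)(bh)->b_private)

Any per-block replacement needs an equivalent hook for filesystem
private state with its own lifetime rules.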

> There are no changes to the filesystem API for large pages (although I
> am adding a couple of helpers to do page-based bitmap ops). And I don't
> want to rely on contiguous memory. Why do you think handling of large
> pages (presumably you mean larger-than-page-size blocks) is strange?

We already have a way to handle large pages: Compound pages.
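
E.g. (a minimal sketch; the order-2 allocation is just an example):

	/* One allocation covering 2^2 = 4 contiguous pages, managed
	 * as a single compound page through its head page: */
	struct page *head = alloc_pages(GFP_KERNEL | __GFP_COMP, 2);

	/* PageCompound() is then true for the head and all tails, so
	 * per-page operations can be redirected to the head. */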

> Conglomerating the constituent pages via the pagecache radix-tree seems
> logical to me.

Meaning the overhead of handling each page still exists? So this scheme
cannot handle a large contiguous block as a single entity?
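
As I understand the scheme (this is a hypothetical illustration, not
fsblock's actual code), assembling a 16k block still means touching its
four 4k pages one by one:

	struct page *pages[4];
	unsigned int i, nr;

	/* 'mapping' and 'index' assumed from context */
	nr = find_get_pages(mapping, index, 4, pages);

	/* each constituent page is looked up, pinned and released
	 * individually: */
	for (i = 0; i < nr; i++)
		page_cache_release(pages[i]);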

