Message-ID: <20070416163057.GH355@devserv.devel.redhat.com>
Date: Mon, 16 Apr 2007 12:30:57 -0400
From: Jakub Jelinek <jakub@...hat.com>
To: Anton Blanchard <anton@...ba.org>
Cc: Rik van Riel <riel@...hat.com>,
Nick Piggin <nickpiggin@...oo.com.au>,
Eric Dumazet <dada1@...mosbay.com>,
linux-kernel <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Ulrich Drepper <drepper@...hat.com>
Subject: Re: [PATCH] make MADV_FREE lazily free memory
On Mon, Apr 16, 2007 at 11:10:39AM -0500, Anton Blanchard wrote:
> > Making the pte clean also needs to clear the hardware writable
> > bit on architectures where we do pte dirtying in software.
> >
> > If we don't, we would have corruption problems all over the VM,
> > for example in the code around page_mkclean_one :)
> >
> > >But as Linus recently said, even hardware handled faults still
> > >take expensive microarchitectural traps.
> >
> > Nowhere near as expensive as a full page fault, though...
>
> Unfortunately it will be expensive on architectures that manage the
> referenced and changed bits in software. It would be great if we could
> just leave the pages dirty in the pagetables and transition between the
> clean and dirty states via madvise calls, but that's just wishful
> thinking on my part :)
That would mean an additional syscall. Furthermore, if you allocate a big
chunk of memory, dirty it, free it with madvise (MADV_FREE) and soon
allocate the same size of memory again, it is better to start with
non-dirty memory: it may be that this time you don't modify a big part of
the chunk. If all that memory were kept dirty the whole time and just
marked/unmarked for lazy reuse with MADV_FREE/MADV_UNDO_FREE, all of it
would need to be written to disk when paging out, since it was marked
dirty, while with Rik's current MADV_FREE that happens only for pages
that were actually dirtied after the last malloc.
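
For illustration, a minimal userland sketch of the allocation pattern
being discussed (this assumes the MADV_FREE interface from Rik's patch;
an MADV_FREE with these semantics later landed in mainline in 4.5, so
the sketch compiles on current Linux; error handling is trimmed):

/* Dirty a large chunk, "free" it lazily with MADV_FREE, then reuse the
 * same mapping.  After MADV_FREE the kernel may discard the pages
 * instead of swapping them out; only pages written again afterwards
 * need writeback on pageout. */
#include <sys/mman.h>
#include <string.h>
#include <stdio.h>

#define CHUNK (16 * 1024 * 1024)

int main(void)
{
	char *p = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	memset(p, 0xaa, CHUNK);		/* dirty every page */

	/* "free": mapping stays valid, pages become reclaimable */
	if (madvise(p, CHUNK, MADV_FREE) != 0)
		perror("madvise");

	/* "malloc" again: touch only a small part this time; only
	 * these re-dirtied pages would ever hit swap */
	memset(p, 0x55, 4096);

	munmap(p, CHUNK);
	return 0;
}
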
Jakub