Message-ID: <4DCBA2FB.3040907@redhat.com>
Date: Thu, 12 May 2011 11:06:03 +0200
From: Zdenek Kabelac <zkabelac@...hat.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Milan Broz <mbroz@...hat.com>, Alasdair G Kergon <agk@...hat.com>,
Matthew Wilcox <matthew@....cx>,
Mikulas Patocka <mikulas@...ax.karlin.mff.cuni.cz>,
linux-kernel@...r.kernel.org, linux-parisc@...r.kernel.org,
Hugh Dickins <hughd@...gle.com>,
Oleg Nesterov <oleg@...hat.com>
Subject: Re: [PATCH] Don't mlock guardpage if the stack is growing up
On 12.5.2011 04:12, Linus Torvalds wrote:
> On Wed, May 11, 2011 at 1:42 AM, Milan Broz <mbroz@...hat.com> wrote:
>>
>> Another one is cryptsetup [..]
>
> Quite frankly, all security-related uses should always be happy about
> a "MCL_SPARSE" model, since there is no point in ever bringing in
> pages that haven't been used. The whole (and only) point of
> mlock[all]() for them is the "avoid to push to disk" issue.
>
> I do wonder if we really should ever do the page-in at all. We might
> simply be better off always just saying "we'll lock pages you've
> touched, that's it".
>
For LVM we need to ensure that any code which might ever be executed while a
disk is in the suspended state is paged in and locked - so we would want
MCL_SPARSE only for a few selected 'unneeded' libraries, since we are
obviously not able to determine which parts of glibc might be needed across
all code paths (though I guess we could find some limits). But if we are sure
that certain libraries and locale files will never be used during the suspend
state, we do not care about those pages at all.
So it's not that we would always need only MCL_SPARSE all the time - we would
probably need some control to switch e.g. glibc into MCL_ALL.
Zdenek
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/