Message-ID: <CALYGNiM-XNnXT+L+b=WLRVxrxii_oxXxY3Wu1PC8mvm_6W8wNw@mail.gmail.com>
Date: Wed, 27 Jan 2016 12:09:10 +0300
From: Konstantin Khlebnikov <koct9i@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Cyrill Gorcunov <gorcunov@...il.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...uxfoundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Vegard Nossum <vegard.nossum@...cle.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Vladimir Davydov <vdavydov@...tuozzo.com>,
Andy Lutomirski <luto@...capital.net>,
Quentin Casasnovas <quentin.casasnovas@...cle.com>,
Kees Cook <keescook@...gle.com>, Willy Tarreau <w@....eu>,
Pavel Emelyanov <xemul@...tuozzo.com>
Subject: Re: [PATCH v3] mm: warn about VmData over RLIMIT_DATA
On Wed, Jan 27, 2016 at 1:49 AM, Andrew Morton
<akpm@...ux-foundation.org> wrote:
> On Sat, 23 Jan 2016 23:52:29 +0300 Konstantin Khlebnikov <koct9i@...il.com> wrote:
>
>> This patch fixes 84638335900f ("mm: rework virtual memory accounting")
>
> uh, I think I'll rewrite this to
>
> : This patch provides a way of working around a slight regression introduced
> : by 84638335900f ("mm: rework virtual memory accounting").
Sure.

As you can see, I kept this in the "ignore and warn" state by default.
During testing in linux-next it was able to catch only unusually small
limits, like valgrind's 0, because of a bug in pages/bytes units.
I think it's a bad idea to enforce the limit in the middle of the merge
window. So let's change the default to "block" in the next release.
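To make the units fix concrete, here is roughly how the switch and the
check in may_expand_vm() fit together - a sketch of the logic, not the
literal patch:

	/* runtime switch, also /sys/module/kernel/parameters/... */
	bool ignore_rlimit_data __read_mostly = true;
	core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);

	/* in may_expand_vm(): RLIMIT_DATA is in bytes, mm->data_vm is
	 * in pages, so convert the limit before comparing */
	if (is_data_mapping(flags) &&
	    mm->data_vm + npages > rlimit(RLIMIT_DATA) >> PAGE_SHIFT) {
		if (ignore_rlimit_data)
			pr_warn_once("%s (%d): VmData %lu exceed data ulimit %lu. Will be forbidden soon.\n",
				     current->comm, current->pid,
				     (mm->data_vm + npages) << PAGE_SHIFT,
				     rlimit(RLIMIT_DATA));
		else
			return false;
	}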
>
>> Before that commit RLIMIT_DATA controlled only the size of the brk
>> region. But that change caused problems with all existing versions of
>> valgrind, because valgrind sets RLIMIT_DATA to zero.
>>
>> This patch fixes the rlimit check (the limit is actually in bytes, not
>> pages) and by default turns it into a warning printed at the first
>> VmData misuse:
>> "mmap: top (795): VmData 516096 exceed data ulimit 512000. Will be forbidden soon."
>>
>> Behavior is controlled by the boot param ignore_rlimit_data=y/n and by
>> the sysfs knob /sys/module/kernel/parameters/ignore_rlimit_data. For
>> now it is set to "y".
>>
>>
>> ...
>>
>> +static inline bool is_data_mapping(vm_flags_t flags)
>> +{
>> + return (flags & ((VM_STACK_FLAGS & (VM_GROWSUP | VM_GROWSDOWN)) |
>> + VM_WRITE | VM_SHARED)) == VM_WRITE;
>> +}
>
> This (copied from existing code) hurts my brain. We're saying "if it
> isn't stack and it's unshared and writable, it's data", yes?
Yes. A data vma is supposed to be private, writable and without GROWSDOWN/UP.
We could make it more readable if we defined a macro for the stack
growing direction (see the sketch below). Or we could declare that data
must not grow in any direction and that any growable vma is a "stack",
but RLIMIT_STACK is enforced only in one direction (or not? I'm not sure).
Anyway, only a few arches actually have the flag VM_GROWSUP.

VM_WRITE separates Data from Code - Data can be executable, Code
shouldn't be writable.
VM_GROWS separates Data from Stack - Stack grows automatically, Data
does not. Probably the stack should be writable too, but some
applications might remap pieces of the stack as read-only.
For now (except parisc and metag):
VM_GROWSDOWN | VM_EXEC is code
VM_GROWSDOWN | VM_EXEC | VM_WRITE is a stack
VM_GROWSUP | VM_EXEC | VM_WRITE is data (on ia64)

And yes, this hurts my brain too. But much less than the previous
version of the accounting.
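Something along these lines is what I mean by factoring out the growth
direction - just a sketch with an illustrative macro name (VM_GROWS_MASK
is made up here, and it shows the "any growable vma is a stack" variant),
not the final patch:

	/* both auto-grow direction bits, whichever the arch uses */
	#define VM_GROWS_MASK	(VM_GROWSUP | VM_GROWSDOWN)

	/* Stack area - grows automatically in some direction */
	static inline bool is_stack_mapping(vm_flags_t flags)
	{
		return (flags & VM_GROWS_MASK) != 0;
	}

	/* Data area - private, writable, does not auto-grow */
	static inline bool is_data_mapping(vm_flags_t flags)
	{
		return (flags & (VM_GROWS_MASK | VM_WRITE | VM_SHARED)) ==
			VM_WRITE;
	}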
>
> hm. I guess that's because with a shared mapping we don't know who to
> blame for the memory consumption so we blame nobody. But what about
> non-shared read-only mappings?
I have no idea. There are a lot of strange combinations. But since VmData
is supposed to be limited by RLIMIT_DATA, it's safer to leave them alone.
The user will see them in the total VmSize and can limit them with
RLIMIT_AS.

To be honest, RLIMIT_DATA cannot limit memory consumption at all.
RLIMIT_AS cannot do much either: an application can keep any amount of
data in an unlinked tmpfs file and mmap pieces of it as needed.
Only the memory controller can solve this.
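To illustrate the loophole, a minimal userspace sketch (path and sizes
are arbitrary): the data lives in an unlinked tmpfs file, so VmData
never grows, and VmSize stays small because only a window is mapped at
a time.

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* unlinked tmpfs file: 1 GiB invisible to RLIMIT_DATA */
		int fd = open("/dev/shm/hidden", O_RDWR | O_CREAT | O_EXCL, 0600);
		unlink("/dev/shm/hidden");
		ftruncate(fd, 1UL << 30);

		/* map a 1 MiB window at a time, so VmSize (and thus
		 * RLIMIT_AS) stays small; remap other offsets on demand */
		char *win = mmap(NULL, 1UL << 20, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, 0);
		win[0] = 1;	/* page is charged to tmpfs, not VmData */
		munmap(win, 1UL << 20);
		close(fd);
		return 0;
	}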
>
> Can we please have a comment here fully explaining the thinking?
>
Ok. I'll tie this together in the form of a patch.