Message-ID: <ee8d74f8-d90e-98ce-79bf-569693000320@gmail.com>
Date:	Tue, 14 Jun 2016 19:46:17 +0000
From:	Topi Miettinen <toiwoton@...il.com>
To:	Konstantin Khlebnikov <koct9i@...il.com>
Cc:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC 00/18] Present useful limits to user

On 06/14/16 19:03, Konstantin Khlebnikov wrote:
> I don't like the idea of this patchset.
> 
> All limitations are context-dependent and that context changes rapidly.
> You'll never dump enough information to predict future errors or to
> investigate the reason for errors in the past. You could try to reproduce
> all kernel logic, but the model will always be approximate.
> 

But that is true regardless of how the starting point for the limits was
determined. There is always the possibility of setting limits that are too
tight, which may work for a couple of test runs but eventually fail. The
opposite is also possible: limits so loose that they are not effective.
That is the nature of limits in any case.
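
To make the failure mode concrete, here is a minimal userspace sketch (not
part of these patches; RLIMIT_NOFILE and the deliberately tight value are
illustrative assumptions):

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;
	FILE *f, *g;

	/* Pick a deliberately tight soft limit on open files. */
	if (getrlimit(RLIMIT_NOFILE, &rl) < 0)
		return 1;
	rl.rlim_cur = 4;	/* stdin/stdout/stderr + one spare fd */
	if (setrlimit(RLIMIT_NOFILE, &rl) < 0)
		return 1;

	/* A simple test run fits under the limit... */
	f = fopen("/dev/null", "r");	/* fd 3, still allowed */
	/* ...but one extra descriptor later fails with EMFILE. */
	g = fopen("/dev/null", "r");	/* fd 4, over the limit */
	if (!g)
		printf("second open failed: %s\n", strerror(errno));
	if (f)
		fclose(f);
	return 0;
}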

> If you want to track the origin of failures in user space applications when
> they hit some limit, you should track errors. For example, rlimits and other
> limiting subsystems could provide a reasonable set of tracepoints which
> could tell what exactly happened before the error. If you need the
> high-water mark of some value, you could track it in userspace, or maybe the
> tracing subsystem could provide postprocessing of tracepoint parameters.
> Anyway, systemtap and other monsters can do this right now.
> 

Those tools could help improve the starting point. But how could they
give an exact value for it?

With this patch set, the user can simply look at files in /proc and copy
the values into a config file as a starting point, for example as sketched
below. What would the workflow be with the tracepoint approach?
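
A rough userspace sketch of that workflow (the exact columns these patches
add may differ; reading /proc/self/limits is just the existing interface):

#include <stdio.h>

int main(void)
{
	/* Dump the limits file; the reported maxima can then be copied
	 * into a config file by hand or with a script. */
	char line[256];
	FILE *f = fopen("/proc/self/limits", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}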

> On Mon, Jun 13, 2016 at 10:44 PM, Topi Miettinen <toiwoton@...il.com> wrote:
>> Hello,
>>
>> There are many basic ways to control processes, including capabilities,
>> cgroups and resource limits. However, there are far fewer ways to find out
>> useful values for the limits, short of blind trial and error.
>>
>> This patch series attempts to fix that by at least giving a reasonable
>> starting point derived from the actual maximum values. I looked at where
>> each limit is checked and added a call nearby to bump the tracked maximum.
>>
>>
>> Capabilities
>> [RFC 01/18] capabilities: track actually used capabilities
>>
>> Currently, there is no way to know which capabilities are actually used.
>> Even the source code is only implicit about it: in-depth knowledge of each
>> capability is needed when analyzing a program to judge which capabilities
>> it will exercise.
>>
>> Cgroups
>> [RFC 02/18] cgroup_pids: track maximum pids
>> [RFC 03/18] memcontrol: present maximum used memory also for
>> [RFC 04/18] device_cgroup: track and present accessed devices
>>
>> For task and memory cgroup limits the situation is somewhat better, as the
>> current task and memory status can easily be seen with ps(1). However, any
>> transient tasks or temporarily higher memory use might slip from view.
>> Device use may be seen with advanced MAC tools, like TOMOYO, but there is
>> no universal method. Program sources typically give no useful indication
>> of memory use or how many tasks there could be.
>>
>> Resource limits
>> [RFC 05/18] limits: track and present RLIMIT_NOFILE actual max
>> [RFC 06/18] limits: present RLIMIT_CPU and RLIMIT_RTTIME current
>> [RFC 07/18] limits: track RLIMIT_FSIZE actual max
>> [RFC 08/18] limits: track RLIMIT_DATA actual max
>> [RFC 09/18] limits: track RLIMIT_CORE actual max
>> [RFC 10/18] limits: track RLIMIT_STACK actual max
>> [RFC 11/18] limits: track and present RLIMIT_NPROC actual max
>> [RFC 12/18] limits: track RLIMIT_MEMLOCK actual max
>> [RFC 13/18] limits: track RLIMIT_AS actual max
>> [RFC 14/18] limits: track RLIMIT_SIGPENDING actual max
>> [RFC 15/18] limits: track RLIMIT_MSGQUEUE actual max
>> [RFC 16/18] limits: track RLIMIT_NICE actual max
>> [RFC 17/18] limits: track RLIMIT_RTPRIO actual max
>> [RFC 18/18] proc: present VM_LOCKED memory in /proc/self/maps
>>
>> The current number of files and current VM usage (data pages, address
>> space size) could be calculated from the available /proc files. Again, any
>> temporarily higher values could easily be missed. For many limits, there is
>> no way to see the current situation at all, and the source code is mostly
>> useless for this.
>>
>> As a side note, the resource limits seem to be in bad shape. For example,
>> RLIMIT_MEMLOCK is used incoherently, and I think the VM statistics can
>> miss some changes. Adding RLIMIT_CODE could be useful.
>>
>> The current maximum values for the resource limits are now shown in
>> /proc/task/limits. If this is deemed too confusing for existing programs
>> which rely on the exact format, I can change that to a new file.
>>
>>
>> Finally, the patches work in my testing, but I have probably missed some
>> finer lock/RCU details.
>>
>> -Topi
>>
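
Coming back to the capabilities tracking (patch 01 above): today the closest
one can get is the granted capability sets in /proc/self/status, not the
exercised ones. A minimal reader sketch of that existing interface:

#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Print the Cap* lines (CapInh/CapPrm/CapEff/CapBnd) from
	 * /proc/self/status. These show what has been granted, not what
	 * the process has actually exercised - the gap the capabilities
	 * patch is meant to fill. */
	char line[256];
	FILE *f = fopen("/proc/self/status", "r");

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (strncmp(line, "Cap", 3) == 0)
			fputs(line, stdout);
	fclose(f);
	return 0;
}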
