Message-ID: <20170510080542.GF31466@dhcp22.suse.cz>
Date:   Wed, 10 May 2017 10:05:43 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Igor Stoppa <igor.stoppa@...wei.com>
Cc:     Laura Abbott <labbott@...hat.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org,
        "kernel-hardening@...ts.openwall.com" 
        <kernel-hardening@...ts.openwall.com>
Subject: Re: RFC v2: post-init-read-only protection for data allocated
 dynamically

On Fri 05-05-17 13:42:27, Igor Stoppa wrote:
> On 04/05/17 19:49, Laura Abbott wrote:
> > [adding kernel-hardening since I think there would be interest]
> 
> thank you, I overlooked this
> 
> 
> > BPF takes the approach of calling set_memory_ro to mark regions as
> > read only. I'm certainly oversimplifying, but it sounds like this
> > is mostly a mechanism to have that happen automatically.
> > Can you provide any more details about the tradeoffs of the two approaches?
> 
> I am not sure I understand the question ...
> From what I can understand, BPF marks as read-only something that
> spans several pages, which is fine.
> The payload to be protected is already organized into such pages.
> 
> But in the case I have in mind, I have various heterogeneous chunks of
> data, coming from various subsystems, not necessarily page aligned.
> And even if they were page aligned, most likely they would be far
> smaller than a page, even a 4k page.
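
For reference, set_memory_ro() works at page granularity: you hand it a
page-aligned region and a number of pages, which is why the BPF-style
approach fits a payload that already occupies whole pages. A rough
illustration only (buf and len below are made up, not any existing user):

	/*
	 * set_memory_ro() changes the protection of whole pages, so the
	 * payload has to be page aligned and page sized to begin with.
	 * It is declared in an arch header (asm/set_memory.h, formerly
	 * asm/cacheflush.h). buf and len are hypothetical.
	 */
	void *buf = vmalloc(PAGE_ALIGN(len));	/* page-aligned backing store */
	...
	/* once the content is final, seal the whole thing */
	set_memory_ro((unsigned long)buf, PAGE_ALIGN(len) >> PAGE_SHIFT);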

This aspect of varying sizes makes the SLAB allocator suboptimal,
because it operates on caches (pools of pages) which manage objects of
a single size. You could size everything to the largest object and
waste some memory, but you would have to know that maximum in advance,
which makes the approach less practical. You could of course create
more caches, but that still requires knowing those sizes in advance.
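
Just to make the same-size constraint concrete: a kmem_cache is created
for one object size, and every allocation from it returns an object of
exactly that size (the name and size below are only an example):

	#include <linux/slab.h>

	struct kmem_cache *cache;
	void *obj;

	/* one cache per object size; "example_cache" and 192 are made up */
	cache = kmem_cache_create("example_cache", 192, 0,
				  SLAB_HWCACHE_ALIGN, NULL);
	if (!cache)
		return -ENOMEM;
	/* every object handed out by this cache is exactly 192 bytes */
	obj = kmem_cache_alloc(cache, GFP_KERNEL);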

So it smells like a dedicated allocator which operates on a pool of
pages might be a better option in the end. It depends on what you
expect from the allocator. NUMA awareness? A very efficient hot path?
Very good fragmentation avoidance? CPU cache awareness? Special
alignment requirements? A reasonable free()? Etc.

To me it seems that, this being mostly an initialization-time thing, a
simple allocator which manages a pool of pages (one set sealed, one
open for allocations) and which only appends new objects to unsealed
pages as they fit would be sufficient for a start.
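
Something along these lines, just to make the idea concrete (all the
names are made up, and locking, error handling and objects larger than
a page are left out): allocations are bumped into the current unsealed
page, and once an object does not fit, that page is write-protected and
a fresh one is taken.

	#include <linux/kernel.h>
	#include <linux/gfp.h>
	#include <linux/mm.h>
	#include <asm/set_memory.h>	/* or asm/cacheflush.h on older trees */

	/* all names (ro_pool, ro_alloc, ro_seal) are hypothetical */
	struct ro_pool {
		unsigned long cur;	/* current unsealed page, 0 if none */
		size_t off;		/* bump pointer within that page */
	};

	/* align must be a power of two; objects > PAGE_SIZE not handled */
	static void *ro_alloc(struct ro_pool *p, size_t size, size_t align)
	{
		size_t off = ALIGN(p->off, align);

		if (WARN_ON(size > PAGE_SIZE))
			return NULL;

		if (!p->cur || off + size > PAGE_SIZE) {
			/* current page full (or missing): seal it, take a new one */
			if (p->cur)
				set_memory_ro(p->cur, 1);
			p->cur = __get_free_page(GFP_KERNEL);
			if (!p->cur)
				return NULL;
			off = 0;
		}
		p->off = off + size;
		return (void *)(p->cur + off);
	}

	static void ro_seal(struct ro_pool *p)
	{
		/* once initialization is over, write-protect the last page too */
		if (p->cur)
			set_memory_ro(p->cur, 1);
	}

Individual free() would then simply not be supported; the pool is
either open during initialization or sealed as a whole afterwards,
which seems to match your use case.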
-- 
Michal Hocko
SUSE Labs
