Message-ID: <b7bb1884-3125-5c98-f1fe-53b974454ce2@huawei.com>
Date:   Thu, 4 May 2017 11:17:25 +0300
From:   Igor Stoppa <igor.stoppa@...wei.com>
To:     Dave Hansen <dave.hansen@...el.com>,
        Michal Hocko <mhocko@...nel.org>
CC:     <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: Re: RFC v2: post-init-read-only protection for data allocated
 dynamically

Hi,
I suspect the previous mail was accidentally sent as a Reply instead
of a Reply-All, so I'm putting back the CCs that were dropped.

On 03/05/17 21:41, Dave Hansen wrote:
> On 05/03/2017 05:06 AM, Igor Stoppa wrote:
>> My starting point are the policy DB of SE Linux and the LSM Hooks, but
>> eventually I would like to extend the protection also to other
>> subsystems, in a way that can be merged into mainline.
> 
> Have you given any thought to just having a set of specialized slabs?

No, the idea of the RFC was precisely to get this sort of comment
about options I might have missed :-)

> Today, for instance, we have a separate set of kmalloc() slabs for DMA:
> dma-kmalloc-{4096,2048,...}.  It should be quite possible to have
> another set for your post-init-read-only protected data.

I will definitely investigate it and report back, thanks.
But in the meantime I'd appreciate some further clarification.
Please see below ...
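
For concreteness, here is a minimal sketch of what I understand you
are suggesting (the cache name and the sealing step are made up, not
existing kernel API):

#include <linux/init.h>
#include <linux/slab.h>

/* Sketch: a dedicated cache for post-init-read-only data. Objects
 * are packed together, so write-protecting the cache's backing
 * pages later affects only this data. */
static struct kmem_cache *ro_after_init_cache;

static int __init ro_cache_init(void)
{
        ro_after_init_cache = kmem_cache_create("post-init-ro",
                                                64 /* object size */,
                                                0, SLAB_HWCACHE_ALIGN,
                                                NULL);
        return ro_after_init_cache ? 0 : -ENOMEM;
}

/* Allocations stay writable until init is over; a later "seal"
 * operation (not shown) would write-protect the backing pages. */
void *ro_cache_alloc(void)
{
        return kmem_cache_alloc(ro_after_init_cache, GFP_KERNEL);
}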

> This doesn't take care of vmalloc(), but I have the feeling that
> implementing this for vmalloc() isn't going to be horribly difficult.

ok
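
For reference, this is roughly what I would picture for the vmalloc
case. It is only a sketch: the helper name is made up and I am
assuming the x86 set_memory_ro() primitive.

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/set_memory.h>     /* set_memory_ro(), x86 */

/* Sketch: write-protect a vmalloc'ed region once it is fully
 * initialized. vmalloc mappings are page-granular and separate from
 * the kernel linear map, so no linear-map large page is fractured. */
static int seal_vmalloc_region(void *addr, size_t size)
{
        int pages = PAGE_ALIGN(size) >> PAGE_SHIFT;

        return set_memory_ro((unsigned long)addr, pages);
}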

>> * The mechanism used for locking down the memory region is to program
>> the MMU to trap writes to said region. It is fairly efficient and
>> HW-backed, so it doesn't introduce any major overhead,
> 
> I'd take a bit of an issue with this statement.  It *will* fracture
> large pages unless you manage to pack all of these allocations entirely
> within a large page.  This is problematic because we use the largest
> size available, and that's 1GB on x86.

I am not sure I fully understand this part; I am probably missing
something about the way kmalloc works.

I get the problem you describe (write-protecting, say, a single 4KB
object that sits under a 1GB linear mapping means first demoting
that mapping to 2MB entries and then splitting one of those into 4KB
entries), but I do not understand why it should happen here.
Going back for a moment to my original idea of the zone, as a
physical address range: why wouldn't it be possible to define it as
one large page?

Btw, I do not expect much memory occupation in terms of sheer size,
although there might be many small "variables" scattered across the
code. That's where I hope that using kmalloc, instead of a
custom-made allocator, can make a difference in terms of optimal
packing.

> IOW, if you scatter these things throughout the address space, you may
> end up fracturing/demoting enough large pages to cause major overhead
> refilling the TLB.

But why would I?
Or, better: what would cause it, unless I take special care?

Let me put it differently: my goal is to not fracture more pages
than needed.
It will probably require some profiling to figure out the ballpark
of the memory footprint.

I might have overlooked some aspect of this, but the overall goal
is to have a memory range (I won't call it a zone, to avoid
referring to a specific implementation) which is as tightly packed
as possible, stuffed with all the data that is expected to become
read-only.
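
To make it concrete, the kind of scheme I have in mind would look
roughly like this. It is only a sketch: the names and the 2MB pool
size are made up, and genalloc is just one possible way to do the
carving.

#include <linux/genalloc.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/set_memory.h>

#define RO_POOL_SIZE    (2 << 20)       /* one 2MB region */

static struct gen_pool *ro_pool;
static void *ro_base;

static int __init ro_pool_init(void)
{
        ro_base = vmalloc(RO_POOL_SIZE);
        if (!ro_base)
                return -ENOMEM;

        ro_pool = gen_pool_create(3 /* 8-byte granularity */, -1);
        if (!ro_pool) {
                vfree(ro_base);
                return -ENOMEM;
        }

        return gen_pool_add(ro_pool, (unsigned long)ro_base,
                            RO_POOL_SIZE, -1);
}

/* The many small "variables" get packed back to back in the one
 * region... (ro_alloc()/ro_seal() are made-up names, not kernel API) */
void *ro_alloc(size_t size)
{
        return (void *)gen_pool_alloc(ro_pool, size);
}

/* ...and once everything is initialized, the whole region is
 * write-protected in one go. */
int ro_seal(void)
{
        return set_memory_ro((unsigned long)ro_base,
                             RO_POOL_SIZE >> PAGE_SHIFT);
}

That way only the pool's own mappings change permissions, and the
rest of the address space is left alone.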

> Note that this only applies for kmalloc() allocations, *not* vmalloc()
> since kmalloc() uses the kernel linear map and vmalloc() uses its own,
> separate mappings.

Yes.

---
thanks, igor
