Message-ID: <d1a35583-8225-2ab3-d9fa-273482615d09@intel.com>
Date:   Wed, 20 Sep 2017 17:27:02 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     Tycho Andersen <tycho@...ker.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        kernel-hardening@...ts.openwall.com,
        Marco Benatto <marco.antonio.780@...il.com>,
        Juerg Haefliger <juerg.haefliger@...onical.com>, x86@...nel.org
Subject: Re: [PATCH v6 03/11] mm, x86: Add support for eXclusive Page Frame
 Ownership (XPFO)

On 09/20/2017 05:09 PM, Tycho Andersen wrote:
>> I think the only thing that will really help here is if you batch the
>> allocations.  For instance, you could make sure that the per-cpu-pageset
>> lists always contain either all kernel or all user data.  Then remap the
>> entire list at once and do a single flush after the entire list is consumed.
> Just so I understand, the idea would be that we only flush when the
> type of allocation alternates, so:
> 
> kmalloc(..., GFP_KERNEL);
> kmalloc(..., GFP_KERNEL);
> /* remap+flush here */
> kmalloc(..., GFP_HIGHUSER);
> /* remap+flush here */
> kmalloc(..., GFP_KERNEL);

Not really.  We keep a buddy free list per migratetype, and a
per_cpu_pages (pcp) list per migratetype:

> struct per_cpu_pages {
>         int count;              /* number of pages in the list */
>         int high;               /* high watermark, emptying needed */
>         int batch;              /* chunk size for buddy add/remove */
> 
>         /* Lists of pages, one per migrate type stored on the pcp-lists */
>         struct list_head lists[MIGRATE_PCPTYPES];
> };

The migratetype is derived from the GFP flags by
gfpflags_to_migratetype().  In general, GFP_HIGHUSER and GFP_KERNEL map
to different migratetypes, so their pages come from different free
lists.
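
For reference, that mapping is essentially a bit shift on the GFP
mobility bits.  Roughly (paraphrasing include/linux/gfp.h of this era;
a sketch, not the exact source):

	static inline int gfpflags_to_migratetype(const gfp_t gfp_flags)
	{
		if (unlikely(page_group_by_mobility_disabled))
			return MIGRATE_UNMOVABLE;

		/* Group based on mobility (__GFP_MOVABLE/__GFP_RECLAIMABLE) */
		return (gfp_flags & GFP_MOVABLE_MASK) >> GFP_MOVABLE_SHIFT;
	}

GFP_KERNEL sets neither mobility bit and so maps to MIGRATE_UNMOVABLE,
while movable user allocations set __GFP_MOVABLE and map to
MIGRATE_MOVABLE.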

In your example above, the GFP_HIGHUSER allocations come through the
MIGRATE_MOVABLE pcp list while the GFP_KERNEL ones come from the
MIGRATE_UNMOVABLE one.  Since we add a bunch of pages to those lists at
once, you could do all the mapping/unmapping/flushing on a whole batch
of pages at once.
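
A batched version could look something like the sketch below.  The
helper names here (xpfo_flush_pcp_batch(), set_kpte()) are stand-ins
for whatever the XPFO patches actually call the per-page kernel-mapping
update, so take this as pseudocode for the idea, not a drop-in:

	/*
	 * Sketch: call this after rmqueue_bulk() has refilled a pcp
	 * list.  Update the kernel mapping of every page in the batch,
	 * then pay for a single TLB flush instead of one per page.
	 */
	static void xpfo_flush_pcp_batch(struct list_head *list, bool to_user)
	{
		struct page *page;

		/* pcp pages are linked through page->lru */
		list_for_each_entry(page, list, lru)
			set_kpte(page_address(page), page,
				 to_user ? __pgprot(0) : PAGE_KERNEL);

		/* One global flush for the whole batch. */
		flush_tlb_all();
	}

That amortizes the flush over pcp->batch pages instead of paying it on
every allocation.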

Or, you could hook your code into the places where the migratetype of
memory is changed (set_pageblock_migratetype(), plus where we fall
back).  Those changes are much rarer than page allocations.
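
Concretely, that could be a one-line hook in
set_pageblock_migratetype() (xpfo_pageblock_set_user() is an invented
name here, and the same hook would be needed in the fallback path,
e.g. __rmqueue_fallback()):

	void set_pageblock_migratetype(struct page *page, int migratetype)
	{
		if (unlikely(page_group_by_mobility_disabled &&
			     migratetype < MIGRATE_PCPTYPES))
			migratetype = MIGRATE_UNMOVABLE;

		set_pageblock_flags_group(page, (unsigned long)migratetype,
					  PB_migrate, PB_migrate_end);

		/*
		 * Hypothetical XPFO hook: remap or unmap the whole
		 * pageblock once when its migratetype changes, instead
		 * of touching kernel mappings on every alloc/free.
		 */
		xpfo_pageblock_set_user(page, migratetype == MIGRATE_MOVABLE);
	}

Since migratetype changes happen per pageblock and are comparatively
rare, the remap/flush cost is amortized over far more pages than with
per-allocation updates.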
