Message-ID: <57153751.7080800@redhat.com>
Date: Mon, 18 Apr 2016 12:36:49 -0700
From: Laura Abbott <labbott@...hat.com>
To: Thomas Garnier <thgarnie@...gle.com>, Joe Perches <joe@...ches.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Kees Cook <keescook@...omium.org>,
Greg Thelen <gthelen@...gle.com>,
Laura Abbott <labbott@...oraproject.org>,
kernel-hardening@...ts.openwall.com,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH] mm: SLAB freelist randomization
On 04/18/2016 08:59 AM, Thomas Garnier wrote:
> I will send the next version today. Note that get_random_bytes_arch
> is used because at that stage we have 0 bits of entropy. It seemed
> like a better idea to use the arch version, which falls back on the
> get_random_bytes sub API in the worst case.
>
This is unfortunate for ARM/ARM64. Those platforms don't have a standard
method for getting random numbers, so until additional entropy is added
get_random_bytes will always return the same seed; indeed, I always
see the same shuffle on a quick test of arm64. For KASLR, the workaround
was to require the bootloader to pass in entropy. It might be good to
either document this or require that this only be used with CONFIG_ARCH_RANDOM.
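To make that concrete, here is a minimal sketch of what gating the seed
source on CONFIG_ARCH_RANDOM could look like. This is only an illustration,
not the patch's actual code; the helper and variable names (freelist_pick_seed,
freelist_seed) are made up for the example:

	#include <linux/random.h>

	static u32 freelist_seed;	/* placeholder name, illustration only */

	static void __init freelist_pick_seed(void)
	{
	#ifdef CONFIG_ARCH_RANDOM
		/* HW-backed RNG available (e.g. x86 RDRAND): usable this early. */
		get_random_bytes_arch(&freelist_seed, sizeof(freelist_seed));
	#else
		/*
		 * No arch RNG: get_random_bytes() draws from a pool with
		 * essentially no entropy this early in boot, so the seed
		 * (and therefore the shuffle) is the same on every boot.
		 */
		get_random_bytes(&freelist_seed, sizeof(freelist_seed));
	#endif
	}
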
> On Fri, Apr 15, 2016 at 3:47 PM, Thomas Garnier <thgarnie@...gle.com> wrote:
>> Thanks for the comments. I will address them in a v2 early next week.
>>
>> If anyone has other comments, please let me know.
>>
>> Thomas
>>
>> On Fri, Apr 15, 2016 at 3:26 PM, Joe Perches <joe@...ches.com> wrote:
>>> On Fri, 2016-04-15 at 15:00 -0700, Andrew Morton wrote:
>>>> On Fri, 15 Apr 2016 10:25:59 -0700 Thomas Garnier <thgarnie@...gle.com> wrote:
>>>>> Provide an optional config (CONFIG_FREELIST_RANDOM) to randomize the
>>>>> SLAB freelist. The list is randomized during initialization of a new set
>>>>> of pages. The order on different freelist sizes is pre-computed at boot
>>>>> for performance. This security feature reduces the predictability of the
>>>>> kernel SLAB allocator against heap overflows, rendering attacks much less
>>>>> stable.
>>>
>>> trivia:
>>>
>>>>> @@ -1229,6 +1229,61 @@ static void __init set_up_node(struct kmem_cache *cachep, int index)
>>> []
>>>>> + */
>>>>> +static freelist_idx_t master_list_2[2];
>>>>> +static freelist_idx_t master_list_4[4];
>>>>> +static freelist_idx_t master_list_8[8];
>>>>> +static freelist_idx_t master_list_16[16];
>>>>> +static freelist_idx_t master_list_32[32];
>>>>> +static freelist_idx_t master_list_64[64];
>>>>> +static freelist_idx_t master_list_128[128];
>>>>> +static freelist_idx_t master_list_256[256];
>>>>> +static struct m_list {
>>>>> + size_t count;
>>>>> + freelist_idx_t *list;
>>>>> +} master_lists[] = {
>>>>> + { ARRAY_SIZE(master_list_2), master_list_2 },
>>>>> + { ARRAY_SIZE(master_list_4), master_list_4 },
>>>>> + { ARRAY_SIZE(master_list_8), master_list_8 },
>>>>> + { ARRAY_SIZE(master_list_16), master_list_16 },
>>>>> + { ARRAY_SIZE(master_list_32), master_list_32 },
>>>>> + { ARRAY_SIZE(master_list_64), master_list_64 },
>>>>> + { ARRAY_SIZE(master_list_128), master_list_128 },
>>>>> + { ARRAY_SIZE(master_list_256), master_list_256 },
>>>>> +};
>>>
>>> static const struct m_list?
>>>
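For context, the pre-computed master lists above would typically be filled
once at boot with a Fisher-Yates shuffle along these lines. This is a rough
sketch only, not the patch's actual code: the helper name is invented, and
get_random_int() is just used as a placeholder seed source (modulo bias
ignored):

	/* Sketch only: fill and shuffle one master list of 'count' entries. */
	static void __init shuffle_freelist_master(freelist_idx_t *list,
						   size_t count)
	{
		size_t i, j;
		freelist_idx_t tmp;

		for (i = 0; i < count; i++)
			list[i] = i;

		if (count < 2)
			return;

		/* Fisher-Yates: swap each entry with a random slot at or below it. */
		for (i = count - 1; i > 0; i--) {
			j = get_random_int() % (i + 1);	/* placeholder RNG call */
			tmp = list[i];
			list[i] = list[j];
			list[j] = tmp;
		}
	}

With that in mind, Joe's trivia still applies: only the master_lists table of
{count, list} pairs can be const, since the freelist_idx_t arrays themselves
are written at boot.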