Date:	Mon, 16 Mar 2015 19:49:32 +0900
From:	Roman Peniaev <r.peniaev@...il.com>
To:	Gioh Kim <gioh.kim@....com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Eric Dumazet <edumazet@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	David Rientjes <rientjes@...gle.com>,
	WANG Chao <chaowang@...hat.com>,
	Fabian Frederick <fabf@...net.be>,
	Christoph Lameter <cl@...ux.com>,
	Rob Jones <rob.jones@...ethink.co.uk>, linux-mm@...ck.org,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"stable@...r.kernel.org" <stable@...r.kernel.org>
Subject: Re: [PATCH 0/3] [RFC] mm/vmalloc: fix possible exhaustion of vmalloc space

On Mon, Mar 16, 2015 at 7:28 PM, Gioh Kim <gioh.kim@....com> wrote:
>
>
> On 2015-03-13 9:12 PM, Roman Pen wrote:
>> Hello all.
>>
>> Recently I came across high fragmentation of the vm_map_ram allocator: a vmap_block
>> has free space, but new blocks still continue to appear.  Further investigation
>> showed that a certain mapping/unmapping sequence can exhaust vmalloc space.  On
>> small 32-bit systems that is not a big problem, because purging will be called soon,
>> on the first allocation failure (alloc_vmap_area); but on 64-bit machines, e.g.
>> x86_64 with 45 bits of vmalloc space, it can be a disaster.
>
> I think the problem you describe is already known; that is why I added a comment
> about it: "it could consume lots of address space through fragmentation".
>
> Could you tell me about your situation and why it should be avoided?

In the first patch of this set I explicitly described the function which
exhausts vmalloc space without any chance of being purged: the vm_map_ram
allocator is greedy and first tries to occupy a newly allocated block, even
when old blocks contain enough free space.

This can be easily fixed by putting a newly allocated block (which has enough
space to satisfy further requests) at the tail of the free list, to give old
blocks a chance.

Why should it be avoided?  Strange question.  To me it looks like a bug in the
allocator, which should be fair and should not continuously allocate new blocks
without lazy purging (it seems vmap_lazy_nr and __purge_vmap_area_lazy were
created for exactly this reason: to avoid unbounded allocations).


--
Roman


>
>
>>
>> While fixing this, I also tweaked the allocation logic for new vmap blocks and
>> replaced the dirty bitmap with min/max dirty range values to make the logic simpler.
>>
>> I would like to receive comments on the following three patches.
>>
>> Thanks.
>>
>> Roman Pen (3):
>>    mm/vmalloc: fix possible exhaustion of vmalloc space caused by
>>      vm_map_ram allocator
>>    mm/vmalloc: occupy newly allocated vmap block just after allocation
>>    mm/vmalloc: get rid of dirty bitmap inside vmap_block structure
>>
>>   mm/vmalloc.c | 94 ++++++++++++++++++++++++++++++++++--------------------------
>>   1 file changed, 54 insertions(+), 40 deletions(-)
>>
>> Cc: Andrew Morton <akpm@...ux-foundation.org>
>> Cc: Nick Piggin <npiggin@...nel.dk>
>> Cc: Eric Dumazet <edumazet@...gle.com>
>> Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
>> Cc: David Rientjes <rientjes@...gle.com>
>> Cc: WANG Chao <chaowang@...hat.com>
>> Cc: Fabian Frederick <fabf@...net.be>
>> Cc: Christoph Lameter <cl@...ux.com>
>> Cc: Gioh Kim <gioh.kim@....com>
>> Cc: Rob Jones <rob.jones@...ethink.co.uk>
>> Cc: linux-mm@...ck.org
>> Cc: linux-kernel@...r.kernel.org
>> Cc: stable@...r.kernel.org
>>
