Message-ID: <87zlzpqlc8.fsf@informatik.uni-tuebingen.de>
Date:	Fri, 14 Sep 2007 18:10:31 +0200
From:	Goswin von Brederlow <brederlo@...ormatik.uni-tuebingen.de>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	Christoph Lameter <clameter@....com>, Mel Gorman <mel@....ul.ie>,
	andrea@...e.de, torvalds@...ux-foundation.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	Christoph Hellwig <hch@....de>, Mel Gorman <mel@...net.ie>,
	William Lee Irwin III <wli@...omorphy.com>,
	David Chinner <dgc@....com>,
	Jens Axboe <jens.axboe@...cle.com>,
	Badari Pulavarty <pbadari@...il.com>,
	Maxim Levitsky <maximlevitsky@...il.com>,
	Fengguang Wu <fengguang.wu@...il.com>,
	swin wang <wangswin@...il.com>, totty.lu@...il.com,
	hugh@...itas.com, joern@...ybastard.org
Subject: Re: [00/41] Large Blocksize Support V7 (adds memmap support)

Hi,


Nick Piggin <nickpiggin@...oo.com.au> writes:

> In my attack, I cause the kernel to allocate lots of unmovable allocations
> and deplete movable groups. I theoretically then only need to keep a
> small number (1/2^N) of these allocations around in order to DoS a
> page allocation of order N.

I'm assuming that when an unmovable allocation hijacks a movable group,
any further unmovable alloc will evict movable objects out of that
group before hijacking another one. Right?

> And it doesn't even have to be a DoS. The natural fragmentation
> that occurs in a kernel today has the possibility to slowly push out
> the movable groups and give you the same situation.

How would you cause that? Say you want to purposefully place one
unmovable 4k page into every 64k compound page. So you allocate
4k. The first 64k page is now locked. But to get a 4k page into the
second 64k page you first have to use up all the rest of the first
64k page, meaning one 4k chunk, one 8k chunk, one 16k chunk and one
32k chunk. Only then will a new 64k chunk be broken up and become
locked.

So for the last 64k chunk to be used, all previous 32k chunks need to
be blocked and you need to allocate 32k (or less if more is blocked).
For all previous 32k chunks to be blocked, every second 16k chunk
needs to be blocked. To block the last of those 16k chunks, all
previous 8k chunks need to be blocked and you need to allocate 8k. For
all previous 8k chunks to be blocked, every second 4k page needs to be
used. And to allocate the last of those 4k pages, all previous 4k
pages need to be used.

So to construct a situation where no contiguous 64k chunk is free, you
first have to allocate <total mem> - 64k - 32k - 16k - 8k - 4k (or
thereabouts) of memory. Only then could you free memory again while
still keeping every 64k page blocked. Does that occur naturally given
enough RAM to start with?
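
To put rough numbers on that construction (back-of-the-envelope
arithmetic, written as C for concreteness; 256MB of RAM, 4k base
pages, 64k target chunks):

#include <stdio.h>

int main(void)
{
	long total = 256L << 20;	/* 256MB of RAM  */
	long chunk = 64L  << 10;	/* 64k chunks    */
	long page  = 4L   << 10;	/* 4k base pages */

	/* to break the last 64k chunk, everything except one free
	 * block of each order below 64k must be in use first */
	long fill = total - (64L + 32 + 16 + 8 + 4) * 1024;
	printf("allocate first: %ld bytes = %.2f%% of RAM\n",
	       fill, 100.0 * fill / total);

	/* afterwards one pinned 4k page per 64k chunk keeps every
	 * 64k allocation blocked: 1/16 of RAM, i.e. the 1/2^N above */
	long pinned = total / chunk * page;
	printf("keep pinned:    %ld bytes = %.2f%% of RAM\n",
	       pinned, 100.0 * pinned / total);
	return 0;
}

That prints 99.95% for the fill phase but only 6.25% for the pinned
remainder, which is exactly the 1/2^N figure for order 4.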



To see how bad fragmentation can get, I wrote a little program to
simulate allocations with the following simplified algorithm:

Memory management (a rough sketch in C follows this list):
- Free pages are kept in buckets, one per order, sorted by address.
- alloc() takes the front page (smallest address) from the bucket of
  the right order, or recursively splits a page from the next higher
  bucket.
- free() recursively tries to merge a page with its neighbour and puts
  the result back into the proper bucket (sorted by address).
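
In pseudo-C the allocator looks roughly like this (a reconstruction
from the description above, not the actual source of the program):

#include <stdlib.h>

#define PAGE      4096L
#define MAX_ORDER 4			/* orders 0..4 => 4k..64k */

struct block { long addr; struct block *next; };
static struct block *bucket[MAX_ORDER + 1];	/* per order, sorted */

static void insert_sorted(int order, long addr)
{
	struct block **p = &bucket[order];
	while (*p && (*p)->addr < addr)
		p = &(*p)->next;
	struct block *b = malloc(sizeof *b);
	b->addr = addr;
	b->next = *p;
	*p = b;
}

/* seed the allocator: all of memory as free max-order blocks */
static void init_mem(long total)
{
	long step = PAGE << MAX_ORDER;
	for (long a = 0; a + step <= total; a += step)
		insert_sorted(MAX_ORDER, a);
}

/* front (lowest address) block of the right order, splitting the
 * next higher bucket recursively when this bucket is empty */
static long alloc_order(int order)
{
	if (order > MAX_ORDER)
		return -1;			/* out of memory */
	if (!bucket[order]) {
		long big = alloc_order(order + 1);
		if (big < 0)
			return -1;
		insert_sorted(order, big);			/* lower half */
		insert_sorted(order, big + (PAGE << order));	/* upper half */
	}
	struct block *b = bucket[order];
	bucket[order] = b->next;
	long addr = b->addr;
	free(b);
	return addr;
}

/* merge with the buddy recursively, then file into the bucket */
static void free_order(int order, long addr)
{
	if (order < MAX_ORDER) {
		long buddy = addr ^ (PAGE << order);
		for (struct block **p = &bucket[order]; *p; p = &(*p)->next) {
			if ((*p)->addr == buddy) {
				struct block *b = *p;
				*p = b->next;
				free(b);
				free_order(order + 1, addr < buddy ? addr : buddy);
				return;
			}
		}
	}
	insert_sorted(order, addr);
}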

Allocation and lifetime (again sketched in C below):
- Every tick a new page of random order is allocated.
- The order follows a triangle-like distribution with its maximum at 0
  (throw 2 dice, add the pips, subtract 7, abs() the result).
- The page is scheduled to be freed after X ticks, where X roughly
  follows a Gauss curve with its maximum at <total num pages> * 1.5.
  (What I actually do is throw 8 dice, sum them up and shift the
  result.)
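
The two distributions, building on the allocator sketch above (the
exact scaling of the lifetime is a guess; the description only says
the 8-dice sum is shifted):

#include <stdlib.h>

static int die(void) { return rand() % 6 + 1; }

/* Order: two dice, |sum - 7|, a triangle-ish shape over 0..5
 * (values above the simulated max order presumably get clamped). */
static int random_order(void)
{
	int o = abs(die() + die() - 7);
	return o > MAX_ORDER ? MAX_ORDER : o;
}

/* Lifetime in ticks: eight dice summed approximate a Gauss curve
 * (8..48, mean 28), shifted so the mean lands near
 * <total num pages> * 1.5; the spread factor here is made up. */
static long random_lifetime(long total_pages)
{
	int sum = 0;
	for (int i = 0; i < 8; i++)
		sum += die();
	return total_pages * 3 / 2 + (long)(sum - 28) * (total_pages / 32);
}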

Display:
I start with a white window. Every page allocation draws a black box
starting at the address of the page and as wide as the page is big
(minus 1 pixel for separation from the next page). Every page free
draws a yellow box in place of the black one. Yellow shows where a
page was in use at some point, while white means the page was never
used.
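
Per tick the driver then looks something like this (a sketch tying
the pieces above together; draw_box() is a hypothetical stand-in for
the real drawing code):

#include <stdio.h>

#define BLACK  0
#define YELLOW 1

static void draw_box(long addr, long size, int color)	/* stand-in */
{
	printf("%s: addr %ld, %ld bytes\n",
	       color == BLACK ? "black" : "yellow", addr, size);
}

struct pending { long due, addr; int order; };
static struct pending live[1 << 16];	/* naive due-list */
static int nlive;

static void tick_once(long tick, long total_pages)
{
	/* one new allocation per tick, painted black */
	int order = random_order();
	long addr = alloc_order(order);
	if (addr >= 0 && nlive < (int)(sizeof live / sizeof *live)) {
		draw_box(addr, PAGE << order, BLACK);
		live[nlive++] = (struct pending){
			tick + random_lifetime(total_pages), addr, order };
	}
	/* free whatever is due, painting it yellow */
	for (int i = 0; i < nlive; ) {
		if (live[i].due <= tick) {
			draw_box(live[i].addr, PAGE << live[i].order, YELLOW);
			free_order(live[i].order, live[i].addr);
			live[i] = live[--nlive];
		} else {
			i++;
		}
	}
}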

As time ticks on, the memory fills up; quickly at first, then it comes
to a stop at around 80% filled. And then something interesting
happens: the yellow regions (previously used but now free) start
drifting up. Small pages tend to end up at the lower addresses and big
pages at the higher addresses. The memory defragments itself to some
degree.

http://mrvn.homeip.net/fragment/

Simulating 256MB of RAM, after 1472943 ticks and 530095 4k, 411841 8k,
295296 16k, 176647 32k and 59064 64k allocations you get this:
http://mrvn.homeip.net/fragment/256mb.png

Simulating 1GB of RAM, after 5881185 ticks and 2116671 4k, 1645957 8k,
1176994 16k, 705873 32k and 235690 64k allocations you get this:
http://mrvn.homeip.net/fragment/1gb.png

Regards,
        Goswin
