Message-ID: <6d6a94c50703300738i7b9d3fcn341feb08dca8817d@mail.gmail.com>
Date:	Fri, 30 Mar 2007 22:38:32 +0800
From:	"Aubrey Li" <aubreylee@...il.com>
To:	"Alan Cox" <alan@...rguk.ukuu.org.uk>
Cc:	"David Howells" <dhowells@...hat.com>, vapier.adi@...il.com,
	jie.zhang@...log.com, bryan.wu@...log.com,
	"Andrew Morton" <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nommu arch dont zero the anonymous mapping by adding UNINITIALIZE flag

On 3/30/07, Alan Cox <alan@...rguk.ukuu.org.uk> wrote:
> > I can't find anything in the mmap manual saying that mmap must give
> > zeroed memory. Is there any reason to rely on anon mmap() giving
> > zeroed memory?
>
> It's how all Unix/Linux-like systems behave.
Fair enough.


> You have to clear the memory
> to something to deal with security on any kind of real system, and zero
> is as good a value as any.
>
> > > Personally, I'd prefer to maintain compatibility with MMU-mode wherever
> > > possible, but I'm happy with overrides like the MAP_UNINITIALISED flag
> > > suggested.
> > >
> > Not necessary IMHO.
>
> mmap() for anonymous memory pools should not normally be a hot path,
> because the C library malloc is supposed to show some brains. If you need
> special behaviour (e.g. for performance hacks) then create yourself
> a /dev/zero-like device which just maps uncleared pages. It's not much
> code and it keeps special cases out of the core kernel.
>
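A minimal sketch of how such a device would be used from userspace; the
device name /dev/uzero is made up here, following Alan's suggestion
rather than any existing driver:

#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/* Map 'len' bytes of uncleared memory from a hypothetical /dev/uzero
 * device instead of using an anonymous mapping. */
static void *map_uninitialized(size_t len)
{
	void *p;
	int fd = open("/dev/uzero", O_RDWR);

	if (fd < 0)
		return MAP_FAILED;
	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
	close(fd);
	return p;
}
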
On MMU systems, mmap() of /dev/zero (like anonymous mmap()) gives zeroed
memory essentially for free, because the MMU can back the mapping with
already-zeroed pages.
On no-MMU systems, we have to memset() the region to give zeroed memory,
so the time cost grows with the allocation size.
That is why Bryan's test case shows such a terrible performance result:
---------------------------------------------------------------
The summary is: when I run the test app under "time",

on x86:
real    0m0.066s
user    0m0.008s
sys     0m0.058s

on Blackfin:
real    3m 37.69s
user    0m 0.04s
sys     3m 37.58s
---------------------------------------------------------------
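
Bryan's test case isn't quoted here; a minimal program along the
following lines would exercise the same path (the iteration count and
mapping size below are guesses):

#include <stdio.h>
#include <stddef.h>
#include <sys/mman.h>

int main(void)
{
	int i;

	/* Repeatedly create and tear down anonymous mappings.  On a
	 * no-MMU kernel each mmap() has to memset() the whole region,
	 * so the cost scales with the mapping size. */
	for (i = 0; i < 1000; i++) {
		size_t len = 1024 * 1024;	/* 1 MiB per pass, a guess */
		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		munmap(p, len);
	}
	return 0;
}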
So now the question is: do we keep the same behaviour as the MMU case
but with bad performance, or keep the performance but give up the
zeroing behaviour? Which one is more important?
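
For comparison, with the MAP_UNINITIALISED flag David suggested, code
that genuinely doesn't need zeroed memory could opt out explicitly
(flag name as used in this thread; it is a proposal, not an existing
kernel flag):

	/* Hypothetical opt-out: callers that don't need zeroed memory
	 * say so; everyone else keeps the zero-fill guarantee. */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_UNINITIALISED, -1, 0);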

-Aubrey
