Message-Id: <200801280710.08204.ak@suse.de>
Date:	Mon, 28 Jan 2008 07:10:07 +0100
From:	Andi Kleen <ak@...e.de>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] Only print kernel debug information for OOMs caused by kernel allocations

On Monday 28 January 2008 06:52, Andrew Morton wrote:
> On Wed, 16 Jan 2008 23:24:21 +0100 Andi Kleen <ak@...e.de> wrote:
> > I recently suffered a 20+ minute OOM situation on my desktop where the
> > disk thrashed to death and the machine was completely unresponsive,
> > after some user program decided to grab all memory. It eventually
> > recovered, but left lots of ugly and IMHO misleading messages in the
> > kernel log. Here's a minor improvement.

As a followup: this was with swap over dm-crypt. I've recently heard
about other people having trouble with this too, so this setup seems to
trigger something bad in the VM.

> That information is useful for working out why a userspace allocation
> attempt failed.  If we don't print it, and the application gets killed and
> thus frees a lot of memory, we will just never know why the allocation
> failed.

But it's basically only page faults (direct or indirect) and write() et al.
that do these page cache allocations. Do you really think it is that important
to distinguish these cases individually? In 95+% of all cases it should
be a standard user page fault, which always has the same backtrace.

To figure out why the application really OOMed in those cases you would
need a user-level backtrace, but the message doesn't supply that anyway.

All other cases will still print the full backtrace, so if some kernel
subsystem runs amok it should still be possible to diagnose it.

Please reconsider.

>
> >  struct page *__page_cache_alloc(gfp_t gfp)
> >  {
> > +	struct task_struct *me = current;
> > +	unsigned old = (~me->flags) & PF_USER_ALLOC;
> > +	struct page *p;
> > +
> > +	me->flags |= PF_USER_ALLOC;
> >  	if (cpuset_do_page_mem_spread()) {
> >  		int n = cpuset_mem_spread_node();
> > -		return alloc_pages_node(n, gfp, 0);
> > -	}
> > -	return alloc_pages(gfp, 0);
> > +		p = alloc_pages_node(n, gfp, 0);
> > +	} else
> > +		p = alloc_pages(gfp, 0);
> > +	/* Clear USER_ALLOC if it wasn't set originally */
> > +	me->flags ^= old;
> > +	return p;
> >  }
>
> That's an appreciable amount of new overhead for, at best, a fairly
> marginal benefit.  Perhaps __GFP_USER could be [re|ab]used.

It's a few non-atomic bit operations. Do you really think that is considerable
overhead? Also, it should all be cache-hot already. My guess is that even with
the additional function call it's < 10 cycles more.
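
For reference, the save/restore in the patch really is just a couple of plain
loads, stores, and bit operations on current->flags. A minimal standalone
sketch of the same XOR restore trick (the PF_USER_ALLOC value below is
illustrative only, and the global stands in for current->flags):

	#include <stdio.h>

	#define PF_USER_ALLOC 0x1000u		/* illustrative value */

	static unsigned flags;			/* stands in for current->flags */

	static void page_cache_alloc_sketch(void)
	{
		/* old == PF_USER_ALLOC iff the bit was NOT already set on entry */
		unsigned old = (~flags) & PF_USER_ALLOC;

		flags |= PF_USER_ALLOC;
		/* ... the actual page allocation would happen here ... */

		/* XOR clears the bit only when it was clear on entry */
		flags ^= old;
	}

	int main(void)
	{
		page_cache_alloc_sketch();
		printf("bit was clear, after: %#x\n", flags);	/* restored to 0 */

		flags = PF_USER_ALLOC;
		page_cache_alloc_sketch();
		printf("bit was set, after: %#x\n", flags);	/* left set */
		return 0;
	}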

> Alternatively: if we've printed the diagnostic on behalf of this process
> and then decided to kill it, set some flag to prevent us from printing it
> again.

Do you really think that would help?  I thought these messages usually
came from different processes.
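
For concreteness, Andrew's alternative would amount to something like the
following hypothetical check in the OOM report path; PF_OOM_PRINTED is an
invented flag name here, not an existing kernel flag:

	#include <linux/sched.h>	/* struct task_struct, PF_* flags */
	#include <linux/kernel.h>	/* dump_stack() */

	#define PF_OOM_PRINTED 0x80000000u	/* invented; assumes this bit is free */

	static void maybe_dump_oom_state(struct task_struct *tsk)
	{
		if (tsk->flags & PF_OOM_PRINTED)
			return;			/* already reported for this task */
		tsk->flags |= PF_OOM_PRINTED;
		dump_stack();			/* print the kernel backtrace once */
	}

As noted above, this only suppresses repeats from the same task; if successive
reports come from different processes, the flag never fires.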

-Andi
