Date:	Wed, 16 Mar 2016 09:43:57 +0100
From:	Michal Hocko <mhocko@...nel.org>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Vladimir Davydov <vdavydov@...tuozzo.com>,
	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
	kernel-team@...com
Subject: Re: [PATCH] mm: memcontrol: reclaim and OOM kill when shrinking
 memory.max below usage

On Tue 15-03-16 22:18:48, Johannes Weiner wrote:
> On Fri, Mar 11, 2016 at 12:19:31PM +0300, Vladimir Davydov wrote:
> > On Fri, Mar 11, 2016 at 09:18:25AM +0100, Michal Hocko wrote:
> > > On Thu 10-03-16 15:50:14, Johannes Weiner wrote:
> > ...
> > > > @@ -5037,9 +5040,36 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
> > > >  	if (err)
> > > >  		return err;
> > > >  
> > > > -	err = mem_cgroup_resize_limit(memcg, max);
> > > > -	if (err)
> > > > -		return err;
> > > > +	xchg(&memcg->memory.limit, max);
> > > > +
> > > > +	for (;;) {
> > > > +		unsigned long nr_pages = page_counter_read(&memcg->memory);
> > > > +
> > > > +		if (nr_pages <= max)
> > > > +			break;
> > > > +
> > > > +		if (signal_pending(current)) {
> > > 
> > > Didn't you want fatal_signal_pending here? At least the changelog
> > > suggests that.
> > 
> > I suppose the user might want to interrupt the write by hitting CTRL-C.
> 
> Yeah. This is the same thing we do for the current limit setting loop.

Yes, we do, but in that case the operation is canceled without any
change. Re-reading the changelog, I realize I misread the "we run out of
OOM victims and there's only unreclaimable memory left, or the task
writing to memory.max is killed" part and thought the task writing to
memory.max gets OOM killed.
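
To make the distinction concrete, here is a minimal sketch of the loop
with the signal check spelled out; the -EINTR error path is my
assumption, since the quoted hunk above stops right after the check:

	for (;;) {
		unsigned long nr_pages = page_counter_read(&memcg->memory);

		if (nr_pages <= max)
			break;

		/*
		 * signal_pending() reacts to any pending signal, so a user
		 * hitting CTRL-C (SIGINT) can abort the write.
		 * fatal_signal_pending() would only trigger for signals that
		 * are about to kill the task, which is what the changelog
		 * wording suggested to me.
		 */
		if (signal_pending(current)) {
			err = -EINTR;	/* assumed error path, not shown in the hunk */
			break;
		}

		/* ... reclaim and, if needed, OOM kill until usage <= max ... */
	}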
 
> > Come to think of it, shouldn't we restore the old limit and return EBUSY
> > if we failed to reclaim enough memory?
> 
> I suspect it's very rare that it would fail. But even in that case
> it's probably better to at least not allow new charges past what the
> user requested, even if we can't push the level back far enough.

I guess you are right. This guarantee is indeed useful.
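
FWIW, from the user side the contract being discussed would look roughly
like the sketch below. This is a hypothetical userspace illustration, not
part of the patch; the cgroup path, the 100M value, and the EINTR return
are my assumptions for the example.

	/*
	 * Hypothetical illustration: lower memory.max on a cgroup v2 group
	 * and handle the write being interrupted.
	 */
	#include <errno.h>
	#include <fcntl.h>
	#include <signal.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	static void on_sigint(int sig)
	{
		(void)sig;	/* no SA_RESTART, so a blocked write returns EINTR */
	}

	int main(void)
	{
		struct sigaction sa = { .sa_handler = on_sigint };
		const char *path = "/sys/fs/cgroup/test/memory.max";	/* assumed path */
		const char *val = "104857600\n";			/* 100M, assumed value */
		int fd;

		sigemptyset(&sa.sa_mask);
		sigaction(SIGINT, &sa, NULL);

		fd = open(path, O_WRONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/*
		 * If current usage is above the new value, the kernel reclaims
		 * (and may OOM kill) before the write returns.  With
		 * signal_pending() in the kernel loop, CTRL-C would presumably
		 * surface here as a failed write with EINTR.  The limit itself
		 * is set before the reclaim loop starts, so new charges cannot
		 * go past the requested maximum either way.
		 */
		if (write(fd, val, strlen(val)) < 0) {
			if (errno == EINTR)
				fprintf(stderr, "interrupted before usage dropped below the new limit\n");
			else
				perror("write");
			close(fd);
			return 1;
		}

		close(fd);
		return 0;
	}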
-- 
Michal Hocko
SUSE Labs
