Message-ID: <20170104140314.37sg3ql2aoqvpgq5@techsingularity.net>
Date:   Wed, 4 Jan 2017 14:03:15 +0000
From:   Mel Gorman <mgorman@...hsingularity.net>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Linux Kernel <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH 4/4] mm, page_alloc: Add a bulk page allocator

On Wed, Jan 04, 2017 at 02:48:44PM +0100, Jesper Dangaard Brouer wrote:
> On Wed,  4 Jan 2017 11:10:49 +0000
> Mel Gorman <mgorman@...hsingularity.net> wrote:
> > The API is not guaranteed to return the requested number of pages and
> > may fail if the preferred allocation zone has limited free memory,
> > the cpuset changes during the allocation or page debugging decides
> > to fail an allocation. It's up to the caller to request more pages
> > in batch if necessary.
> 
> I generally like it, thanks! :-)
> 

No problem.
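Roughly, the caller side would look like the sketch below. The
alloc_pages_bulk() name, signature and return convention (number of
pages placed on the list) are assumed from this series rather than
quoted verbatim, so treat it as illustration only:

	/*
	 * Hypothetical caller: keep requesting the remainder until the
	 * bulk allocator returns nothing, then let the caller fall back
	 * to single-page allocations. Sketch only; signature assumed.
	 */
	static unsigned long fill_cache(gfp_t gfp_mask, unsigned long want,
					struct list_head *list)
	{
		unsigned long got = 0;

		while (got < want) {
			unsigned long ret;

			/* order-0 assumed for this example */
			ret = alloc_pages_bulk(gfp_mask, 0, want - got, list);
			if (!ret)
				break;	/* watermark, cpuset or debug failure */
			got += ret;
		}

		return got;
	}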

> > +	/*
> > +	 * Only attempt a batch allocation if watermarks on the preferred zone
> > +	 * are safe.
> > +	 */
> > +	zone = ac.preferred_zoneref->zone;
> > +	if (!zone_watermark_fast(zone, order, zone->watermark[ALLOC_WMARK_HIGH] + nr_pages,
> > +				 zonelist_zone_idx(ac.preferred_zoneref), alloc_flags))
> > +		goto failed;
> > +
> > +	/* Attempt the batch allocation */
> > +	migratetype = ac.migratetype;
> > +
> > +	local_irq_save(flags);
> 
> It would be a win if we could either use local_irq_{disable,enable} or
> preempt_{disable,enable} here, by dictating it can only be used from
> irq-safe context (like you did in patch 3).
> 

This was a stupid mistake: a botched rebase on top of patch 3 that
wasn't properly tested. I should have removed the IRQ disabling entirely
and made the allocator usable only from non-IRQ context, which would
have kept the motivation of patch 3 in place. It still illustrates
the general shape at least. For extra safety, I should force it to return
just a single page if called from interrupt context.
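Something like the following guard at the top of the function would do
it (untested sketch):

	/*
	 * Untested sketch: degrade to a single-page request when called
	 * from IRQ context so the non-IRQ fast path assumptions hold.
	 */
	if (WARN_ON_ONCE(in_interrupt()))
		nr_pages = 1;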

Is bulk allocation from IRQ context a requirement? If so, the motivation
for patch 3 disappears, which is a pity, but IRQ safety has a price. The
shape of V2 depends on the answer.

> 
> > +	pcp = &this_cpu_ptr(zone->pageset)->pcp;
> > +	pcp_list = &pcp->lists[migratetype];
> > +
> > +	while (nr_pages) {
> > +		page = __rmqueue_pcplist(zone, order, gfp_mask, migratetype,
> > +								cold, pcp, pcp_list);
> > +		if (!page)
> > +			break;
> > +
> > +		nr_pages--;
> > +		alloced++;
> > +		list_add(&page->lru, alloc_list);
> > +	}
> > +
> > +	if (!alloced) {
> > +		local_irq_restore(flags);
> > +		preempt_enable();
> 
> The preempt_enable here looks wrong.
> 

It is, because I screwed up the rebase.
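If the IRQ disabling stays, the failure path should only undo the
local_irq_save() taken above; roughly this (sketch only, assuming it
bails out via the existing "failed" label from earlier in the function):

	if (!alloced) {
		/*
		 * Only undo the local_irq_save() above. The stray
		 * preempt_enable() had no matching preempt_disable().
		 */
		local_irq_restore(flags);
		goto failed;
	}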

-- 
Mel Gorman
SUSE Labs
