Message-ID: <87c977f7-1c4d-a21d-8b7b-2d2b8fb317bb@intel.com>
Date:   Fri, 13 Dec 2019 15:06:50 -0700
From:   Dave Jiang <dave.jiang@...el.com>
To:     Andrew Morton <akpm@...ux-foundation.org>, axboe@...nel.dk
Cc:     dmaengine@...r.kernel.org, linux-kernel@...r.kernel.org,
        vkoul@...nel.org, dan.j.williams@...el.com, tony.luck@...el.com,
        jing.lin@...el.com, ashok.raj@...el.com, sanjay.k.kumar@...el.com,
        megha.dey@...el.com, jacob.jun.pan@...el.com, yi.l.liu@...el.com,
        tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
        fenghua.yu@...el.com, hpa@...or.com
Subject: Re: [PATCH RFC v2 04/14] mm: create common code from request
 allocation based from blk-mq code

On 12/12/19 5:43 PM, Andrew Morton wrote:
> On Thu, 12 Dec 2019 11:24:34 -0700 Dave Jiang <dave.jiang@...el.com> wrote:
> 
>> Move the allocation of requests from compound pages to a common function
>> to allow usages by other callers.
> 
> What other callers are expected?

I'm introducing a user in the dmaengine subsystem. So the callers will be 
the block and dmaengine subsystems so far.

> 
>> Since the routine has more to do with
>> memory allocation and management, it is moved to be exported by the
>> mempool.h and be part of mm subsystem.
>>
> 
> Hm, this move doesn't seem to fit very well.  But perhaps it's close
> enough.

I'm open to suggestions.

> 
>> --- a/mm/Makefile
>> +++ b/mm/Makefile
>> @@ -42,7 +42,7 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
>>   			   mm_init.o mmu_context.o percpu.o slab_common.o \
>>   			   compaction.o vmacache.o \
>>   			   interval_tree.o list_lru.o workingset.o \
>> -			   debug.o gup.o $(mmu-y)
>> +			   debug.o gup.o request_alloc.o $(mmu-y)
> 
> Now there's a regression.  We're adding a bunch of unused code to a
> CONFIG_BLOCK=n kernel.

I will introduce a Kconfig symbol and have block and dmaengine select it 
to build this new code in only when needed.
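A hedged sketch of what that could look like (the symbol name here is my 
invention, not settled):

```
# mm/Kconfig: hidden symbol, never set directly, only selected by users
config REQUEST_ALLOC
	bool

# mm/Makefile: build the helper only when some subsystem selected it
obj-$(CONFIG_REQUEST_ALLOC) += request_alloc.o

# block/Kconfig (and similarly drivers/dma/Kconfig) would then add
# "select REQUEST_ALLOC" under the option that needs the helper.
```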

> 
>>
>> ...
>>
>> +void request_from_pages_free(struct list_head *page_list)
>>
>> ...
>>
>> +int request_from_pages_alloc(void *ctx, unsigned int depth, size_t rq_size,
>> +			     struct list_head *page_list, int max_order,
>> +			     int node,
>> +			     void (*assign)(void *ctx, void *req, int idx))
> 
> I find these function names hard to understand.  Are they well chosen?

The names could probably be better. Maybe
context_alloc_from_pages() and context_free_from_pages()?

For background: I saw a block of code in blk-mq that I can reuse for the 
new dmaengine request allocation. That code pre-allocates requests by 
carving them out of compound pages. I contacted Jens and he was OK with 
moving it out of blk-mq to a common location. Maybe the name "request" is 
too specific and doesn't make sense for a "general" allocator helper.

> 
> Some documentation would help.  These are global, exported-to-modules
> API functions and they really should be fully documented.

Yes, I will add proper documentation for these functions.

Jens, since you are the original author of this code, do you mind 
providing some commentary on the quirks of this code block? I can do my 
best to try to explain it, but you probably know the code much better 
than I do. Thanks!

> 
>> +{
>> +	size_t left;
>> +	unsigned int i, j, entries_per_page;
>> +
>> +	left = rq_size * depth;
>> +
>> +	for (i = 0; i < depth; ) {
> 
> "depth" of what?
> 
>> +		int this_order = max_order;
>> +		struct page *page;
>> +		int to_do;
>> +		void *p;
>> +
>> +		while (this_order && left < order_to_size(this_order - 1))
>> +			this_order--;
>> +
>> +		do {
>> +			page = alloc_pages_node(node,
>> +						GFP_NOIO | __GFP_NOWARN |
>> +						__GFP_NORETRY | __GFP_ZERO,
>> +						this_order);
>> +			if (page)
>> +				break;
>> +			if (!this_order--)
>> +				break;
>> +			if (order_to_size(this_order) < rq_size)
>> +				break;
>> +		} while (1);
> 
> What the heck is all the above trying to do?  Some explanatory comments
> are needed, methinks.
> 
>> +		if (!page)
>> +			goto fail;
>> +
>> +		page->private = this_order;
>> +		list_add_tail(&page->lru, page_list);
>> +
>> +		p = page_address(page);
>> +		/*
>> +		 * Allow kmemleak to scan these pages as they contain pointers
>> +		 * to additional allocations like via ops->init_request().
>> +		 */
>> +		kmemleak_alloc(p, order_to_size(this_order), 1, GFP_NOIO);
>> +		entries_per_page = order_to_size(this_order) / rq_size;
>> +		to_do = min(entries_per_page, depth - i);
>> +		left -= to_do * rq_size;
>> +		for (j = 0; j < to_do; j++) {
>> +			assign((void *)ctx, p, i);
>> +			p += rq_size;
>> +			i++;
>> +		}
>> +	}
>> +
>> +	return i;
>> +
>> +fail:
>> +	request_from_pages_free(page_list);
>> +	return -ENOMEM;
>> +}
>> +EXPORT_SYMBOL_GPL(request_from_pages_alloc);
