Date:   Tue, 23 Feb 2021 11:14:10 -0800
From:   Dave Hansen <dave.hansen@...el.com>
To:     Jarkko Sakkinen <jarkko@...nel.org>, linux-sgx@...r.kernel.org
Cc:     haitao.huang@...el.com, dan.j.williams@...el.com,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86/sgx: Add a basic NUMA allocation scheme to
 sgx_alloc_epc_page()

On 2/21/21 4:54 PM, Dave Hansen wrote:
> Instead of having a for-each-section loop, I'd make it for-each-node ->
> for-each-section.  Something like:
> 
> 	for (i = 0; i < num_possible_nodes(); i++) {
> 		node = (numa_node_id() + i) % num_possible_nodes();
> 
> 		if (!node_isset(node, sgx_numa_mask))
> 			continue;
> 
> 		list_for_each_entry(section, &sgx_numa_nodes[node],
> 				    section_list) {
> 			__sgx_alloc_epc_page_from_section(section);
> 		}
> 	}

OK, here's an almost completely fleshed-out loop:

	page = NULL;
	node = numa_node_id();
	start_node = node;
	while (1) {
		list_for_each_entry(section, &sgx_numa_nodes[node],
				    section_list) {
			page = __sgx_alloc_epc(section);
			if (page)
				break;
		}
		if (page)
			break;
		
		/*
		 * EPC allocation failed on 'node'.  Fall
		 * back with round-robin to other nodes with
		 * EPC:
		 */
		node = next_node_in(node, sgx_numa_mask);

		/* Give up if allocation wraps back to the start: */
		if (node == start_node)
			break;
	}
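
Just to spell out what that loop assumes: a per-node list of EPC
sections plus a nodemask of the nodes that actually have EPC.  Only
sgx_numa_nodes, sgx_numa_mask and section_list come from the loop
itself; the static array sizing and the struct layout below are just a
sketch of one way it could be wired up:

	#include <linux/list.h>
	#include <linux/nodemask.h>

	/* Nodes that have at least one EPC section: */
	static nodemask_t sgx_numa_mask;

	/* One list of EPC sections per possible NUMA node: */
	static struct list_head sgx_numa_nodes[MAX_NUMNODES];

	struct sgx_epc_section {
		/* ...existing fields... */

		/* Entry in sgx_numa_nodes[] for this section's node: */
		struct list_head section_list;
	};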

This loop will:
1. Always start close to the CPU that started the allocation
2. Always spread the allocations out among nodes evenly, never
   concentrating allocations on node 0, for instance.  (The starting
   node could also be chosen with node_random() for a similar effect,
   but starting at the local node probably has slightly better default
   NUMA behavior.)
3. Efficiently look only among nodes that actually have EPC, thanks to
   'sgx_numa_mask' (one way to populate it is sketched below this list).
4. Have no special case for the first allocation.  All allocations will
   be satisfied from this unified loop.
5. Compile down to no loop on CONFIG_NUMA=n systems, where there is
   only one node to try.
6. Be guaranteed to make forward progress even if preempted and
   numa_node_id() changes in the loop.
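
To make #3 concrete, the per-node lists and the mask have to be filled
in while the EPC sections are enumerated.  Something along these lines
would do it (the function name is made up; everything else uses the
names from the loop above):

	/*
	 * Add a section to its node's list at enumeration time and
	 * record that the node has EPC.
	 */
	static void sgx_numa_add_section(struct sgx_epc_section *section,
					 int nid)
	{
		/*
		 * Lazily set up the list head the first time a node
		 * gets an EPC section:
		 */
		if (!node_isset(nid, sgx_numa_mask)) {
			INIT_LIST_HEAD(&sgx_numa_nodes[nid]);
			node_set(nid, sgx_numa_mask);
		}

		list_add_tail(&section->section_list,
			      &sgx_numa_nodes[nid]);
	}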

BTW, I think the name of __sgx_alloc_epc_page_from_section() can be
shortened.  It's passed a section and returns a page, so both of those
could be removed from the name.
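
In other words, something like this for the prototype.  The shorter
name matches the __sgx_alloc_epc() call I used in the loop above; take
it as one possibility, not a requirement:

	/* Before: */
	static struct sgx_epc_page *
	__sgx_alloc_epc_page_from_section(struct sgx_epc_section *section);

	/* After: it takes a section and returns a page, so drop both: */
	static struct sgx_epc_page *
	__sgx_alloc_epc(struct sgx_epc_section *section);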
