Message-ID: <20240927171838.GA15877@redhat.com>
Date: Fri, 27 Sep 2024 19:18:39 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Liao Chang <liaochang1@...wei.com>
Cc: ak@...ux.intel.com, mhiramat@...nel.org, andrii@...nel.org,
	peterz@...radead.org, mingo@...hat.com, acme@...nel.org,
	namhyung@...nel.org, mark.rutland@....com,
	alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
	irogers@...gle.com, adrian.hunter@...el.com,
	kan.liang@...ux.intel.com, linux-kernel@...r.kernel.org,
	linux-trace-kernel@...r.kernel.org,
	linux-perf-users@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [PATCH v2] uprobes: Improve the usage of xol slots for better
 scalability

On 09/27, Liao Chang wrote:
>
> +int recycle_utask_slot(struct uprobe_task *utask, struct xol_area *area)
> +{
> +	int slot = UINSNS_PER_PAGE;
> +
> +	/*
> +	 * Ensure that the slot is not in use on other CPU. However, this
> +	 * check is unnecessary when called in the context of an exiting
> +	 * thread. See xol_free_insn_slot() called from uprobe_free_utask()
> +	 * for more details.
> +	 */
> +	if (test_and_put_task_slot(utask)) {
> +		list_del(&utask->gc);
> +		clear_bit(utask->insn_slot, area->bitmap);
> +		atomic_dec(&area->slot_count);
> +		utask->insn_slot = UINSNS_PER_PAGE;
> +		refcount_set(&utask->slot_ref, 1);

This lacks a memory barrier; the CPU can reorder the last two stores,

		refcount_set(&utask->slot_ref, 1);
		utask->insn_slot = UINSNS_PER_PAGE;

so the "utask->insn_slot == UINSNS_PER_PAGE" check in xol_get_insn_slot()
can produce a false negative.
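
(For illustration only, not part of the posted patch: one way to enforce the
ordering would be a write barrier between the two stores, paired with a read
barrier on the lookup side before ->insn_slot is checked, e.g.

		utask->insn_slot = UINSNS_PER_PAGE;
		/* make the reset visible before slot_ref is published */
		smp_wmb();
		refcount_set(&utask->slot_ref, 1);

with a matching smp_rmb() after test_and_get_task_slot() succeeds. Whether a
plain barrier or an acquire/release pair fits better depends on the
surrounding code.)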

> +static unsigned long xol_get_insn_slot(struct uprobe_task *utask,
> +				       struct uprobe *uprobe)
>  {
>  	struct xol_area *area;
>  	unsigned long xol_vaddr;
> @@ -1665,16 +1740,46 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
>  	if (!area)
>  		return 0;
>
> -	xol_vaddr = xol_take_insn_slot(area);
> -	if (unlikely(!xol_vaddr))
> +	/*
> +	 * The racing on the utask associated slot_ref can occur unless the
> +	 * area runs out of slots. This isn't a common case. Even if it does
> +	 * happen, the scalability bottleneck will shift to another point.
> +	 */

I don't understand this comment; I guess it refers to the race with
recycle_utask_slot() above.

> +	if (!test_and_get_task_slot(utask))
>  		return 0;

No, we can't do this. xol_get_insn_slot() should never fail.

OK, OK, currently xol_get_insn_slot() _can_ fail, but only if get_xol_area()
fails to allocate memory, which should "never" happen and which we can't do
anything about anyway.

But it certainly must not fail just because it races with another thread; that is insane.
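
(Just to illustrate, not a concrete proposal: losing the race should only
cost the caller the fast path, something along the lines of

		if (!test_and_get_task_slot(utask))
			/* contended: take the old slow path, don't fail */
			return xol_take_insn_slot(area);

modulo whatever else the rest of the function has to do on the slow path.
Whether the old xol_take_insn_slot() logic can simply be kept as a fallback
depends on the rest of the patch, but returning 0 because another thread won
the race is not an option.)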

And this patch changes functions which are themselves asking for cleanups.
I'll try to send a couple of simple patches on Monday.

Oleg.

