Date:   Fri, 28 Aug 2020 14:52:36 +0200
From:   peterz@...radead.org
To:     Masami Hiramatsu <mhiramat@...nel.org>
Cc:     linux-kernel@...r.kernel.org, Eddy_Wu@...ndmicro.com,
        x86@...nel.org, davem@...emloft.net, rostedt@...dmis.org,
        naveen.n.rao@...ux.ibm.com, anil.s.keshavamurthy@...el.com,
        linux-arch@...r.kernel.org, cameron@...dycamel.com,
        oleg@...hat.com, will@...nel.org, paulmck@...nel.org
Subject: Re: [PATCH v4 20/23] [RFC] kprobes: Remove task scan for updating
 kretprobe_instance


If you do this, can you merge this into the previous patch and then
delete the sched try_to_invoke..() patch?

A few comments below.

On Fri, Aug 28, 2020 at 09:30:17PM +0900, Masami Hiramatsu wrote:


> +static nokprobe_inline struct kretprobe *get_kretprobe(struct kretprobe_instance *ri)
> +{
> +	/* rph->rp can be updated by unregister_kretprobe() on other cpu */
> +	smp_rmb();
> +	return ri->rph->rp;
> +}

That ordering doesn't really make sense; ordering requires at least two
variables, and here there is only one. That said, 'get' functions
usually need ACQUIRE ordering to make sure subsequent accesses are
indeed ordered after the load.
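
Something like the below (untested sketch; assumes the unregister side
does a matching RELEASE store when it clears rph->rp) would at least
give the getter well-defined semantics:

static nokprobe_inline struct kretprobe *get_kretprobe(struct kretprobe_instance *ri)
{
	/*
	 * Pairs with a RELEASE store of rph->rp on the unregister
	 * side; everything done after observing rp is ordered after
	 * this load.
	 */
	return smp_load_acquire(&ri->rph->rp);
}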

>  #else /* CONFIG_KRETPROBES */
>  static inline void arch_prepare_kretprobe(struct kretprobe *rp,
>  					struct pt_regs *regs)

> @@ -1922,6 +1869,7 @@ unsigned long __kretprobe_trampoline_handler(struct pt_regs *regs,
>  	kprobe_opcode_t *correct_ret_addr = NULL;
>  	struct kretprobe_instance *ri = NULL;
>  	struct llist_node *first, *node;
> +	struct kretprobe *rp;
>  
>  	first = node = current->kretprobe_instances.first;
>  	while (node) {
> @@ -1951,12 +1899,13 @@ unsigned long __kretprobe_trampoline_handler(struct pt_regs *regs,
>  	/* Run them..  */
>  	while (first) {
>  		ri = container_of(first, struct kretprobe_instance, llist);
> +		rp = get_kretprobe(ri);
>  		node = first->next;

(A)

> -		if (ri->rp && ri->rp->handler) {
> -			__this_cpu_write(current_kprobe, &ri->rp->kp);
> +		if (rp && rp->handler) {
> +			__this_cpu_write(current_kprobe, &rp->kp);
>  			ri->ret_addr = correct_ret_addr;
> -			ri->rp->handler(ri, regs);
> +			rp->handler(ri, regs);
>  			__this_cpu_write(current_kprobe, &kprobe_busy);
>  		}

So here we're using get_kretprobe(), but what is to stop anybody from
doing unregister_kretprobe() right at (A), such that we did observe our
rp, but by the time we use it, it's a goner?
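
That is, consider (with "owner frees rp" standing in for whatever owns
the kretprobe going away, e.g. module unload):

	CPU0					CPU1
	rp = get_kretprobe(ri);
						unregister_kretprobes();
						/* owner frees rp */
	rp->handler(ri, regs);	/* use-after-free */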


> +	rp->rph = kzalloc(sizeof(struct kretprobe_holder), GFP_KERNEL);
> +	rp->rph->rp = rp;

I think you'll need to check the allocation succeeded, no? :-)
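
Something like this, I suppose (sketch; assuming the usual -ENOMEM
return convention for register_kretprobe()):

	rp->rph = kzalloc(sizeof(struct kretprobe_holder), GFP_KERNEL);
	if (!rp->rph)
		return -ENOMEM;
	rp->rph->rp = rp;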


> @@ -2114,16 +2065,20 @@ void unregister_kretprobes(struct kretprobe **rps, int num)
>  	if (num <= 0)
>  		return;
>  	mutex_lock(&kprobe_mutex);
> -	for (i = 0; i < num; i++)
> +	for (i = 0; i < num; i++) {
>  		if (__unregister_kprobe_top(&rps[i]->kp) < 0)
>  			rps[i]->kp.addr = NULL;
> +		rps[i]->rph->rp = NULL;
> +	}
> +	/* Ensure the rph->rp updated after this */
> +	smp_wmb();
>  	mutex_unlock(&kprobe_mutex);

That ordering is dodgy again; those barriers don't help anything if
someone else is already at (A) above.

>  
>  	synchronize_rcu();

This one might help; it means we can do rcu_read_lock() around
get_kretprobe() and its usage. Can we call rp->handler() under RCU?
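
If that works, the handler loop could become something like (rough
sketch; assumes rph->rp gets the __rcu treatment and the unregister
side publishes NULL with rcu_assign_pointer() instead of the plain
store + smp_wmb()):

	rcu_read_lock();
	rp = rcu_dereference(ri->rph->rp);
	if (rp && rp->handler) {
		__this_cpu_write(current_kprobe, &rp->kp);
		ri->ret_addr = correct_ret_addr;
		rp->handler(ri, regs);
		__this_cpu_write(current_kprobe, &kprobe_busy);
	}
	rcu_read_unlock();

with the existing synchronize_rcu() then guaranteeing nobody still sees
the old rp by the time the instances are freed.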

>  	for (i = 0; i < num; i++) {
>  		if (rps[i]->kp.addr) {
>  			__unregister_kprobe_bottom(&rps[i]->kp);
> -			cleanup_rp_inst(rps[i]);
> +			free_rp_inst(rps[i]);
>  		}
>  	}
>  }
