Date:   Mon, 22 Mar 2021 20:21:37 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Florent Revest <revest@...omium.org>
Cc:     bpf@...r.kernel.org, ast@...nel.org, daniel@...earbox.net,
        andrii@...nel.org, yhs@...com, kpsingh@...nel.org,
        jackmanb@...omium.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next 2/5] bpf: Add a bpf_snprintf helper

On Wed, Mar 10, 2021 at 11:02:08PM +0100, Florent Revest wrote:
>  
> +struct bpf_snprintf_buf {
> +	char buf[MAX_SNPRINTF_MEMCPY][MAX_SNPRINTF_STR_LEN];
> +};
> +static DEFINE_PER_CPU(struct bpf_snprintf_buf, bpf_snprintf_buf);
> +static DEFINE_PER_CPU(int, bpf_snprintf_buf_used);
> +
> +BPF_CALL_5(bpf_snprintf, char *, out, u32, out_size, char *, fmt, u64 *, args,
> +	   u32, args_len)
> +{
> +	int err, i, buf_used, copy_size, fmt_cnt = 0, memcpy_cnt = 0;
> +	u64 params[MAX_SNPRINTF_VARARGS];
> +	struct bpf_snprintf_buf *bufs;
> +
> +	buf_used = this_cpu_inc_return(bpf_snprintf_buf_used);
> +	if (WARN_ON_ONCE(buf_used > 1)) {

this can trigger only if the helper itself gets preempted and
another bpf prog runs on the same cpu and calls into this helper
again, right?
If so, how about adding preempt_disable here to avoid that case?
It won't prevent the case where a kprobe is inside the snprintf core,
so the counter is still needed, but it wouldn't trigger by accident.
Also, since the bufs are not always used, how about grabbing the
buffers only when %p or %s is seen in fmt?
After snprintf() is done, it would conditionally do:
if (bufs_were_used) {
   this_cpu_dec(bpf_snprintf_buf_used);
   preempt_enable();
}
This way a simple bpf_snprintf won't ever hit EBUSY.
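Something along these lines, i.e. a rough sketch rather than the actual
patch (fmt_needs_scratch() is a made-up helper that scans fmt for %s / %p
up front):

BPF_CALL_5(bpf_snprintf, char *, out, u32, out_size, char *, fmt, u64 *, args,
	   u32, args_len)
{
	/* fmt_needs_scratch() is hypothetical: true if fmt contains %s or %p */
	bool scratch = fmt_needs_scratch(fmt);
	struct bpf_snprintf_buf *bufs = NULL;
	int err = 0;

	if (scratch) {
		/* pin the cpu so another prog can't race for the buffers */
		preempt_disable();
		if (this_cpu_inc_return(bpf_snprintf_buf_used) > 1) {
			err = -EBUSY;
			goto out;
		}
		bufs = this_cpu_ptr(&bpf_snprintf_buf);
	}

	/* ... convert %s / %pi* args via bufs and run snprintf() as before ... */

out:
	if (scratch) {
		this_cpu_dec(bpf_snprintf_buf_used);
		preempt_enable();
	}
	return err;
}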

> +		err = -EBUSY;
> +		goto out;
> +	}
> +
> +	bufs = this_cpu_ptr(&bpf_snprintf_buf);
> +
> +	/*
> +	 * The verifier has already done most of the heavy lifting for us in
> +	 * check_bpf_snprintf_call. We know that fmt is well formatted and that
> +	 * args_len is valid. The only task left is to convert some of the
> +	 * arguments. For the %s and %pi* specifiers, we need to read buffers
> +	 * from a kernel address during the helper call.
> +	 */
> +	for (i = 0; fmt[i] != '\0'; i++) {
> +		if (fmt[i] != '%')
> +			continue;
> +
> +		if (fmt[i + 1] == '%') {
> +			i++;
> +			continue;
> +		}
> +
> +		/* fmt[i] != 0 && fmt[last] == 0, so we can access fmt[i + 1] */
> +		i++;
> +
> +		/* skip the optional "[0 +-][num]" width formatting field */
> +		while (fmt[i] == '0' || fmt[i] == '+'  || fmt[i] == '-' ||
> +		       fmt[i] == ' ')
> +			i++;
> +		if (fmt[i] >= '1' && fmt[i] <= '9') {
> +			i++;
> +			while (fmt[i] >= '0' && fmt[i] <= '9')
> +				i++;
> +		}
> +
> +		if (fmt[i] == 's') {
> +			void *unsafe_ptr = (void *)(long)args[fmt_cnt];
> +
> +			err = strncpy_from_kernel_nofault(bufs->buf[memcpy_cnt],
> +							  unsafe_ptr,
> +							  MAX_SNPRINTF_STR_LEN);
> +			if (err < 0)
> +				bufs->buf[memcpy_cnt][0] = '\0';
> +			params[fmt_cnt] = (u64)(long)bufs->buf[memcpy_cnt];

how about a single char buf[512] instead?
Instead of memcpy_cnt++, remember how many bytes of buf were used and
copy the next arg after that offset.
The scratch space would be used more efficiently.
The helper would potentially return ENOSPC if the first string printed
via %s consumed most of the 512 bytes and the second string doesn't fit.
But the verifier-time if (memcpy_cnt >= MAX_SNPRINTF_MEMCPY) check can
be removed, and ten small %s strings will work fine.
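Roughly like this (again just a sketch with made-up names; on success
strncpy_from_kernel_nofault() returns the copied length including the
trailing NUL, so buf_off can advance by its return value):

#define BPF_SNPRINTF_BUF_SZ 512

struct bpf_snprintf_scratch {
	char buf[BPF_SNPRINTF_BUF_SZ];
};
static DEFINE_PER_CPU(struct bpf_snprintf_scratch, bpf_snprintf_scratch);

	/* in the %s case, with buf_off starting at 0 for each helper call: */
	char *buf = this_cpu_ptr(&bpf_snprintf_scratch)->buf;
	int len;

	len = strncpy_from_kernel_nofault(buf + buf_off, unsafe_ptr,
					  BPF_SNPRINTF_BUF_SZ - buf_off);
	if (len < 0) {
		/* unreadable source string: print it as an empty string */
		buf[buf_off] = '\0';
		len = 1;
	} else if (len == BPF_SNPRINTF_BUF_SZ - buf_off) {
		/* this string doesn't fit in what's left of the 512 bytes */
		err = -ENOSPC;
		goto out;
	}
	params[fmt_cnt] = (u64)(long)(buf + buf_off);
	buf_off += len;		/* len includes the trailing NUL */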

We could allocate a page per-cpu when this helper is used by a prog and
free that page when all progs using bpf_snprintf are unloaded.
But the extra complexity is probably not worth it. I would start with
512 bytes per-cpu; that is going to be enough for most users.

Overall looks great. Cannot wait for v2 :)
