Message-ID: <daa3f02a-c982-4a7a-afcd-41f5e9b2f79c@linux.dev>
Date: Fri, 11 Apr 2025 20:47:36 -0700
From: Martin KaFai Lau <martin.lau@...ux.dev>
To: Jordan Rife <jordan@...fe.io>
Cc: Aditi Ghag <aditi.ghag@...valent.com>,
 Daniel Borkmann <daniel@...earbox.net>,
 Willem de Bruijn <willemdebruijn.kernel@...il.com>,
 Kuniyuki Iwashima <kuniyu@...zon.com>, bpf@...r.kernel.org,
 netdev@...r.kernel.org
Subject: Re: [PATCH v2 bpf-next 2/5] bpf: udp: Propagate ENOMEM up from
 bpf_iter_udp_batch

On 4/11/25 4:31 PM, Jordan Rife wrote:
>> The resized == true case will have a similar issue. Meaning the next
>> bpf_iter_udp_batch() will end up skipping the remaining sk in that bucket, e.g.
>> the partial-bucket batch has been consumed, so cur_sk == end_sk but
>> st_bucket_done == false and bpf_iter_udp_resume() returns NULL. It is sort of a
>> regression from the current "offset" implementation for this case. Any thought
>> on how to make it better?
> 
> Are you referring to the case where the bucket grows in size so much
> between releasing and reacquiring the bucket's lock to where we still
> can't fit all sockets into our batch even after a
> bpf_iter_udp_realloc_batch()? If so, I think we touched on this a bit
> in [1]:

Right, and it is also the same as the kvmalloc failure case that this patch is 
handling. Let's see if it can be done better without returning an error in either case.

> 1) Loop until iter->end_sk == batch_sks, possibly calling realloc a
> couple times. The unbounded loop is a bit worrying; I guess
> bpf_iter_udp_batch could "race" if the bucket size keeps growing here.
> 2) Loop some bounded number of times and return some ERR_PTR(x) if the
> loop can't keep up after a few tries so we don't break the invariant
> that the batch is always a full snapshot of a bucket.
> 3) Set some flag in the iterator state, e.g. iter->is_partial,
> indicating to the next call to bpf_iter_udp_realloc_batch() that the
> last batch was actually partial and that if it can't find any of the
> cookies from last time it should start over from the beginning of the
> bucket instead of advancing to the next bucket. This may repeat
> sockets we've already seen in the worst case, but still better than
> skipping them.

Probably something like (3), but I don't think it needs a new "is_partial". The 
existing "st_bucket_done" should suffice.

How about this: in the "st_bucket_done == false" case, also store the
cookie before advancing cur_sk in bpf_iter_udp_seq_next().

not-compiled code:

static void *bpf_iter_udp_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
	struct bpf_udp_iter_state *iter = seq->private;

	if (iter->cur_sk < iter->end_sk) {
		u64 cookie;

		cookie = iter->st_bucket_done ?
			0 : __sock_gen_cookie(iter->batch[iter->cur_sk].sock);
		sock_put(iter->batch[iter->cur_sk].sock);
		iter->batch[iter->cur_sk++].cookie = cookie;
	}

	/* ... */
}

In bpf_iter_udp_resume(), if it cannot find the first sk in the find_cookie to 
end_cookie range, it searches backward from find_cookie to 0. If nothing is found, 
it should start from the beginning of the resume_bucket. Would it work?


