lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
Message-ID: <4764dcbf-c735-bbe2-b60e-b64c789ffbe6@kernel.dk>
Date:   Tue, 8 Nov 2022 15:20:27 -0700
From:   Jens Axboe <axboe@...nel.dk>
To:     Soheil Hassas Yeganeh <soheil@...gle.com>,
        linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Cc:     Willem de Bruijn <willemb@...gle.com>,
        Shakeel Butt <shakeelb@...gle.com>
Subject: Re: [PATCH 6/6] eventpoll: add support for min-wait

On 11/8/22 3:14 PM, Soheil Hassas Yeganeh wrote:
> On Sun, Oct 30, 2022 at 04:02:03PM -0600, Jens Axboe wrote:
>> Rather than just have a timeout value for waiting on events, add
>> EPOLL_CTL_MIN_WAIT to allow setting a minimum time that epoll_wait()
>> should always wait for events to arrive.
>>
>> For better batching and higher efficiency at medium load, some
>> production workloads inject artificial timers or sleeps before
>> calling epoll_wait(). While this does help, it's not as efficient as
>> it could be. By adding support for this directly in epoll_wait(), we
>> can avoid the extra context switches and scheduler and timer
>> overhead.
>>
>> As an example, running an AB test on an identical workload at about
>> ~370K reqs/second, without this change and with the sleep hack
>> mentioned above (using 200 usec as the timeout), we're doing 310K-340K
>> non-voluntary context switches per second. Idle CPU on the host is 27-34%.
>> With the sleep hack removed and epoll set to the same 200 usec
>> value, we're handling the exact same load but at 292K-315K non-voluntary
>> context switches and idle CPU of 33-41%, a substantial win.
>>
>> Basic test case:
>>
>> #include <stdio.h>
>> #include <unistd.h>
>> #include <pthread.h>
>> #include <sys/epoll.h>
>>
>> struct d {
>>         int p1, p2;
>> };
>>
>> static void *fn(void *data)
>> {
>>         struct d *d = data;
>>         char b = 0x89;
>>
>>         /* Generate 2 events, 10 msec apart */
>>         usleep(10000);
>>         write(d->p1, &b, sizeof(b));
>>         usleep(10000);
>>         write(d->p2, &b, sizeof(b));
>>
>>         return NULL;
>> }
>>
>> int main(int argc, char *argv[])
>> {
>>         struct epoll_event ev, events[2];
>>         pthread_t thread;
>>         int p1[2], p2[2];
>>         struct d d;
>>         int efd, ret;
>>
>>         efd = epoll_create1(0);
>>         if (efd < 0) {
>>                 perror("epoll_create");
>>                 return 1;
>>         }
>>
>>         if (pipe(p1) < 0) {
>>                 perror("pipe");
>>                 return 1;
>>         }
>>         if (pipe(p2) < 0) {
>>                 perror("pipe");
>>                 return 1;
>>         }
>>
>>         ev.events = EPOLLIN;
>>         ev.data.fd = p1[0];
>>         if (epoll_ctl(efd, EPOLL_CTL_ADD, p1[0], &ev) < 0) {
>>                 perror("epoll add");
>>                 return 1;
>>         }
>>         ev.events = EPOLLIN;
>>         ev.data.fd = p2[0];
>>         if (epoll_ctl(efd, EPOLL_CTL_ADD, p2[0], &ev) < 0) {
>>                 perror("epoll add");
>>                 return 1;
>>         }
>>
>> 	/* always wait 200 msec for events */
>>         ev.data.u64 = 200000;
>>         if (epoll_ctl(efd, EPOLL_CTL_MIN_WAIT, -1, &ev) < 0) {
>>                 perror("epoll add set timeout");
>>                 return 1;
>>         }
>>
>>         d.p1 = p1[1];
>>         d.p2 = p2[1];
>>         pthread_create(&thread, NULL, fn, &d);
>>
>> 	/* expect to get 2 events here rather than just 1 */
>>         ret = epoll_wait(efd, events, 2, -1);
>>         printf("epoll_wait=%d\n", ret);
>>
>>         return 0;
>> }
> 
> It might be worth adding a note in the commit message stating that
> EPOLL_CTL_MIN_WAIT is a no-op when timeout is 0. This is a desired
> behavior but it's not easy to see in the flow.

True, will do.

>> +struct epoll_wq {
>> +	wait_queue_entry_t wait;
>> +	struct hrtimer timer;
>> +	ktime_t timeout_ts;
>> +	ktime_t min_wait_ts;
>> +	struct eventpoll *ep;
>> +	bool timed_out;
>> +	int maxevents;
>> +	int wakeups;
>> +};
>> +
>> +static bool ep_should_min_wait(struct epoll_wq *ewq)
>> +{
>> +	if (ewq->min_wait_ts & 1) {
>> +		/* just an approximation */
>> +		if (++ewq->wakeups >= ewq->maxevents)
>> +			goto stop_wait;
> 
> Is there a way to shortcut the wait if the process is being terminated?
> 
> We've seen issues in production systems in the past where too many
> threads were in epoll_wait and the process got terminated.  It'd be
> nice if these threads could exit the syscall as fast as possible.

Good point, it'd be a bit racy though as this is called from the waitq
callback and hence not in the task itself. But probably Good Enough for
most use cases?

This should probably be a separate patch though, as it seems this
affects regular waits too without min_wait set?
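Rough sketch of one possible shape for that shortcut, purely hypothetical and not part of the posted patch: record the waiting task in the wait context and stop honouring min_wait once a fatal signal is pending for it. The `task` member is an assumption here; the posted struct epoll_wq doesn't carry one.

```c
/*
 * Hypothetical sketch only. Assumes struct epoll_wq gains a
 * "struct task_struct *task" member, recorded by the waiter before it
 * sleeps. Since this runs from the waitq callback rather than in the
 * task itself, it checks the recorded task, not "current".
 */
static bool ep_should_min_wait(struct epoll_wq *ewq)
{
	if (ewq->min_wait_ts & 1) {
		/* bail out early if the waiter is being killed */
		if (fatal_signal_pending(ewq->task))
			return false;
		/* just an approximation */
		if (++ewq->wakeups >= ewq->maxevents)
			return false;
		return true;
	}

	return false;
}
```

As noted above this is inherently racy (a signal can arrive right after the check), but it would bound how long a dying task keeps waiting.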

>> @@ -1845,6 +1891,18 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
>>  		ewq.timed_out = true;
>>  	}
>>  
>> +	/*
>> +	 * If min_wait is set for this epoll instance, note the min_wait
>> +	 * time. Ensure the lowest bit is set in ewq.min_wait_ts, that's
>> +	 * the state bit for whether or not min_wait is enabled.
>> +	 */
>> +	if (ep->min_wait_ts) {
> 
> Can we limit this block to "ewq.timed_out && ep->min_wait_ts"?
> AFAICT, the code we run here is completely wasted if timeout is 0.

Yep certainly, I can gate it on both of those conditions.
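For reference, a sketch of what the gated version might look like, using the field names from the posted patch. Note the negation: ewq.timed_out is already true when the caller passed a zero timeout, which is exactly the case where the setup is wasted.

```c
/*
 * Sketch only: skip the min_wait setup entirely for a zero timeout,
 * where it could never take effect anyway.
 */
if (!ewq.timed_out && ep->min_wait_ts) {
	ewq.min_wait_ts = ktime_add_ns(ktime_get(), ep->min_wait_ts);
	/* the low bit doubles as the "min_wait enabled" state bit */
	ewq.min_wait_ts |= (u64) 1;
}
```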

>> diff --git a/include/uapi/linux/eventpoll.h b/include/uapi/linux/eventpoll.h
>> index 8a3432d0f0dc..81ecb1ca36e0 100644
>> --- a/include/uapi/linux/eventpoll.h
>> +++ b/include/uapi/linux/eventpoll.h
>> @@ -26,6 +26,7 @@
>>  #define EPOLL_CTL_ADD 1
>>  #define EPOLL_CTL_DEL 2
>>  #define EPOLL_CTL_MOD 3
>> +#define EPOLL_CTL_MIN_WAIT	4
> 
> Have you considered introducing another epoll_pwait sycall variant?
> 
> That has a major benefit that min wait can be different per poller,
> on the different epollfd.  The usage would also be more readable:
> 
> "epoll for X amount of time but don't return sooner than Y."
> 
> This would be similar to the approach that willemb@...gle.com used
> when introducing epoll_pwait2.

I have, see other replies in this thread, notably the ones with Stefan
today. Happy to do that, and my current branch does split out the ctl
addition from the meat of the min_wait support for this reason. Can't
seem to find a great way to do it, as we'd need to move to a struct
argument for this as epoll_pwait2() is already at max arguments for a
syscall. Suggestions more than welcome.
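For illustration only, one hypothetical shape such a struct-based variant could take; the names and layout below are assumptions for discussion, not a posted proposal:

```c
/*
 * Hypothetical: fold both timeouts into a single struct so a new
 * syscall stays within the six-argument limit that already constrains
 * epoll_pwait2().
 */
struct epoll_wait_params {
	struct timespec timeout;	/* max time to wait for events */
	struct timespec min_wait;	/* min time to wait, for batching */
};

int epoll_pwait3(int epfd, struct epoll_event *events, int maxevents,
		 const struct epoll_wait_params *params,
		 const sigset_t *sigmask, size_t sigsetsize);
```

That shape would make min_wait per call rather than per epoll instance, matching the "epoll for X amount of time but don't return sooner than Y" reading.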

-- 
Jens Axboe
