Message-ID: <20251028053330.2391078-1-kuniyu@google.com>
Date: Tue, 28 Oct 2025 05:32:13 +0000
From: Kuniyuki Iwashima <kuniyu@...gle.com>
To: dave.hansen@...el.com
Cc: alex@...ti.fr, aou@...s.berkeley.edu, axboe@...nel.dk, bp@...en8.de,
brauner@...nel.org, catalin.marinas@....com, christophe.leroy@...roup.eu,
dave.hansen@...ux.intel.com, edumazet@...gle.com, hpa@...or.com,
kuni1840@...il.com, kuniyu@...gle.com, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, maddy@...ux.ibm.com, mingo@...hat.com,
mpe@...erman.id.au, npiggin@...il.com, palmer@...belt.com, pjw@...nel.org,
tglx@...utronix.de, torvalds@...ux-foundation.org, will@...nel.org,
x86@...nel.org
Subject: Re: [PATCH v1 2/2] epoll: Use __user_write_access_begin() and
unsafe_put_user() in epoll_put_uevent().
From: Dave Hansen <dave.hansen@...el.com>
Date: Fri, 24 Oct 2025 07:05:50 -0700
> On 10/23/25 22:16, Kuniyuki Iwashima wrote:
> >> This makes me nervous. The access_ok() check is quite a distance away.
> >> I'd kinda want to see some performance numbers before doing this. Is
> >> removing a single access_ok() even measurable?
> > I noticed I made a typo in the commit message: s/tcp_rr/udp_rr/.
> >
> > epoll_put_uevent() can be called multiple times in a single
> > epoll_wait(), and we can see 1.7% more pps on UDP even when
> > 1 thread has 1000 sockets only:
> >
> > server: $ udp_rr --nolog -6 -F 1000 -T 1 -l 3600
> > client: $ udp_rr --nolog -6 -F 1000 -T 256 -l 3600 -c -H $SERVER
> > server: $ nstat > /dev/null; sleep 10; nstat | grep -i udp
> >
> > Without patch (2 stac/clac):
> > Udp6InDatagrams 2205209 0.0
> >
> > With patch (1 stac/clac):
> > Udp6InDatagrams 2242602 0.0
>
> I'm totally with you about removing a stac/clac:
>
> https://lore.kernel.org/lkml/20250228203722.CAEB63AC@davehans-spike.ostc.intel.com/
>
> The thing I'm worried about is having the access_ok() so distant
> from the unsafe_put_user(). I'm wondering if this:
>
> - __user_write_access_begin(uevent, sizeof(*uevent));
> + if (!user_write_access_begin(uevent, sizeof(*uevent)))
> + return NULL;
> unsafe_put_user(revents, &uevent->events, efault);
> unsafe_put_user(data, &uevent->data, efault);
> user_access_end();
>
> is measurably slower than what was in your series. If it is
> not measurably slower, then the series gets simpler because it
> does not need to refactor user_write_access_begin(). It also ends
> up more obviously correct because the access check is closer to
> the unsafe_put_user() calls.
>
> Also, the extra access_ok() is *much* cheaper than stac/clac.
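Applied to epoll_put_uevent(), the suggestion would make the helper look roughly like this (a sketch based on the current definition in include/linux/eventpoll.h, not a tested patch):

```c
/*
 * Sketch only: epoll_put_uevent() with the access check done locally.
 * A single user_write_access_begin()/user_access_end() pair brackets
 * both stores, replacing the two stac/clac pairs that back-to-back
 * __put_user() calls would issue.
 */
static inline struct epoll_event __user *
epoll_put_uevent(__poll_t revents, __u64 data,
		 struct epoll_event __user *uevent)
{
	if (!user_write_access_begin(uevent, sizeof(*uevent)))
		return NULL;

	unsafe_put_user(revents, &uevent->events, efault);
	unsafe_put_user(data, &uevent->data, efault);
	user_access_end();

	return uevent + 1;

efault:
	user_access_end();
	return NULL;
}
```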
Sorry for the late reply!
I rebased on 19ab0a22efbd and tested 4 versions on
AMD EPYC 7B12 machine:
1) Base 19ab0a22efbd
2) masked_user_access_begin()
-> 97% pps and 96% calls of ep_try_send_events()
3) user_write_access_begin() (Dave's diff above) (NEW)
-> 102.2% pps and 103% calls of ep_try_send_events()
4) __user_write_access_begin() (This patch)
-> 102.4% pps and 103% calls of ep_try_send_events().
Interestingly, user_write_access_begin() was as fast as
__user_write_access_begin()!
Also, as in the previous result, masked_user_access_begin()
was somehow the worst.
So, I'll drop patch 1 and post v2 with user_write_access_begin().
Thank you!
1) Base (19ab0a22efbd)
# nstat > /dev/null; sleep 10; nstat | grep -i udp
Udp6InDatagrams 2184011 0.0
@ep_try_send_events_ns:
[256, 512) 2796601 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K) 627863 |@@@@@@@@@@@ |
[1K, 2K) 166403 |@@@ |
[2K, 4K) 10437 | |
[4K, 8K) 1396 | |
[8K, 16K) 116 | |
2) masked_user_access_begin()
97% pps compared to 1).
96% calls of ep_try_send_events().
# nstat > /dev/null; sleep 10; nstat | grep -i udp
Udp6InDatagrams 2120498 0.0
@ep_try_send_events_ns:
[256, 512) 2690803 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K) 533750 |@@@@@@@@@@ |
[1K, 2K) 225969 |@@@@ |
[2K, 4K) 35176 | |
[4K, 8K) 2428 | |
[8K, 16K) 199 | |
3) user_write_access_begin()
102.2% pps compared to 1).
103% calls of ep_try_send_events().
# nstat > /dev/null; sleep 10; nstat | grep -i udp
Udp6InDatagrams 2232730 0.0
@ep_try_send_events_ns:
[256, 512) 2900655 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K) 622045 |@@@@@@@@@@@ |
[1K, 2K) 172831 |@@@ |
[2K, 4K) 17687 | |
[4K, 8K) 1103 | |
[8K, 16K) 174 | |
4) __user_write_access_begin()
102.4% pps compared to 1).
103% calls of ep_try_send_events().
# nstat > /dev/null; sleep 10; nstat | grep -i udp
Udp6InDatagrams 2238524 0.0
@ep_try_send_events_ns:
[256, 512) 2906752 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K) 630199 |@@@@@@@@@@@ |
[1K, 2K) 161741 |@@ |
[2K, 4K) 17141 | |
[4K, 8K) 1041 | |
[8K, 16K) 61 | |
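For context, the @ep_try_send_events_ns histograms above are bpftrace hist() output; they were presumably gathered with a one-liner along these lines (my reconstruction, not shown in the original message, and assuming ep_try_send_events is not inlined and thus kprobe-able):

```
# bpftrace -e '
kprobe:ep_try_send_events { @start[tid] = nsecs; }
kretprobe:ep_try_send_events /@start[tid]/ {
	@ep_try_send_events_ns = hist(nsecs - @start[tid]);
	delete(@start[tid]);
}'
```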