Message-ID: <fc8664bb-7769-48a2-b470-71fb81828e26@kernel.dk>
Date: Fri, 23 Jan 2026 07:50:53 -0700
From: Jens Axboe <axboe@...nel.dk>
To: Pavel Begunkov <asml.silence@...il.com>,
Yuhao Jiang <danisjiang@...il.com>
Cc: io-uring@...r.kernel.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH v2] io_uring/rsrc: fix RLIMIT_MEMLOCK bypass by removing
cross-buffer accounting
On 1/23/26 7:26 AM, Pavel Begunkov wrote:
> On 1/22/26 21:51, Pavel Begunkov wrote:
> ...
>>>>> I already briefly touched on that earlier, for sure not going to be of
>>>>> any practical concern.
>>>>
>>>> A modest 16 GB can give 1M entries (16 GB / 16 KB pages). Assuming
>>>> 50-100ns per entry for the xarray business, that's 50-100ms. It's all
>>>> serialised, so multiply by the number of CPUs/threads, e.g. 10-100,
>>>> and that's 0.5-10s. Account for sky-high spinlock contention and it
>>>> jumps again, and there can be more memory / CPUs / NUMA nodes. I'm
>>>> not saying it's worse than the current O(n^2); I have a test program
>>>> that borderline hangs the system.
>>>
>>> It's definitely not worse than the existing system, which is why I don't
>>> think it's a big deal. Nobody has ever complained about the time it
>>> takes to register buffers. It's inherently a slow path, and quite slow
>>> at that depending on the use case. Out of curiosity, I ran some silly
>>> testing on registering 16GB of memory, with 1..32 threads. Each thread
>>> registers 16GB, so 512GB registered in total for the 32-thread case.
>>> Before is the current kernel, after is with per-user xarray accounting:
>>>
>>> before
>>>
>>> nthreads 1: 646 msec
>>> nthreads 2: 888 msec
>>> nthreads 4: 864 msec
>>> nthreads 8: 1450 msec
>>> nthreads 16: 2890 msec
>>> nthreads 32: 4410 msec
>>>
>>> after
>>>
>>> nthreads 1: 650 msec
>>> nthreads 2: 888 msec
>>> nthreads 4: 892 msec
>>> nthreads 8: 1270 msec
>>> nthreads 16: 2430 msec
>>> nthreads 32: 4160 msec
>>>
>>> This includes the time to register the buffers, clone all of them to
>>> another ring, and unregister them, and nowhere is locking scalability
>>> an issue for the xarray manipulation. The box has 32 NUMA nodes and
>>> 512 CPUs. So no, I strongly believe this isn't an issue.
>>>
>>> IOW, accurate accounting is cheaper than the stuff we have now, though
>>> neither is super cheap. Does it matter? I really don't think so, or
>>> people would've complained already. The only complaint I've had about
>>> these kinds of things was for cloning, which did get fixed up a few
>>> releases ago.
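>>>
>>> The core of that test is roughly the sketch below. It's not the actual
>>> program, just the shape of it, assuming liburing's
>>> io_uring_register_buffers() / io_uring_clone_buffers() helpers, with
>>> error handling trimmed:
>>>
>>> 	#include <liburing.h>
>>> 	#include <stdlib.h>
>>> 	#include <string.h>
>>>
>>> 	/* Register 'nr' buffers of 'len' bytes on 'src', clone the
>>> 	 * whole table to 'dst', then tear both down - the three
>>> 	 * phases the timings above cover. */
>>> 	static void reg_clone_unreg(struct io_uring *src,
>>> 				    struct io_uring *dst,
>>> 				    unsigned nr, size_t len)
>>> 	{
>>> 		struct iovec *iovs = calloc(nr, sizeof(*iovs));
>>> 		unsigned i;
>>>
>>> 		for (i = 0; i < nr; i++) {
>>> 			iovs[i].iov_base = malloc(len);
>>> 			iovs[i].iov_len = len;
>>> 			/* fault pages in so registration pins real memory */
>>> 			memset(iovs[i].iov_base, 0, len);
>>> 		}
>>> 		io_uring_register_buffers(src, iovs, nr);
>>> 		io_uring_clone_buffers(dst, src);
>>> 		io_uring_unregister_buffers(dst);
>>> 		io_uring_unregister_buffers(src);
>>> 		for (i = 0; i < nr; i++)
>>> 			free(iovs[i].iov_base);
>>> 		free(iovs);
>>> 	}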
>>
>> You need compound pages:
>>
>> echo always > /sys/kernel/mm/transparent_hugepage/hugepages-16kB/enabled
>>
>> And use update() instead of register(), as accounting dedup for
>> registration is broken/disabled. For the current kernel:
>>
>> Single threaded:
>> 1x1G: 7.5s
>> 2x1G: 45s
>> 4x1G: 190s
>>
>> 16x should be ~3000s; I'm not going to run it. It's uninterruptible
>> with no cond_resched(), so spawn NR_CPUS threads and the system is
>> completely unresponsive (I guess it depends on the preemption mode).
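>>
>> The update() path amounts to roughly the below - a sketch assuming
>> liburing's sparse-register and update helpers, not the exact program,
>> with error handling trimmed:
>>
>> 	#include <liburing.h>
>> 	#include <stdlib.h>
>> 	#include <string.h>
>>
>> 	/* Register n x 1G buffers one at a time via the update path,
>> 	 * so each buffer hits the accounting individually. */
>> 	static void reg_via_update(struct io_uring *ring, unsigned n)
>> 	{
>> 		size_t len = 1UL << 30;
>> 		unsigned i;
>>
>> 		io_uring_register_buffers_sparse(ring, n);
>> 		for (i = 0; i < n; i++) {
>> 			struct iovec iov = {
>> 				.iov_base = malloc(len),
>> 				.iov_len = len,
>> 			};
>> 			__u64 tag = 0;	/* 0 == untagged */
>>
>> 			/* fault in, THP-backed with the setting above */
>> 			memset(iov.iov_base, 0, len);
>> 			io_uring_register_buffers_update_tag(ring, i, &iov,
>> 							     &tag, 1);
>> 		}
>> 	}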
> The program is below for reference, but it's trivial. The THP setting
> is done inside it for convenience. There are ways to make the runtime
> even worse, but that should be enough.
Thanks for sending that. I ran it on the same box, on current -git and
with user_struct xarray accounting, and modified it so that the 2nd
argument is the number of threads, for easy running (a rough sketch of
that harness is appended after the numbers):
current -git
axboe@...25 ~> cat /sys/kernel/mm/transparent_hugepage/hugepages-16kB/enabled
[always] inherit madvise never
axboe@...25 ~> for i in 1 2 4 8 16; time ./ppage $i $i; end
register 1 GB, num threads 1

________________________________________________________
Executed in  178.91 millis    fish           external
   usr time    9.82 millis  313.00 micros    9.51 millis
   sys time  161.83 millis  149.00 micros  161.68 millis

register 2 GB, num threads 2

________________________________________________________
Executed in  638.49 millis    fish           external
   usr time    0.03 secs    285.00 micros    0.03 secs
   sys time    1.14 secs    135.00 micros    1.14 secs

register 4 GB, num threads 4

________________________________________________________
Executed in    2.17 secs      fish           external
   usr time    0.05 secs    314.00 micros    0.05 secs
   sys time    6.31 secs    150.00 micros    6.31 secs

register 8 GB, num threads 8

________________________________________________________
Executed in    4.97 secs      fish           external
   usr time    0.12 secs    299.00 micros    0.12 secs
   sys time   28.97 secs    142.00 micros   28.97 secs

register 16 GB, num threads 16

________________________________________________________
Executed in   10.34 secs      fish           external
   usr time    0.20 secs    294.00 micros    0.20 secs
   sys time  126.42 secs    140.00 micros  126.42 secs
-git + user_struct xarray for accounting
axboe@...25 ~> cat /sys/kernel/mm/transparent_hugepage/hugepages-16kB/enabled
[always] inherit madvise never
axboe@...25 ~> for i in 1 2 4 8 16; time ./ppage $i $i; end
register 1 GB, num threads 1

________________________________________________________
Executed in   54.05 millis    fish           external
   usr time   10.66 millis  327.00 micros   10.34 millis
   sys time   41.60 millis  259.00 micros   41.34 millis

register 2 GB, num threads 2

________________________________________________________
Executed in  105.70 millis    fish           external
   usr time   34.38 millis  206.00 micros   34.17 millis
   sys time   68.55 millis  206.00 micros   68.35 millis

register 4 GB, num threads 4

________________________________________________________
Executed in  214.72 millis    fish           external
   usr time   48.10 millis  193.00 micros   47.91 millis
   sys time  182.25 millis  193.00 micros  182.06 millis

register 8 GB, num threads 8

________________________________________________________
Executed in  441.96 millis    fish           external
   usr time  123.26 millis  195.00 micros  123.07 millis
   sys time  568.20 millis  195.00 micros  568.00 millis

register 16 GB, num threads 16

________________________________________________________
Executed in  917.70 millis    fish           external
   usr time    0.17 secs    202.00 micros    0.17 secs
   sys time    2.48 secs    202.00 micros    2.48 secs
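
The harness side of it amounts to roughly the sketch below. This is a
hand-written reconstruction rather than the actual modified program;
the real test registers through the update() path as Pavel described,
and error handling is trimmed:

	#include <liburing.h>
	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* Each thread sets up its own ring and registers 1G, matching
	 * the ./ppage $i $i invocation where GB == nthreads. */
	static void *worker(void *arg)
	{
		size_t len = 1UL << 30;
		struct iovec iov = { .iov_base = malloc(len), .iov_len = len };
		struct io_uring ring;

		(void)arg;
		memset(iov.iov_base, 0, len);	/* fault pages in */
		io_uring_queue_init(8, &ring, 0);
		io_uring_register_buffers(&ring, &iov, 1);
		/* ring teardown also drops the registered buffers */
		io_uring_queue_exit(&ring);
		free(iov.iov_base);
		return NULL;
	}

	int main(int argc, char *argv[])
	{
		int gb = argc > 1 ? atoi(argv[1]) : 1;
		int nthreads = argc > 2 ? atoi(argv[2]) : 1;
		pthread_t threads[nthreads];
		int i;

		printf("register %d GB, num threads %d\n", gb, nthreads);
		for (i = 0; i < nthreads; i++)
			pthread_create(&threads[i], NULL, worker, NULL);
		for (i = 0; i < nthreads; i++)
			pthread_join(threads[i], NULL);
		return 0;
	}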
--
Jens Axboe