Message-ID: <74f2ec89-ca40-44a0-8df7-de404063a1a3@kernel.dk>
Date: Sat, 24 Jan 2026 08:14:35 -0700
From: Jens Axboe <axboe@...nel.dk>
To: Pavel Begunkov <asml.silence@...il.com>,
Yuhao Jiang <danisjiang@...il.com>
Cc: io-uring@...r.kernel.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org
Subject: Re: [PATCH v2] io_uring/rsrc: fix RLIMIT_MEMLOCK bypass by removing
 cross-buffer accounting

On 1/24/26 4:04 AM, Pavel Begunkov wrote:
> On 1/23/26 16:52, Jens Axboe wrote:
>> On 1/23/26 8:04 AM, Jens Axboe wrote:
>>> On 1/23/26 7:50 AM, Jens Axboe wrote:
>>>> On 1/23/26 7:26 AM, Pavel Begunkov wrote:
>>>>> On 1/22/26 21:51, Pavel Begunkov wrote:
>>>>> ...
>>>>>>>>> I already briefly touched on that earlier, for sure not going to be of
>>>>>>>>> any practical concern.
>>>>>>>>
>>>>>>>> A modest 16 GB can give 1M entries. Assuming 50-100ns per entry for
>>>>>>>> the xarray business, that's 50-100ms. It's all serialised, so multiply
>>>>>>>> by the number of CPUs/threads, e.g. 10-100, and that's 0.5-10s. Factor
>>>>>>>> in sky-high spinlock contention and it jumps again, and there can be
>>>>>>>> more memory / CPUs / NUMA nodes. Not saying that it's worse than the
>>>>>>>> current O(n^2); I have a test program that borderline hangs the
>>>>>>>> system.
> ...
>>> Should've tried 32x32 as well, that ends up going deep into "this sucks"
>>> territory:
>>>
>>> git
>>>
>>> good luck
>
> FWIW, the current code's runtime scales linearly with the number of
> CPUs/threads, so just 1 thread should be enough for testing.
>
>>> git + user_struct
>>>
>>> axboe@...25 ~> time ./ppage 32 32
>>> register 32 GB, num threads 32
>>>
>>> ________________________________________________________
>>> Executed in   16.34 secs    fish           external
>
> That's about as close to the calculations above as it could be; it
> was 100x16GB, but that should only differ by a factor of ~1.5.
> Without anchoring to this particular number, the problem is that
> the wall clock runtime of the accounting scales linearly with the
> number of threads, so this 16 sec is what seemed concerning.
>
>>>    usr time    0.54 secs  497.00 micros    0.54 secs
>>>    sys time  451.94 secs   55.00 micros  451.94 secs
>>
> ...
>> and the crazier cases:
>
> I don't think it's even crazy, thinking of databases with lots
> of cache buffers they want to read into / write from. 100GB+
> shouldn't be surprising.
I mean crazier in terms of runtime, not use case. 32G is peanuts in
terms of memory these days.
>> axboe@...25 ~> time ./ppage 32 32
>> register 32 GB, num threads 32
>>
>> ________________________________________________________
>> Executed in    2.81 secs    fish           external
>>    usr time    0.71 secs  497.00 micros    0.71 secs
>>    sys time   19.57 secs  183.00 micros   19.57 secs
>>
>> which isn't insane. Obviously also needs conditional rescheduling in the
>> page loops, as those can take a loooong time for large amounts of
>> memory.
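
(To spell that out: by conditional rescheduling I mean something like
the below - purely illustrative, not the actual io_uring/rsrc.c loop,
with account_page() just standing in for the real per-page work:

	for (i = 0; i < nr_pages; i++) {
		/* real per-page pin/accounting work goes here */
		account_page(pages[i]);
		/* yield if a reschedule is pending, so long page walks
		 * over big registrations don't hog the CPU */
		cond_resched();
	}

cond_resched() only actually reschedules if the scheduler wants the
CPU back, so it's cheap to call every iteration.)
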
>
> 2.8 sec sounds like a lot as well, makes me wonder which part of
> that is mm, but mm should scale fine-ish. Surely there will be
> contention on page refcounts, but at least the table walk is
> lockless in the best case scenario and otherwise seems to be
> read-protected by an rw lock.
Well, a lot of that is also just faulting in (and clearing) the memory;
the test case should probably be modified to do its own timing. And
iterating the page arrays is a huge part of it too. There's no real
contention in that 2.8 seconds.
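
For reference, here's roughly what a test doing its own timing could
look like - a minimal sketch assuming liburing, not the actual ppage
source; the 512MB chunking and the MAP_POPULATE prefault are my
choices, and it needs an RLIMIT_MEMLOCK high enough to pin the memory
(or root):

/*
 * Sketch of a ppage-style reproducer.  Each thread prefaults its
 * share of the memory with MAP_POPULATE and then times just the
 * io_uring_register_buffers() call, so faulting/clearing is kept
 * out of the measured registration time.
 *
 * build: gcc -O2 -o ppage ppage.c -luring -lpthread
 * usage: ./ppage <total GiB> <nr threads>
 */
#include <liburing.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

/* stay below the kernel's 1GB per-registered-buffer limit */
#define CHUNK		(512UL << 20)

static size_t per_thread_bytes;

static void *worker(void *arg)
{
	unsigned i, nr_bufs = per_thread_bytes / CHUNK;
	struct timespec t0, t1;
	struct io_uring ring;
	struct iovec *iovs;
	void *mem;
	int ret;

	(void)arg;
	/* prefault so registration doesn't include page clearing */
	mem = mmap(NULL, per_thread_bytes, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		return NULL;
	}

	iovs = calloc(nr_bufs, sizeof(*iovs));
	for (i = 0; i < nr_bufs; i++) {
		iovs[i].iov_base = (char *)mem + (size_t)i * CHUNK;
		iovs[i].iov_len = CHUNK;
	}

	ret = io_uring_queue_init(8, &ring, 0);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %s\n", strerror(-ret));
		return NULL;
	}

	/* time only the registration (pin + accounting) path */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	ret = io_uring_register_buffers(&ring, iovs, nr_bufs);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	if (ret < 0)
		fprintf(stderr, "register: %s\n", strerror(-ret));
	else
		printf("register took %.3f sec\n",
		       (t1.tv_sec - t0.tv_sec) +
		       (t1.tv_nsec - t0.tv_nsec) / 1e9);

	io_uring_queue_exit(&ring);
	munmap(mem, per_thread_bytes);
	free(iovs);
	return NULL;
}

int main(int argc, char *argv[])
{
	unsigned long gb, nr_threads, i;
	pthread_t *threads;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <GiB> <threads>\n", argv[0]);
		return 1;
	}
	gb = strtoul(argv[1], NULL, 0);
	nr_threads = strtoul(argv[2], NULL, 0);
	per_thread_bytes = (gb << 30) / nr_threads;
	if (!per_thread_bytes || per_thread_bytes % CHUNK) {
		fprintf(stderr, "GiB must split into 512MB chunks per thread\n");
		return 1;
	}

	printf("register %lu GB, num threads %lu\n", gb, nr_threads);

	threads = calloc(nr_threads, sizeof(*threads));
	for (i = 0; i < nr_threads; i++)
		pthread_create(&threads[i], NULL, worker, NULL);
	for (i = 0; i < nr_threads; i++)
		pthread_join(threads[i], NULL);
	free(threads);
	return 0;
}

With the prefault in place, the printed per-thread times should isolate
the pin + accounting path that the numbers above are really about.
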
--
Jens Axboe