Message-ID: <7bf1ee37.b6a4.196b0b6dce1.Coremail.00107082@163.com>
Date: Fri, 9 May 2025 00:24:56 +0800 (CST)
From: "David Wang" <00107082@....com>
To: "Kent Overstreet" <kent.overstreet@...ux.dev>
Cc: "Suren Baghdasaryan" <surenb@...gle.com>, akpm@...ux-foundation.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] alloc_tag: avoid mem alloc and iter reset when reading
 allocinfo

At 2025-05-08 21:33:50, "Kent Overstreet" <kent.overstreet@...ux.dev> wrote:
>On Thu, May 08, 2025 at 01:51:48PM +0800, David Wang wrote:
>> At 2025-05-08 12:07:40, "Kent Overstreet" <kent.overstreet@...ux.dev> wrote:
>> >Another thing to note is that memory layout - avoiding pointer chasing -
>> >is hugely important, but it'll almost never show up as allocator calls.
>> >
>> >To give you some examples, mempools and biosets used to be separately
>> >allocated. This was mainly to make error paths in outer object
>> >constructors/destructors easier and safer: instead of keeping track of
>> >what's initialized and what's not, if you've got a pointer to a
>> >mempool/bioset you call *_free() on it.
>> >
>> >(People hadn't yet clued in that you can just kzalloc() the entire outer
>> >object, and then if the inner object is zeroed it wasn't initialized.)
>> >
>> >But that means you're adding a pointer chase to every mempool_alloc()
>> >call, and since bioset itself has mempools, allocating bios had _two_
>> >unnecessary pointer derefs. That's death for performance when you're
>> >running cache cold, but since everyone benchmarks cache-hot...
>> >
>> >(I was the one who fixed that).
>> >
>> >Another big one was generic_file_buffered_read(). Main buffered read
>> >path, everyone wants it to be as fast as possible.
>> >
>> >But the core is (was) a loop that walks the pagecache radix tree to get
>> >the page, then copies 4k of data out to userspace (there goes l1), then
>> >repeats all that pointer chasing for the next 4k. Pre large folios, it
>> >was horrific.
>> >
>> >Solution - vectorize it. Look up all the pages we're copying from all at
>> >once, stuff them in a (dynamically allocated! for each read!) vector,
>> >and then do the copying out to userspace all at once. Massive
>> >performance gain.
>> >
>> >Of course, to do that I first had to clean up a tangled 250+ line
>> >monstrosity of half baked, poorly thought out "optimizations" (the worst
>> >spaghetti of gotos you'd ever seen) and turn it into something
>> >manageable...
>> >
>> >So - keep things simple, don't overthink the little stuff, so you can
>> >spot and tackle the big algorithmic wins :)
>> I will keep this in mind~! :)
>> 
>> And thanks for the enlightening notes~!! 
>> 
>> Though I could not quite follow the first one, I think I got
>> the point: avoid unnecessary pointer chasing and keep the pointer
>> chains as short (and balanced) as possible~
>
>To illustrate - DRAM latency is 30-70ns.
>
>At 4GHz, that's 120-280 cycles, and a properly fed CPU can do multiple
>instructions per clock - so a cache miss all the way to DRAM can cost
>you hundreds of instructions.

Oh, I understand that cache misses are bad; it is the "mempools and biosets"
example that I have a hard time connecting the dots on, due to lack of knowledge...
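Let me try to connect them myself with a sketch (a hypothetical "outer"
struct for illustration, not the real block-layer code; as far as I can
tell, mempool_init_kmalloc_pool()/mempool_exit() are the real in-place
APIs, and mempool_exit() is documented safe on a zeroed pool):

    #include <linux/mempool.h>
    #include <linux/slab.h>

    /* Before: the inner object lives behind a pointer, so every
     * mempool_alloc() pays an extra, likely cache-cold, dereference. */
    struct outer_old {
            mempool_t *pool;        /* from mempool_create() */
    };

    /* After: the inner object is embedded - one allocation, no extra
     * deref, better cache locality. */
    struct outer_new {
            mempool_t pool;         /* set up in place below */
    };

    static struct outer_new *outer_alloc(void)
    {
            struct outer_new *o = kzalloc(sizeof(*o), GFP_KERNEL);

            if (!o)
                    return NULL;
            if (mempool_init_kmalloc_pool(&o->pool, 2, 128)) {
                    kfree(o);
                    return NULL;
            }
            return o;
    }

    static void outer_free(struct outer_new *o)
    {
            if (!o)
                    return;
            /* a kzalloc()ed-but-never-initialized pool is still safe
             * to pass here, so the error paths need no "was it set
             * up?" bookkeeping */
            mempool_exit(&o->pool);
            kfree(o);
    }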


>
>> The second one, about copying 4k by 4k, seems quite similar to seq_file,
>> at least the "4k" part, literally. seq_file read() defaults to allocating
>> a 4k buffer, reads data until EOF or until the 4k buffer is full, and
>> starts over again on the next read().
>>
>> One solution could be to change seq_file so it does not stop until the
>> user buffer is full on each read - kind of similar to your second note,
>> but in a sequential style, I think.
>>
>> If the user reads with a 128K buffer, and seq_file fills that buffer 4k
>> by 4k, it would only need ~3 read calls for allocinfo. (I did post a
>> patch for seq_file to fill the user buffer, but start/stop still happens
>> at the 4k boundary, so it does not yet help with the iterator rewinding
>> when reading /proc/allocinfo:
>> https://lore.kernel.org/lkml/20241220140819.9887-1-00107082@163.com/ )
>> The solution in this patch is to keep the iterator alive and valid
>> across read boundaries, which also avoids the cost of starting over
>> each time.
>
>The first question is - does it matter? If the optimization is just for
>/proc/allocinfo, who's reading it at a high enough rate that we care?
>
>If it's only being used interactively, it doesn't matter. If it's being
>read at a high rate by some sort of profiling program, we'd want to skip
>the text interface entirely and add an ioctl to read the data out in a
>binary format.
...^_^ Actually, I have been running a tool that parses /proc/allocinfo
every 5 seconds and feeds the data to a Prometheus server, and have been
for quite a long while... 5 seconds may not seem that frequent, but I also
have all the other proc files to read, so I would like optimization for
all the proc files...

Ioctls and other binary interfaces are indeed more efficient, but most are
not well documented, while most proc files are self-documenting. If proc files
are efficient enough, I think I would stay with them even if a binary
alternative were tens of times faster.
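
(For reference, the collector side is essentially just the sketch below -
simplified, not my actual exporter. Each data line in /proc/allocinfo is
"<bytes> <calls> <tag>", if I read the format right; header lines simply
fail to parse and are skipped:)

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/allocinfo", "r");
            char line[512], tag[256];
            unsigned long long bytes, calls;

            if (!f)
                    return 1;
            while (fgets(line, sizeof(line), f)) {
                    /* only "<bytes> <calls> <tag>" lines match */
                    if (sscanf(line, "%llu %llu %255[^\n]",
                               &bytes, &calls, tag) == 3)
                            printf("%s bytes=%llu calls=%llu\n",
                                   tag, bytes, calls);
            }
            fclose(f);
            return 0;
    }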


>
>The idea of changing seq_file to continue until the user buffer is full
>- that'd be a good one, if you're making changes that benefit all
>seq_file users.
I did make that patch; I think I am still waiting for feedback...
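
(A quick way to see what is at stake - a little userspace sketch of mine
that counts how many read() calls it takes to drain /proc/allocinfo with
a 128K buffer. Today that is roughly file-size/4k calls; with the user
buffer filled per read it should drop to a handful:)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[128 * 1024];   /* 128K user buffer */
            int fd = open("/proc/allocinfo", O_RDONLY);
            long reads = 0, total = 0;
            ssize_t n;

            if (fd < 0)
                    return 1;
            while ((n = read(fd, buf, sizeof(buf))) > 0) {
                    reads++;        /* one syscall per iteration */
                    total += n;
            }
            close(fd);
            printf("%ld bytes in %ld read() calls\n", total, reads);
            return 0;
    }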
