Message-ID: <17a24d69-7bf0-412c-a32a-b25d82bb4159@kernel.org>
Date: Mon, 18 Nov 2024 16:11:46 +0100
From: Jesper Dangaard Brouer <hawk@...nel.org>
To: Yunsheng Lin <linyunsheng@...wei.com>,
 Toke Høiland-Jørgensen <toke@...hat.com>,
 davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com
Cc: zhangkun09@...wei.com, fanghaiqing@...wei.com, liuyonglong@...wei.com,
 Robin Murphy <robin.murphy@....com>,
 Alexander Duyck <alexander.duyck@...il.com>, IOMMU <iommu@...ts.linux.dev>,
 Andrew Morton <akpm@...ux-foundation.org>, Eric Dumazet
 <edumazet@...gle.com>, Ilias Apalodimas <ilias.apalodimas@...aro.org>,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
 kernel-team <kernel-team@...udflare.com>
Subject: Re: [PATCH net-next v3 3/3] page_pool: fix IOMMU crash when driver
 has already unbound



On 18/11/2024 10.08, Yunsheng Lin wrote:
> On 2024/11/12 22:19, Jesper Dangaard Brouer wrote:
>>>
>>> Yes, there seem to be many MM-subsystem internals, like the CONFIG_SPARSEMEM*
>>> config, memory offline/online and other MM-specific optimizations, so it
>>> is hard to tell whether this is feasible.
>>>
>>> It would be good if MM experts can clarify on this.
>>>
>>
>> Yes, please.  Can Alex Duyck or the MM experts point me at some code that
>> walks all of the pages in the system?
>>
>> Then I'll write some kernel code (maybe a module) so I can benchmark how
>> long it takes on my machine with 384 GiB. I do like Alex's suggestion,
>> but I want to assess the overhead of doing this on modern hardware.
>>
> 
> After looking more closely into the MM subsystem, it seems there is an existing
> pattern/API to walk all of the pages from the buddy allocator subsystem;
> see kmemleak_scan() in mm/kmemleak.c:
> https://elixir.bootlin.com/linux/v6.12/source/mm/kmemleak.c#L1680
> 
> I used that to walk the pages on an arm64 system with over 300GB of memory;
> it took about 1.3 sec to do the walking, which seems acceptable?

Yes, that seems acceptable to me.
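
For reference, the struct-page walk in kmemleak_scan() boils down to a
per-zone PFN loop.  A rough sketch of that pattern (paraphrased from
mm/kmemleak.c, not a verbatim copy; the walk_all_pages() helper and its
callback argument are names made up here just for illustration):

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/memory_hotplug.h>
#include <linux/sched.h>

/* Visit every online struct page, kmemleak_scan()-style.  Note that
 * kmemleak additionally skips free pages (page_count() == 0); here the
 * callback gets to decide that for itself. */
static void walk_all_pages(void (*cb)(struct page *page, void *priv),
			   void *priv)
{
	struct zone *zone;

	get_online_mems();	/* keep memory hotplug out of the way */
	for_each_populated_zone(zone) {
		unsigned long pfn;

		for (pfn = zone->zone_start_pfn; pfn < zone_end_pfn(zone); pfn++) {
			struct page *page = pfn_to_online_page(pfn);

			if (!(pfn & 63))
				cond_resched();	/* long loop, be nice */
			if (!page)
				continue;
			/* only handle pages belonging to this zone */
			if (page_zone(page) != zone)
				continue;

			cb(page, priv);
		}
	}
	put_online_mems();
}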

I also ran a test on one of my 384 GB systems.
  - It took approx. 0.391661 seconds.

I just dereferenced page->pp_magic and counted the pages; not many pages
were in use (page_count(page) > 0), as the machine had just been rebooted
into this kernel:
  - pages=100592572 in-use:2079607
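
In case it is useful to others: the per-page part was basically just
counting while touching pp_magic, so the struct page cache line really
gets pulled in.  Roughly along these lines (a reconstruction for
illustration, not the exact code I ran; it plugs into the walk_all_pages()
sketch above and assumes struct page still exposes pp_magic directly in
this tree):

struct walk_stats {
	unsigned long pages;
	unsigned long in_use;
	unsigned long magic_sum;  /* keep the pp_magic deref from being optimized away */
};

static void count_page(struct page *page, void *priv)
{
	struct walk_stats *stats = priv;

	stats->pages++;
	stats->magic_sum += READ_ONCE(page->pp_magic);
	if (page_count(page) > 0)
		stats->in_use++;
}

Timing can be done with a ktime_get() before and after a
walk_all_pages(count_page, &stats) call.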

--Jesper
