Date:   Wed, 28 Jun 2023 11:50:36 -0700 (PDT)
From:   "Lameter, Christopher" <cl@...amperecomputing.com>
To:     Roman Gushchin <roman.gushchin@...ux.dev>
cc:     David Rientjes <rientjes@...gle.com>,
        Julian Pidancet <julian.pidancet@...cle.com>,
        Christoph Lameter <cl@...ux.com>,
        Pekka Enberg <penberg@...nel.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
        Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org, Matthew Wilcox <willy@...radead.org>,
        Kees Cook <keescook@...omium.org>,
        Rafael Aquini <aquini@...hat.com>
Subject: Re: [PATCH] mm/slub: disable slab merging in the default
 configuration

On Wed, 28 Jun 2023, Roman Gushchin wrote:

> But I wonder if we need a new flag (SLAB_MERGE?) to explicitly force merging
> on per-slab cache basis. I believe there are some cases when slab caches can
> be created in noticeable numbers and in those cases the memory footprint might
> be noticeable.

One of the uses for merging is the kmalloc-like slab cache arrays 
created by various subsystems for their allocations. This is particularly 
useful for small, frequently used caches that tend to have similar 
configurations. See the slabinfo output below.
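
To make the kind of cache I mean concrete, here is a minimal sketch 
(illustrative only, not taken from any real subsystem): a small, 
frequently allocated object with its own named cache. Created with 
default flags, SLUB will normally merge this cache into the generic 
size class for 24-byte objects, i.e. the ":0000024" alias in the 
slabinfo output further down, so the subsystem keeps its named cache 
without paying for a separate set of slabs.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Illustrative names only; about 24 bytes on a 64-bit kernel. */
struct example_entry {
	struct list_head node;	/* 16 bytes */
	u32 id;
	u32 flags;
};

static struct kmem_cache *example_cache;

static int __init example_init(void)
{
	/*
	 * No constructor and no merge-inhibiting flags, so this cache
	 * is eligible for merging with other caches of the same size.
	 */
	example_cache = kmem_cache_create("example_cache",
					  sizeof(struct example_entry),
					  0, 0, NULL);
	return example_cache ? 0 : -ENOMEM;
}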

Also, you are doing the tests on a 4k page system. We prefer 64k page 
sized systems here, where the waste due to duplication of structures is 
higher. The move on x86 is also toward larger page sizes (see the 
folio work by Matthew Wilcox), and disabling merging would increase the 
memory footprint further if large folio sizes are used.

Moreover, our systems here are sensitive to cpu cache pressure given their 
high core count, and the increased cache footprint from not merging caches 
would add to that pressure if this patch is accepted.


Here are the aliases on my Ampere Altra system:

root@...08sys-r113:/home/cl/linux/tools/mm# ./slabinfo -a

:0000024     <- audit_buffer lsm_file_cache
:0000032     <- sd_ext_cdb ext4_io_end_vec fsnotify_mark_connector lsm_inode_cache xfs_refc_intent numa_policy
:0000040     <- xfs_extfree_intent ext4_system_zone
:0000048     <- Acpi-Namespace shared_policy_node xfs_log_ticket xfs_ifork ext4_bio_post_read_ctx ksm_mm_slot
:0000056     <- ftrace_event_field Acpi-Parse xfs_defer_pending file_lock_ctx
:0000064     <- fanotify_path_event ksm_stable_node xfs_rmap_intent jbd2_inode ksm_rmap_item io dmaengine-unmap-2 zswap_entry xfs_bmap_intent iommu_iova vmap_area
:0000072     <- fanotify_perm_event fanotify_fid_event Acpi-Operand
:0000080     <- Acpi-ParseExt Acpi-State audit_tree_mark
:0000088     <- xfs_attr_intent trace_event_file configfs_dir_cache kernfs_iattrs_cache blkdev_ioc
:0000128     <- kernfs_node_cache btree_node
:0000176     <- xfs_iul_item xfs_attrd_item xfs_cud_item xfs_bud_item xfs_rud_item
:0000184     <- xfs_icr ip6-frags
:0000192     <- ip6_mrt_cache ip_dst_cache aio_kiocb uid_cache inet_peer_cache bio_integrity_payload ip_mrt_cache dmaengine-unmap-16 skbuff_ext_cache
:0000200     <- xfs_inobt_cur xfs_refcbt_cur ip4-frags
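
In case it is useful for reproducing this, here is a minimal userspace 
sketch (not part of slabinfo itself; assumes a SLUB kernel with sysfs 
mounted) that reports the same aliasing by resolving the symlinks SLUB 
creates under /sys/kernel/slab for merged caches:

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	const char *base = "/sys/kernel/slab";
	DIR *dir = opendir(base);
	struct dirent *de;

	if (!dir) {
		perror(base);
		return 1;
	}

	while ((de = readdir(dir)) != NULL) {
		char path[PATH_MAX], target[PATH_MAX];
		struct stat st;
		ssize_t len;

		snprintf(path, sizeof(path), "%s/%s", base, de->d_name);
		if (lstat(path, &st) || !S_ISLNK(st.st_mode))
			continue;	/* merged caches show up as symlinks */

		len = readlink(path, target, sizeof(target) - 1);
		if (len < 0)
			continue;
		target[len] = '\0';

		/* e.g. "lsm_file_cache -> :0000024" */
		printf("%-30s -> %s\n", de->d_name, target);
	}

	closedir(dir);
	return 0;
}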
