Message-ID: <ZBsAG5cpOFhFZZG6@pc636>
Date:   Wed, 22 Mar 2023 14:18:19 +0100
From:   Uladzislau Rezki <urezki@...il.com>
To:     Dave Chinner <david@...morbit.com>
Cc:     Lorenzo Stoakes <lstoakes@...il.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Baoquan He <bhe@...hat.com>,
        Uladzislau Rezki <urezki@...il.com>,
        Matthew Wilcox <willy@...radead.org>,
        David Hildenbrand <david@...hat.com>,
        Liu Shixin <liushixin2@...wei.com>,
        Jiri Olsa <jolsa@...nel.org>
Subject: Re: [PATCH v2 2/4] mm: vmalloc: use rwsem, mutex for vmap_area_lock
 and vmap_block->lock

Hello, Dave.

> 
> I'm travelling right now, but give me a few days and I'll test this
> against the XFS workloads that hammer the global vmalloc spin lock
> really, really badly. XFS can use vm_map_ram and vmalloc really
> heavily for metadata buffers and hit the global spin lock from every
> CPU in the system at the same time (i.e. highly concurrent
> workloads). vmalloc is also heavily used in the hottest path
> throught the journal where we process and calculate delta changes to
> several million items every second, again spread across every CPU in
> the system at the same time.
> 
> We really need the global spinlock to go away completely, but in the
> mean time a shared read lock should help a little bit....
> 
Could you please share some steps on how to run your workloads so they
exercise the vmalloc() code? I would like to have a look at them in more
detail, just to understand the workloads.

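For reference, the core of this conversion, as I read the patch, is just
turning the global spinlock into a shared/exclusive lock: lookups take it
shared, tree/list modifications take it exclusive. Below is a simplified
sketch of that pattern, not the actual diff; the helper calls and the
unlink_va_locked() name are only illustrative:

<snip>
#include <linux/rwsem.h>

/* was: static DEFINE_SPINLOCK(vmap_area_lock); */
static DECLARE_RWSEM(vmap_area_lock);

struct vmap_area *find_vmap_area(unsigned long addr)
{
	struct vmap_area *va;

	/* lookups only walk the rb-tree, so readers can run concurrently */
	down_read(&vmap_area_lock);
	va = __find_vmap_area(addr, &vmap_area_root);
	up_read(&vmap_area_lock);

	return va;
}

/* illustrative helper, not in mm/vmalloc.c */
static void unlink_va_locked(struct vmap_area *va)
{
	/* modifications of the tree/list still serialize exclusively */
	down_write(&vmap_area_lock);
	unlink_va(va, &vmap_area_root);
	up_write(&vmap_area_lock);
}
<snip>
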
Meanwhile, my grep against xfs shows:

<snip>
urezki@...38:~/data/raid0/coding/linux-rcu.git/fs/xfs$ grep -rn vmalloc ./
./xfs_log_priv.h:675: * Log vector and shadow buffers can be large, so we need to use kvmalloc() here
./xfs_log_priv.h:676: * to ensure success. Unfortunately, kvmalloc() only allows GFP_KERNEL contexts
./xfs_log_priv.h:677: * to fall back to vmalloc, so we can't actually do anything useful with gfp
./xfs_log_priv.h:678: * flags to control the kmalloc() behaviour within kvmalloc(). Hence kmalloc()
./xfs_log_priv.h:681: * vmalloc if it can't get somethign straight away from the free lists or
./xfs_log_priv.h:682: * buddy allocator. Hence we have to open code kvmalloc outselves here.
./xfs_log_priv.h:686: * allocations. This is actually the only way to make vmalloc() do GFP_NOFS
./xfs_log_priv.h:691:xlog_kvmalloc(
./xfs_log_priv.h:702:                   p = vmalloc(buf_size);
./xfs_bio_io.c:21:      unsigned int            is_vmalloc = is_vmalloc_addr(data);
./xfs_bio_io.c:26:      if (is_vmalloc && op == REQ_OP_WRITE)
./xfs_bio_io.c:56:      if (is_vmalloc && op == REQ_OP_READ)
./xfs_log.c:1976:       if (is_vmalloc_addr(iclog->ic_data))
./xfs_log_cil.c:338:                    lv = xlog_kvmalloc(buf_size);
./libxfs/xfs_attr_leaf.c:522:           args->value = kvmalloc(valuelen, GFP_KERNEL | __GFP_NOLOCKDEP);
./kmem.h:12:#include <linux/vmalloc.h>
./kmem.h:78:    if (is_vmalloc_addr(addr))
./kmem.h:79:            return vmalloc_to_page(addr);
./xfs_attr_item.c:84:    * This could be over 64kB in length, so we have to use kvmalloc() for
./xfs_attr_item.c:85:    * this. But kvmalloc() utterly sucks, so we use our own version.
./xfs_attr_item.c:87:   nv = xlog_kvmalloc(sizeof(struct xfs_attri_log_nameval) +
./scrub/attr.c:60:      ab = kvmalloc(sizeof(*ab) + sz, flags);
urezki@...38:~/data/raid0/coding/linux-rcu.git/fs/xfs$
<snip>
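
As a side note, the xfs_log_priv.h comments above describe why kvmalloc()
gets open coded there: a plain kmalloc() attempt first (without retries or
warnings), then a vmalloc() fallback, with the callers relying on the scoped
NOFS API so the fallback honours GFP_NOFS. Roughly the following pattern; a
simplified sketch, not the actual xlog_kvmalloc(), and the _sketch name is
just for illustration:

<snip>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static inline void *xlog_kvmalloc_sketch(size_t buf_size)
{
	gfp_t	gfp_mask = GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY;
	void	*p;

	do {
		/* cheap attempt from the slab/buddy allocator first */
		p = kmalloc(buf_size, gfp_mask);
		if (!p)
			/* fall back to vmalloc; NOFS comes from the caller's
			 * memalloc_nofs_save()/restore() scope */
			p = vmalloc(buf_size);
	} while (!p);

	return p;
}
<snip>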

Thanks!

--
Uladzislau Rezki
