Message-ID: <4307e984-a593-4495-b4cc-8ef509ddda03@amd.com>
Date: Wed, 10 Jul 2024 17:33:47 +0530
From: Bharata B Rao <bharata@....com>
To: Yu Zhao <yuzhao@...gle.com>, mjguzik@...il.com, david@...morbit.com,
kent.overstreet@...ux.dev
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, nikunj@....com,
"Upadhyay, Neeraj" <Neeraj.Upadhyay@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>, willy@...radead.org, vbabka@...e.cz,
kinseyho@...gle.com, Mel Gorman <mgorman@...e.de>,
linux-fsdevel@...r.kernel.org
Subject: Re: Hard and soft lockups with FIO and LTP runs on a large system
On 07-Jul-24 4:12 AM, Yu Zhao wrote:
>> Some experiments tried
>> ======================
>> 1) When MGLRU was enabled, many soft lockups were observed but no hard
>> lockups were seen during a 48-hour run. Below is one such soft lockup.
<snip>
>> The preemptirqsoff trace below points to preemption being disabled for
>> more than 10s, and the lock involved is the lruvec spinlock.
>
> Also if you could try the other patch (mglru.patch) please. It should
> help reduce unnecessary rotations from deactivate_file_folio(), which
> in turn should reduce the contention on the LRU lock for MGLRU.
Thanks. With mglru.patch applied on an MGLRU-enabled system, the latency
trace record below is no longer seen during a 30-hour workload run.
>
>> # tracer: preemptirqsoff
>> #
>> # preemptirqsoff latency trace v1.1.5 on 6.10.0-rc3-mglru-irqstrc
>> # --------------------------------------------------------------------
>> # latency: 10382682 us, #4/4, CPU#128 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:512)
>> # -----------------
>> # | task: fio-2701523 (uid:0 nice:0 policy:0 rt_prio:0)
>> # -----------------
>> # => started at: deactivate_file_folio
>> # => ended at: deactivate_file_folio
>> #
>> #
>> # _------=> CPU#
>> # / _-----=> irqs-off/BH-disabled
>> # | / _----=> need-resched
>> # || / _---=> hardirq/softirq
>> # ||| / _--=> preempt-depth
>> # |||| / _-=> migrate-disable
>> # ||||| / delay
>> # cmd pid |||||| time | caller
>> # \ / |||||| \ | /
>> fio-2701523 128...1. 0us$: deactivate_file_folio <-deactivate_file_folio
>> fio-2701523 128.N.1. 10382681us : deactivate_file_folio <-deactivate_file_folio
>> fio-2701523 128.N.1. 10382683us : tracer_preempt_on <-deactivate_file_folio
>> fio-2701523 128.N.1. 10382691us : <stack trace>
>> => deactivate_file_folio
>> => mapping_try_invalidate
>> => invalidate_mapping_pages
>> => invalidate_bdev
>> => blkdev_common_ioctl
>> => blkdev_ioctl
>> => __x64_sys_ioctl
>> => x64_sys_call
>> => do_syscall_64
>> => entry_SYSCALL_64_after_hwframe
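
For context, invalidate_bdev() in the above stack is reached from
blkdev_common_ioctl(), which on this path is presumably a BLKFLSBUF
ioctl issued against the raw block device. A minimal userspace sketch of
that trigger (hypothetical reproducer; the device path is only an
example and is not taken from the actual fio job files):

/*
 * Hypothetical reproducer sketch: BLKFLSBUF syncs the block device and then
 * invalidates its page cache, i.e. it walks the bdev mapping through
 * invalidate_mapping_pages()/mapping_try_invalidate() and deactivates clean
 * folios, which is where preemption stayed disabled in the trace above.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* BLKFLSBUF */

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/nvme0n1";	/* example device */
	int fd = open(dev, O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, BLKFLSBUF, 0) < 0)	/* needs CAP_SYS_ADMIN */
		perror("ioctl(BLKFLSBUF)");
	close(fd);
	return 0;
}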
However, the contention has now shifted to inode_hash_lock. Around 55
soft lockups in ilookup() were observed:
# tracer: preemptirqsoff
#
# preemptirqsoff latency trace v1.1.5 on 6.10.0-rc3-trnmglru
# --------------------------------------------------------------------
# latency: 10620430 us, #4/4, CPU#260 | (M:desktop VP:0, KP:0, SP:0 HP:0 #P:512)
# -----------------
# | task: fio-3244715 (uid:0 nice:0 policy:0 rt_prio:0)
# -----------------
# => started at: ilookup
# => ended at: ilookup
#
#
# _------=> CPU#
# / _-----=> irqs-off/BH-disabled
# | / _----=> need-resched
# || / _---=> hardirq/softirq
# ||| / _--=> preempt-depth
# |||| / _-=> migrate-disable
# ||||| / delay
# cmd pid |||||| time | caller
# \ / |||||| \ | /
fio-3244715 260...1. 0us$: _raw_spin_lock <-ilookup
fio-3244715 260.N.1. 10620429us : _raw_spin_unlock <-ilookup
fio-3244715 260.N.1. 10620430us : tracer_preempt_on <-ilookup
fio-3244715 260.N.1. 10620440us : <stack trace>
=> _raw_spin_unlock
=> ilookup
=> blkdev_get_no_open
=> blkdev_open
=> do_dentry_open
=> vfs_open
=> path_openat
=> do_filp_open
=> do_sys_openat2
=> __x64_sys_openat
=> x64_sys_call
=> do_syscall_64
=> entry_SYSCALL_64_after_hwframe
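
On this path ilookup() is called from blkdev_get_no_open() on every open
of the block device, so with many fio jobs (re)opening the same device
across 512 CPUs, each open ends up taking the global inode_hash_lock. A
hedged userspace sketch of that open pattern (thread count and device
path are illustrative, not taken from the fio job files used here):

/*
 * Hypothetical sketch: many threads repeatedly open()/close() the same
 * block device. Each open goes blkdev_open() -> blkdev_get_no_open() ->
 * ilookup(), so every iteration hashes into the inode hash table under
 * the single global inode_hash_lock spinlock. Compile with -pthread.
 */
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

#define NR_THREADS	256		/* illustrative; the report ran on a 512-CPU system */
#define NR_ITERS	100000

static const char *dev = "/dev/nvme0n1";	/* example device */

static void *opener(void *arg)
{
	for (int i = 0; i < NR_ITERS; i++) {
		int fd = open(dev, O_RDONLY);

		if (fd >= 0)
			close(fd);
	}
	return NULL;
}

int main(void)
{
	pthread_t tids[NR_THREADS];

	for (int i = 0; i < NR_THREADS; i++)
		pthread_create(&tids[i], NULL, opener, NULL);
	for (int i = 0; i < NR_THREADS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}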
It appears that scalability issues with inode_hash_lock have been brought
up multiple times in the past, and there have been patches to address them, e.g.:
https://lore.kernel.org/all/20231206060629.2827226-9-david@fromorbit.com/
https://lore.kernel.org/lkml/20240611173824.535995-2-mjguzik@gmail.com/
CC'ing FS folks/list for awareness/comments.
Regards,
Bharata.