Message-ID: <rbco2k74plqedtqvn6ebu6wwssy5urw5mjvsk6n576d3urbjnx@tq43anmdvq35>
Date: Thu, 13 Feb 2025 11:18:17 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>, Minchan Kim <minchan@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Kairui Song <ryncsn@...il.com>
Subject: Re: [PATCHv4 14/17] zsmalloc: make zspage lock preemptible
On (25/02/12 15:35), Yosry Ahmed wrote:
> > Difference at 95.0% confidence
> > -1.03219e+08 +/- 55308.7
> > -27.9705% +/- 0.0149878%
> > (Student's t, pooled s = 58864.4)
>
> Thanks for sharing these results, but I wonder if this will capture
> regressions from locking changes (e.g. a lock being preemptible)? IIUC
> this is counting the instructions executed in these paths, and that
> won't change if the task gets preempted. Lock contention may be captured
> as extra instructions, but I am not sure we'll directly see its effect
> in terms of serialization and delays.
Yeah..
> I think we also need some high level testing (e.g. concurrent
> swapins/swapouts) to find that out. I think that's what Kairui's testing
> covers.
I do a fair amount of high-level testing: heavy parallel workloads
(make -j36 and parallel dd) on a multi-zram-device configuration
(zram0 ext4, zram1 writeback device, zram2 swap), w/ and w/o lockdep.
I also run these workloads under heavy memory pressure (a 4GB VM),
when the oom-killer starts to run around with a pair of scissors. But
it's mostly regression testing.
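
For reference, a setup along those lines can be sketched roughly as
below. This is a hedged illustration only, not the actual script used:
device sizes, mount points, the backing partition, and the compression
algorithm are assumptions; only the three-device layout (ext4 /
writeback / swap) comes from the description above. It uses the standard
zram sysfs interface and needs root:

```shell
#!/bin/sh
# Sketch: three zram devices - fs, writeback, swap (sizes are made up).
set -e

modprobe zram num_devices=3

# zram0: ext4 filesystem for the parallel make/dd workload
echo 4G > /sys/block/zram0/disksize
mkfs.ext4 /dev/zram0
mkdir -p /mnt/zram0
mount /dev/zram0 /mnt/zram0

# zram1: device with writeback enabled; backing_dev must be set
# *before* disksize, since writing disksize initializes the device.
echo /dev/sdb1 > /sys/block/zram1/backing_dev   # hypothetical backing partition
echo 4G > /sys/block/zram1/disksize

# zram2: swap device
echo 4G > /sys/block/zram2/disksize
mkswap /dev/zram2
swapon /dev/zram2
```

(No test harness attached: this is a root-only device-configuration
fragment, not portable user-space code.)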