Message-ID: <52d414b85f41a76fc0a7b0082cba95297d9e5874.camel@linux.intel.com>
Date: Fri, 16 Feb 2024 08:57:07 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Chris Li <chrisl@...nel.org>, linux-kernel@...r.kernel.org, 
 linux-mm@...ck.org, Wei Xu <weixugc@...gle.com>, Yu Zhao
 <yuzhao@...gle.com>,  Greg Thelen <gthelen@...gle.com>, Chun-Tse Shao
 <ctshao@...gle.com>, Yosry Ahmed <yosryahmed@...gle.com>,  Michal Hocko
 <mhocko@...e.com>, Mel Gorman <mgorman@...hsingularity.net>, Huang Ying
 <ying.huang@...el.com>,  Nhat Pham <nphamcs@...il.com>, Kairui Song
 <kasong@...cent.com>, Barry Song <v-songbaohua@...o.com>
Subject: Re: [PATCH v4] mm: swap: async free swap slot cache entries

On Thu, 2024-02-15 at 20:16 -0800, Andrew Morton wrote:
> On Thu, 15 Feb 2024 17:38:38 -0800 Tim Chen <tim.c.chen@...ux.intel.com> wrote:
> 
> > > What this description lacks is any description of why anyone cares. 
> > > 
> > > The patch clearly decreases overall throughput (speed-vs-latency is a
> > > common tradeoff).
> 
> This, please.
> 
> > > And the "we don't know how to fix this properly so punt it into a
> > > kernel thread" approach remains lame.  For example, the risk that the
> > > now-liberated allocator can outpace the async freeing, resulting in
> > > unlimited object windup.
> > 
> > 
> > Andrew,
> > 
> > What you are saying about outpacing the async free is true for the v1 and v2 versions of the patch.
> > 
> > But in this latest version, if another reclaim comes in before the async free has kicked in,
> > we free the whole cache directly, the same as the original code, without waiting
> > for the async free.  That is different from the first version,
> > where you drop into a free-one-at-a-time mode while waiting for the async free.
> > That was also my objection to the first two versions: you could be stuck in this
> > slow free-one-at-a-time mode for a long time.
> > 
> > So now we should not have unlimited object windup.  And we free
> > in batches of 64, either in the direct path or in the async path.
> > 
> 
> OK, thanks, I didn't read closely enough.
> 
> > If the next swap fault comes in very fast, before the async
> > free gets a chance to run, it will directly free all the swap
> > slot cache entries in the swap fault path, the same way as before.
> 
> And might it be a win to cancel the async_work in this case?
> 
Canceling the async_work matters for the case where we push swap hard
and so have a better chance of finding that the async free has not yet run
when we need to free additional swap slots.

Chris' tests so far have been for his use cases, where swap is lightly
loaded.  The scenarios you listed are ones where
we push swap hard, close to its max throughput.

It would help answer your concerns if Chris could also test a high-swap scenario.
Then we can make sure sustainable swap throughput does not regress and
latency is improved, and check whether it is beneficial to cancel outstanding
async_work on the direct free path.  I think the pro of canceling the async_work
is that we skip an extra lock acquisition on the cache, though
there is also some overhead in canceling the work itself.
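
To make that concrete, the cancellation in the direct free path that I have
in mind looks roughly like the below.  This is just a sketch, not the actual
patch: the async_free_work field is made up for illustration, while the lock,
slots_ret/n_ret fields and swapcache_free_entries() follow mm/swap_slots.c.

static void direct_free_all_slots(struct swap_slots_cache *cache)
{
	/*
	 * Try to dequeue the pending async work so it does not grab
	 * cache->free_lock again after we have already drained the
	 * cache here.  cancel_work() does not wait for an already
	 * running worker, and it is not free either, which is the
	 * overhead mentioned above.
	 */
	cancel_work(&cache->async_free_work);

	spin_lock_irq(&cache->free_lock);
	if (cache->n_ret)
		swapcache_free_entries(cache->slots_ret, cache->n_ret);
	cache->n_ret = 0;
	spin_unlock_irq(&cache->free_lock);
}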

Tim

> 
> Again, without a clear description of the userspace-visible effects of
> this problem I am groping in the dark.  My hands blindly landed upon
> the question: the overall effect here is to leave worst-case latency
> unaltered, but to decrease average latency.  Does this satisfy the
> yet-to-be-described requirements?
> 
> 
> Also, the V4 patch's quoted quantitative testing results are pasted
> from the V2 patch's.  V2 was a fundamentally different implementation. 
> I think it is fair to say that V4 is "untested", with regard to
> satisfying its runtime objectives.
> 

