Message-ID: <855ad6ee-dba4-9729-78bd-23e392905cf6@linux.alibaba.com>
Date:   Fri, 11 Sep 2020 10:50:56 +0800
From:   Alex Shi <alex.shi@...ux.alibaba.com>
To:     Hugh Dickins <hughd@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        mgorman@...hsingularity.net, tj@...nel.org,
        khlebnikov@...dex-team.ru, daniel.m.jordan@...cle.com,
        willy@...radead.org, hannes@...xchg.org, lkp@...el.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        cgroups@...r.kernel.org, shakeelb@...gle.com,
        iamjoonsoo.kim@....com, richard.weiyang@...il.com,
        kirill@...temov.name, alexander.duyck@...il.com,
        rong.a.chen@...el.com, mhocko@...e.com, vdavydov.dev@...il.com,
        shy828301@...il.com, vbabka@...e.cz, minchan@...nel.org, cai@....pw
Subject: Re: [PATCH v18 00/32] per memcg lru_lock: reviews



On 2020/9/10 7:16 AM, Hugh Dickins wrote:
> On Wed, 9 Sep 2020, Alex Shi wrote:
>> On 2020/9/9 7:41 AM, Hugh Dickins wrote:
>>>
>>> [PATCH v18 05/32] mm/thp: remove code path which never got into
>>> This is a good simplification, but I see no sign that you understand
>>> why it's valid: it relies on lru_add_page_tail() being called while
>>> head refcount is frozen to 0: we would not get this far if someone
>>> else holds a reference to the THP - which they must hold if they have
>>> isolated the page from its lru (and that's true before or after your
>>> per-memcg changes - but even truer after those changes, since PageLRU
>>> can then be flipped without lru_lock at any instant): please explain
>>> something of this in the commit message.
>>
>> Is the following commit log better?
>>
>>     split_huge_page() is never called on a page which isn't on an lru
>>     list, so this code never got a chance to run, and should not run:
>>     it would add tail pages to an lru list on which the head page is
>>     not present.
>>
>>     Hugh Dickins mentioned:
>>     The path should never be taken, since lru_add_page_tail() is called
>>     while the head refcount is frozen to 0: we would not get this far if
>>     someone else held a reference to the THP - which they must hold if
>>     they have isolated the page from its lru.
>>
>>     Although the bug was never triggered, the path had better be removed
>>     for code correctness, and a warning added for any unexpected caller.
> 
> Not much better, no.  split_huge_page() can easily be called for a page
> which is not on the lru list at the time, 

Hi Hugh,

Thanks for comments!

There was some discussion of this point a couple of weeks ago:
https://lkml.org/lkml/2020/7/9/760

Matthew Wilcox and Kirill had the following exchange there:
> I don't understand how we get to split_huge_page() with a page that's
> not on an LRU list.  Both anonymous and page cache pages should be on
> an LRU list.  What am I missing?

Right, and it never gets removed from the LRU during the split. The tail
pages have to be added to the LRU because they are now separate from the
head page.

-- 
 Kirill A. Shutemov

> and I don't know what was the
> bug which was never triggered.  

So the only way to reach the removed code would be a bug, like the ones here:
https://lkml.org/lkml/2020/7/10/118
or
https://lkml.org/lkml/2020/7/10/972
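
To ground the "refcount frozen to 0" point with the code: the split only
proceeds once the freeze succeeds, so anyone who had isolated the page
from its lru would still hold a reference and make the freeze fail.
Roughly, simplified from split_huge_page_to_list() in mm/huge_memory.c
(a sketch with details elided, not the exact code):

	/*
	 * The split goes ahead only with the head's refcount frozen to 0;
	 * a holder of any extra reference (e.g. someone who isolated the
	 * page from its lru) would make page_ref_freeze() fail.
	 */
	if (page_ref_freeze(head, 1 + extra_pins)) {
		...
		__split_huge_page(page, list, end, flags);
		/* lru_add_page_tail() runs in here, refcount still 0 */
	} else {
		ret = -EBUSY;	/* cannot split: page is pinned elsewhere */
	}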

> Stick with whatever text you end up with
> for the combination of 05/32 and 18/32, and I'll rewrite it after.

I don't object to merging them into one; I just don't know how to describe
both patches clearly in a single commit log. As for patch 18,
TestClearPageLRU adds the possibility of the lru bit being incorrectly
cleared during a split; that's the reason for the code path rewrite and
the WARN there.
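
Concretely, what I have in mind for lru_add_page_tail() after removing
the dead branch is roughly the following (a sketch of the direction, not
the exact diff; the precise WARN conditions are still open):

	static void lru_add_page_tail(struct page *head, struct page *tail,
				      struct lruvec *lruvec, struct list_head *list)
	{
		VM_BUG_ON_PAGE(!PageHead(head), head);
		VM_BUG_ON_PAGE(PageLRU(tail), head);

		if (list) {
			/* page reclaim is reclaiming a huge page */
			VM_WARN_ON(PageLRU(head));
			get_page(tail);
			list_add_tail(&tail->lru, list);
		} else {
			/* head is still on lru (and we have it frozen) */
			VM_WARN_ON(!PageLRU(head));
			SetPageLRU(tail);
			list_add_tail(&tail->lru, &head->lru);
		}
	}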

Thanks
Alex
> 
>>> [PATCH v18 06/32] mm/thp: narrow lru locking
>>> Why? What part does this play in the series? "narrow lru locking" can
>>> also be described as "widen page cache locking": 
>>
>> Uh, the page cache locking isn't widened; it's still in the old place.
> 
> I'm not sure if you're joking there. Perhaps just a misunderstanding.
> 
> Yes, patch 06/32 does not touch the xa_lock(&mapping->i_pages) and
> xa_lock(&swap_cache->i_pages) lines (odd how we've arrived at two of
> those, but please do not get into cleaning it up now); but it removes
> the spin_lock_irqsave(&pgdata->lru_lock, flags) which used to come
> before them, and inserts a spin_lock(&pgdat->lru_lock) after them.
> 
> You call that narrowing the lru locking, okay, but I see it as also
> pushing the page cache locking outwards: before this patch, page cache
> lock was taken inside lru_lock; after this patch, page cache lock is
> taken outside lru_lock.  If you cannot see that, then I think you
> should not have touched this code at all; but it's what we have
> been testing, and I think we should go forward with it.
> 
>>> But I wish you could give some reason for it in the commit message!
>>
>> It's a head-scratching task. Would you tell me what detailed info
>> should go in there? Thanks!
> 
> So, you don't know why you did it either: then it will be hard to
> justify.  I guess I'll have to write something for it later.  I'm
> strongly tempted just to drop the patch, but expect it will become
> useful later, for using lock_page_memcg() before getting lru_lock.
> 

I thought the relationship between xa_lock and lru_lock was described
clearly in the commit log, but I still have no idea about the move_lock
in the chain. Please fill in whatever I overlooked.
Thanks!
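
Just to confirm we mean the same thing, the ordering change in
split_huge_page_to_list() as I understand your description (simplified,
irq and error handling elided):

	/* before patch 06: page cache lock nested inside lru_lock */
	spin_lock_irqsave(&pgdata->lru_lock, flags);
	...
	xa_lock(&mapping->i_pages);

	/* after patch 06: lru_lock nested inside the page cache lock */
	xa_lock(&mapping->i_pages);
	...
	spin_lock(&pgdat->lru_lock);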

>>> Signed-off-by: Wei Yang <richard.weiyang@...il.com>
>>> Is that correct? Or Wei Yang suggested some part of it perhaps?
>>
>> Yes, we talked a lot to confirm the locking change is safe.
> 
> Okay, but the patch was written by you, and sent by you to Andrew:
> that is not a case for "Signed-off-by: Someone Else".
> 

OK, let's remove his Signed-off-by.

>>> [PATCH v18 27/32] mm/swap.c: optimizing __pagevec_lru_add lru_lock
>>> Could we please drop this one for the moment? And come back to it later
>>> when the basic series is safely in.  It's a good idea to try sorting
>>> together those pages which come under the same lock (though my guess is
>>> that they naturally gather themselves together quite well already); but
>>> I'm not happy adding 360 bytes to the kernel stack here (and that in
>>> addition to 192 bytes of horrid pseudo-vma in the shmem swapin case),
>>> though that could be avoided by making it per-cpu. But I hope there's
>>> a simpler way of doing it, as efficient, but also useful for the other
>>> pagevec operations here: perhaps scanning the pagevec for same page->
>>> mem_cgroup (and flags node bits), NULLing entries as they are done.
>>> Another, easily fixed, minor defect in this patch: if I'm reading it
>>> right, it reverses the order in which the pages are put on the lru?
>>
>> This patch could give about a 10+% performance gain on my multiple memcg
>> readtwice testing. fairness locking costs a lot of performance.
> 
> Good to know, should have been mentioned.  s/fairness/Repeated/
> 
> But what was the gain or loss on your multiple memcg readtwice
> testing without this patch, compared against node-only lru_lock?
> The 80% gain mentioned before, I presume.  So this further
> optimization can wait until the rest is solid.

The gain is measured on top of patch 26.

> 
>>
>> I also tried a per-cpu solution, but that caused a lot of trouble with
>> per-cpu function plumbing, and showed no benefit except slightly less
>> struct data on the stack; so if the stack size is still fine, maybe we
>> could keep this solution and improve it further: factor it into a
>> function, fix the reversal issue, etc.
> 
> I don't know how important the stack depth consideration is nowadays:
> I still care, maybe others don't, since VMAP_STACK became an option.
> 
> Yes, please fix the reversal (if I was right on that); and I expect
> you could use a singly linked list instead of the double.

A singly linked list saves more space, but wouldn't we have to walk it in
reverse to find the head or tail for the correct sequence?
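
For reference, my rough reading of your scan-and-NULL suggestion, using
the plain pagevec array so no reversal arises (an untested sketch: the
release_pages() tail is omitted, lock_page_lruvec_irqsave() stands in for
whatever relock helper this series ends up with, and the lruvec
comparison is schematic):

	void __pagevec_lru_add(struct pagevec *pvec)
	{
		int i, j;

		for (i = 0; i < pagevec_count(pvec); i++) {
			struct page *page = pvec->pages[i];
			struct lruvec *lruvec;
			unsigned long flags;

			if (!page)
				continue;	/* added under an earlier lock */

			lruvec = lock_page_lruvec_irqsave(page, &flags);
			for (j = i; j < pagevec_count(pvec); j++) {
				struct page *p = pvec->pages[j];

				/* batch every page sharing this lruvec */
				if (p && mem_cgroup_page_lruvec(p,
						lruvec_pgdat(lruvec)) == lruvec) {
					__pagevec_lru_add_fn(p, lruvec);
					pvec->pages[j] = NULL;
				}
			}
			unlock_page_lruvec_irqrestore(lruvec, flags);
		}
	}

Since the inner scan walks forward, pages keep their original order
within each lruvec, so this shape would also avoid the reversal problem.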

> 
> But I'll look for an alternative - later, once the urgent stuff
> is completed - and leave the acks on this patch to others.

OK, looking forward to your new solution!

Thanks
Alex
