 
Date:   Mon, 13 Jan 2020 20:45:25 +0800
From:   Alex Shi <alex.shi@...ux.alibaba.com>
To:     Hugh Dickins <hughd@...gle.com>
Cc:     hannes@...xchg.org, Andrew Morton <akpm@...ux-foundation.org>,
        cgroups@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, mgorman@...hsingularity.net, tj@...nel.org,
        khlebnikov@...dex-team.ru, daniel.m.jordan@...cle.com,
        yang.shi@...ux.alibaba.com, willy@...radead.org,
        shakeelb@...gle.com
Subject: Re: [PATCH v7 00/10] per lruvec lru_lock for memcg



On 2020/1/13 at 4:48 PM, Hugh Dickins wrote:
> On Fri, 10 Jan 2020, Alex Shi wrote:
>> On 2020/1/2 at 6:21 PM, Alex Shi wrote:
>>> On 2020/1/1 at 7:05 AM, Andrew Morton wrote:
>>>> On Wed, 25 Dec 2019 17:04:16 +0800 Alex Shi <alex.shi@...ux.alibaba.com> wrote:
>>>>
>>>>> This patchset moves lru_lock into lruvec, giving each lruvec its own
>>>>> lru_lock, and thus one lru_lock per memcg per node.
>>>>
>>>> I see that there has been plenty of feedback on previous versions, but
>>>> no acked/reviewed tags as yet.
>>>>
>>>> I think I'll take a pass for now, see what the audience feedback looks
>>>> like ;)
>>>>
>>>
>>
>> Hi Johannes,
>>
>> Any comments of this version? :)
> 
> I (Hugh) tried to test it on v5.5-rc5, but did not get very far at all -
> perhaps because my particular interest tends towards tmpfs and swap,
> and swap always made trouble for lruvec lock - one of the reasons why
> our patches were more complicated than you thought necessary.
> 
> Booted a smallish kernel in mem=700M with 1.5G of swap, with intention
> of running small kernel builds in tmpfs and in ext4-on-loop-on-tmpfs
> (losetup was the last command started but I doubt it played much part):
> 
> mount -t tmpfs -o size=470M tmpfs /tst
> cp /dev/zero /tst
> losetup /dev/loop0 /tst/zero

Hi Hugh,

Many thanks for the testing!

I am trying to reproduce your testing: I did the above 3 steps and then built a kernel with 'make -j 8' in my qemu guest, but I cannot reproduce the problem, either with this v7 version or with v8, https://github.com/alexshi/linux/tree/lru-next, which fixes the bug KK mentioned, as in the diff below.
My qemu VM setup is as follows:

[root@...ug010000002015 ~]# mount -t tmpfs -o size=470M tmpfs /tst
[root@...ug010000002015 ~]# cp /dev/zero /tst
cp: error writing ‘/tst/zero’: No space left on device
cp: failed to extend ‘/tst/zero’: No space left on device
[root@...ug010000002015 ~]# losetup /dev/loop0 /tst/zero
[root@...ug010000002015 ~]# cat /proc/cmdline
earlyprintk=ttyS0 root=/dev/sda1 console=ttyS0 debug crashkernel=128M printk.devkmsg=on

My kernel is configured with MEMCG/MEMCG_SWAP on an xfs root image, and the kernel build runs under ext4. Could you share your kernel config and detailed reproduction steps with me? And, at your convenience, would you try my new version from the github link above?

Thanks a lot!
Alex 

 static void commit_charge(struct page *page, struct mem_cgroup *memcg,
                          bool lrucare)
 {
-       int isolated;
+       struct lruvec *lruvec = NULL;

        VM_BUG_ON_PAGE(page->mem_cgroup, page);

@@ -2612,8 +2617,16 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
         * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page
         * may already be on some other mem_cgroup's LRU.  Take care of it.
         */
-       if (lrucare)
-               lock_page_lru(page, &isolated);
+       if (lrucare) {
+               lruvec = lock_page_lruvec_irq(page);
+               if (likely(PageLRU(page))) {
+                       ClearPageLRU(page);
+                       del_page_from_lru_list(page, lruvec, page_lru(page));
+               } else {
+                       unlock_page_lruvec_irq(lruvec);
+                       lruvec = NULL;
+               }
+       }

        /*
         * Nobody should be changing or seriously looking at
@@ -2631,8 +2644,15 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
         */
        page->mem_cgroup = memcg;

-       if (lrucare)
-               unlock_page_lru(page, isolated);
+       if (lrucare && lruvec) {
+               unlock_page_lruvec_irq(lruvec);
+               lruvec = lock_page_lruvec_irq(page);
+
+               VM_BUG_ON_PAGE(PageLRU(page), page);
+               SetPageLRU(page);
+               add_page_to_lru_list(page, lruvec, page_lru(page));
+               unlock_page_lruvec_irq(lruvec);
+       }
 }
> 
> and kernel crashed on the
> 
> VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != page->mem_cgroup, page);
> kernel BUG at mm/memcontrol.c:1268!
> lock_page_lruvec_irqsave
> relock_page_lruvec_irqsave
> pagevec_lru_move_fn
> __pagevec_lru_add
> lru_add_drain_cpu
> lru_add_drain
> swap_cluster_readahead
> shmem_swapin
> shmem_swapin_page
> shmem_getpage_gfp
> shmem_getpage
> shmem_write_begin
> generic_perform_write
> __generic_file_write_iter
> generic_file_write_iter
> new_sync_write
> __vfs_write
> vfs_write
> ksys_write
> __x86_sys_write
> do_syscall_64
> 
> Hugh
> 
