Message-Id: <20080821173442.b9234f26.kamezawa.hiroyu@jp.fujitsu.com>
Date:	Thu, 21 Aug 2008 17:34:42 +0900
From:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	"balbir@...ux.vnet.ibm.com" <balbir@...ux.vnet.ibm.com>,
	"yamamoto@...inux.co.jp" <yamamoto@...inux.co.jp>,
	"nishimura@....nes.nec.co.jp" <nishimura@....nes.nec.co.jp>,
	ryov@...inux.co.jp, "linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [RFC][PATCH -mm 0/7] memcg: lockless page_cgroup v1

On Wed, 20 Aug 2008 20:00:06 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com> wrote:
> > Known problem: force_empty is broken... so rmdir will get stuck in a nightmare.
> > This is because of patch 2/7.
> > It will be fixed in the next version.
> > 
> 
This is a new routine for force_empty. It assumes init_mem_cgroup has no limit.
(The lockless page_cgroup changes are also applied.)

I think this routine is generic enough to be enhanced for hierarchy support in the future.
The move_account() routine can also be used for other purposes
(for example, move_task).


==
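/*
 * Move the accounting of one charged page from "from" to "to".
 * The caller holds from's per-zone lru_lock with irqs disabled.
 * Returns 0 on success, non-zero if the move could not be done.
 */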
int mem_cgroup_move_account(struct page *page, struct page_cgroup *pc,
        struct mem_cgroup *from, struct mem_cgroup *to)
{
        struct mem_cgroup_per_zone *from_mz, *to_mz;
        int nid, zid;
        int ret = 1;

        VM_BUG_ON(to->no_limit == 0);
        VM_BUG_ON(!irqs_disabled());

        nid = page_to_nid(page);
        zid = page_zonenum(page);
        from_mz =  mem_cgroup_zoneinfo(from, nid, zid);
        to_mz =  mem_cgroup_zoneinfo(to, nid, zid);

        if (res_counter_charge(&to->res, PAGE_SIZE)) {
                /* 'to' is assumed to have no limit, so this should not fail. */
                return ret;
        }

        if (spin_trylock(&to_mz->lru_lock)) {
                __mem_cgroup_remove_list(from_mz, pc);
                css_put(&from->css);
                res_counter_uncharge(&from->res, PAGE_SIZE);
                pc->mem_cgroup = to;
                css_get(&to->css);
                __mem_cgroup_add_list(to_mz, pc);
                ret = 0;
                spin_unlock(&to_mz->lru_lock);
        } else {
                res_counter_uncharge(&to->res, PAGE_SIZE);
        }

        return ret;
}
/*
 * This routine moves all account to root cgroup.
 */
static void mem_cgroup_force_empty_list(struct mem_cgroup *mem,
                            struct mem_cgroup_per_zone *mz,
                            enum lru_list lru)
{
        struct page_cgroup *pc;
        unsigned long flags;
        struct list_head *list;
        int drain = 0;

        list = &mz->lists[lru];

        spin_lock_irqsave(&mz->lru_lock, flags);
        while (!list_empty(list)) {
                pc = list_entry(list->prev, struct page_cgroup, lru);
                if (PcgObsolete(pc)) {
                        list_move(&pc->lru, list);
                        /*
                         * This page_cgroup may remain on this list
                         * until we drain it.
                         */
                        if (drain++ > MEMCG_LRU_THRESH/2) {
                                spin_unlock_irqrestore(&mz->lru_lock, flags);
                                mem_cgroup_all_force_drain();
                                yield();
                                drain = 0;
                                spin_lock_irqsave(&mz->lru_lock, flags);
                        }
                        continue;
                }
                if (mem_cgroup_move_account(pc->page, pc,
                                                mem, &init_mem_cgroup)) {
                        /* Conflict with someone else; retry after yielding. */
                        list_move(&pc->lru, list);
                        spin_unlock_irqrestore(&mz->lru_lock, flags);
                        yield();
                        spin_lock_irqsave(&mz->lru_lock, flags);
                }
                /* The cgroup is in use again; stop emptying it. */
                if (atomic_read(&mem->css.cgroup->count) > 0)
                        break;
        }
        spin_unlock_irqrestore(&mz->lru_lock, flags);
}
==
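
For reference (not part of the patch), here is a minimal caller sketch of how
force_empty could walk every node/zone/LRU list and hand each one to
mem_cgroup_force_empty_list() above. The function name is made up, and it
assumes the split-LRU helpers in -mm (for_each_node_state, MAX_NR_ZONES,
for_each_lru); the real force_empty will also need its usual usage/retry checks.

==
static void mem_cgroup_force_empty_all(struct mem_cgroup *mem)
{
        int node, zid;
        enum lru_list lru;

        for_each_node_state(node, N_POSSIBLE) {
                for (zid = 0; zid < MAX_NR_ZONES; zid++) {
                        struct mem_cgroup_per_zone *mz;

                        /* Per-zone info for this node/zone pair. */
                        mz = mem_cgroup_zoneinfo(mem, node, zid);
                        /* Move every LRU list of this zone to root. */
                        for_each_lru(lru)
                                mem_cgroup_force_empty_list(mem, mz, lru);
                }
        }
}
==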
