Message-ID: <5704BA37.2080508@kyup.com>
Date: Wed, 6 Apr 2016 10:26:47 +0300
From: Nikolay Borisov <kernel@...p.com>
To: David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [patch] mm, hugetlb_cgroup: round limit_in_bytes down to hugepage
size
On 04/06/2016 04:25 AM, David Rientjes wrote:
> The page_counter rounds limits down to page size values. This makes
> sense, except in the case of hugetlb_cgroup where it's not possible to
> charge partial hugepages.
>
> Round the hugetlb_cgroup limit down to hugepage size.
>
> Signed-off-by: David Rientjes <rientjes@...gle.com>
> ---
> mm/hugetlb_cgroup.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> --- a/mm/hugetlb_cgroup.c
> +++ b/mm/hugetlb_cgroup.c
> @@ -288,6 +288,7 @@ static ssize_t hugetlb_cgroup_write(struct kernfs_open_file *of,
>
> switch (MEMFILE_ATTR(of_cft(of)->private)) {
> case RES_LIMIT:
> + nr_pages &= ~((1 << huge_page_order(&hstates[idx])) - 1);
Why not:

	nr_pages = round_down(nr_pages, 1 << huge_page_order(&hstates[idx]));

round_down() wants a power-of-two divisor, and 1 << huge_page_order() is
the number of base pages per hugepage, so this is equivalent to the
open-coded mask but more readable.
> mutex_lock(&hugetlb_limit_mutex);
> ret = page_counter_limit(&h_cg->hugepage[idx], nr_pages);
> mutex_unlock(&hugetlb_limit_mutex);
>