Message-ID: <20160704032310.GB9895@leo-test>
Date: Mon, 4 Jul 2016 11:23:10 +0800
From: Ganesh Mahendran <opensource.ganesh@...il.com>
To: Minchan Kim <minchan@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, ngupta@...are.org,
sergey.senozhatsky.work@...il.com, rostedt@...dmis.org,
mingo@...hat.com
Subject: Re: [PATCH 3/8] mm/zsmalloc: take obj index back from
find_alloced_obj
On Mon, Jul 04, 2016 at 08:57:04AM +0900, Minchan Kim wrote:
> On Fri, Jul 01, 2016 at 02:41:01PM +0800, Ganesh Mahendran wrote:
> > the obj index value should be updated after return from
> > find_alloced_obj()
>
> to avoid CPU burning caused by unnecessary object scanning.
>
> The description should include what the goal is.
Thanks for your reminder.
>
> >
> > Signed-off-by: Ganesh Mahendran <opensource.ganesh@...il.com>
> > ---
> > mm/zsmalloc.c | 13 ++++++++-----
> > 1 file changed, 8 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > index 405baa5..5c96ed1 100644
> > --- a/mm/zsmalloc.c
> > +++ b/mm/zsmalloc.c
> > @@ -1744,15 +1744,16 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
> > * return handle.
> > */
> > static unsigned long find_alloced_obj(struct size_class *class,
> > - struct page *page, int index)
> > + struct page *page, int *index)
> > {
> > unsigned long head;
> > int offset = 0;
> > + int objidx = *index;
>
> Nit:
>
> We have used obj_idx elsewhere, so I prefer it for consistency.
will do it.
>
> Suggestion:
> Would you mind changing index in zs_compact_control and
> migrate_zspage to obj_idx while you are at it?
I will add a clean-up patch to this patchset.
>
> Strictly speaking, such a clean-up belongs in a separate patch, but I
> don't mind mixing them here (of course, sending it as another clean-up
> patch would be better). If you mind, just leave it as is; someday I
> will do it.
>
> > unsigned long handle = 0;
> > void *addr = kmap_atomic(page);
> >
> > offset = get_first_obj_offset(page);
> > - offset += class->size * index;
> > + offset += class->size * objidx;
> >
> > while (offset < PAGE_SIZE) {
> > head = obj_to_head(page, addr + offset);
> > @@ -1764,9 +1765,11 @@ static unsigned long find_alloced_obj(struct size_class *class,
> > }
> >
> > offset += class->size;
> > - index++;
> > + objidx++;
> > }
> >
> > + *index = objidx;
>
> We can do this outside the kmap section, right before returning handle.
That's right. I will send a V2 patch soon.
Thanks.
>
> Thanks!
>
> > +
> > kunmap_atomic(addr);
> > return handle;
> > }
> > @@ -1794,11 +1797,11 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> > unsigned long handle;
> > struct page *s_page = cc->s_page;
> > struct page *d_page = cc->d_page;
> > - unsigned long index = cc->index;
> > + unsigned int index = cc->index;
> > int ret = 0;
> >
> > while (1) {
> > - handle = find_alloced_obj(class, s_page, index);
> > + handle = find_alloced_obj(class, s_page, &index);
> > if (!handle) {
> > s_page = get_next_page(s_page);
> > if (!s_page)
> > --
> > 1.9.1
> >