Message-ID: <CAHGf_=rHGotkPYJt65wv+ZDNeO2x+3c5sA8oJmGJX8ehsMHqoA@mail.gmail.com>
Date: Fri, 8 Jun 2012 03:10:35 -0400
From: KOSAKI Motohiro <kosaki.motohiro@...il.com>
To: leonid.moiseichuk@...ia.com
Cc: anton.vorontsov@...aro.org, penberg@...nel.org,
b.zolnierkie@...sung.com, john.stultz@...aro.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linaro-kernel@...ts.linaro.org, patches@...aro.org,
kernel-team@...roid.com
Subject: Re: [PATCH 2/5] vmevent: Convert from deferred timer to deferred work
On Fri, Jun 8, 2012 at 3:05 AM, <leonid.moiseichuk@...ia.com> wrote:
>> -----Original Message-----
>> From: ext Anton Vorontsov [mailto:anton.vorontsov@...aro.org]
>> Sent: 08 June, 2012 09:58
> ...
>> If you're saying that we should set up a timer in the userland and constantly
>> read /proc/vmstat, then we will cause CPU wake up every 100ms, which is
>> not acceptable. Well, we can try to introduce deferrable timers for the
>> userspace. But then it would still add a lot more overhead for our task, as this
>> solution adds other two context switches to read and parse /proc/vmstat. I
>> guess this is not a show-stopper though, so we can discuss this.
>>
>> Leonid, Pekka, what do you think about the idea?
>
> Seems to me not a nice solution. Generating/parsing vmstat every 100ms, plus the wakeups, is exactly what should be avoided for the API to make sense.
No, I don't suggest waking up every 100ms. I suggest integrating with
existing subsystems. If you need any enhancement, just do it.
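
For illustration, a minimal userland sketch of polling /proc/vmstat on a
plain timer (hypothetical; the counter name and interval are just
examples, nothing vmevent-specific):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char line[128];

	for (;;) {
		FILE *f = fopen("/proc/vmstat", "r");

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f)) {
			unsigned long val;

			/* pick out one counter as an example */
			if (sscanf(line, "nr_free_pages %lu", &val) == 1)
				printf("free pages: %lu\n", val);
		}
		fclose(f);
		sleep(1);	/* plain timer; wakes the CPU each period */
	}
	return 0;
}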
> It will also cause page thrashing, because user-space code could be pushed out of the cache if the VM decides to.
This is a completely unrelated issue. Even if the notification code is
not swapped out, the userland notification-handling code may still be
swapped out. So, if you must avoid swapping, you must use mlock.
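
As a rough sketch (not from the patch set), pinning the handler's memory
with mlockall() could look like this:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* Lock everything mapped now and in the future so the
	 * notification handler cannot be swapped out. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall");
		return 1;
	}

	/* ... run the low-memory notification handler here ... */

	return 0;
}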