Message-ID: <20150514080812.GC6433@dhcp22.suse.cz>
Date: Thu, 14 May 2015 10:08:12 +0200
From: Michal Hocko <mhocko@...e.cz>
To: Eric B Munson <emunson@...mai.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Shuah Khan <shuahkh@....samsung.com>,
linux-alpha@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mips@...ux-mips.org, linux-parisc@...r.kernel.org,
linuxppc-dev@...ts.ozlabs.org, sparclinux@...r.kernel.org,
linux-xtensa@...ux-xtensa.org, linux-mm@...ck.org,
linux-arch@...r.kernel.org, linux-api@...r.kernel.org
Subject: Re: [PATCH 0/3] Allow user to request memory to be locked on page fault

On Wed 13-05-15 11:00:36, Eric B Munson wrote:
> On Mon, 11 May 2015, Eric B Munson wrote:
>
> > On Fri, 08 May 2015, Andrew Morton wrote:
> >
> > > On Fri, 8 May 2015 15:33:43 -0400 Eric B Munson <emunson@...mai.com> wrote:
> > >
> > > > mlock() allows a user to control the paging out of program memory, but
> > > > this comes at the cost of faulting in the entire mapping when it is
> > > > allocated. For large mappings where the entire area is not needed, this
> > > > is not ideal.
> > > >
> > > > This series introduces new flags for mmap() and mlockall() that allow a
> > > > user to specify that the covered area should not be paged out, but only
> > > > after the memory has been used for the first time.
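
For readers following along, usage of the proposed mmap() flag would look
roughly like the sketch below. MAP_LOCKONFAULT is the flag introduced by this
series and is not in any released uapi header, so the #define here is only a
placeholder; the mlockall() counterpart would be analogous. Error handling is
kept minimal.

    #include <stdio.h>
    #include <sys/mman.h>

    #ifndef MAP_LOCKONFAULT
    #define MAP_LOCKONFAULT 0x080000  /* placeholder value, not from a real header */
    #endif

    int main(void)
    {
            size_t len = 5UL << 30;   /* 5GB, as in the test described below */
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKONFAULT,
                             -1, 0);
            if (buf == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }
            /* Nothing is faulted in or locked yet; with the patched kernel
             * each page would be locked the first time it is touched. */
            munmap(buf, len);
            return 0;
    }
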
> > >
> > > Please tell us much much more about the value of these changes: the use
> > > cases, the behavioural improvements and performance results which the
> > > patchset brings to those use cases, etc.
> > >
> >
> > To illustrate the proposed use case I wrote a quick program that mmaps
> > a 5GB file which is filled with random data and accesses 150,000 pages
> > from that mapping. Setup and processing were timed separately to
> > illustrate the differences between the three tested approaches. The
> > setup portion is simply the call to mmap; the processing is the
> > accessing of the various locations in that mapping. The following
> > values are in milliseconds and are averages of 20 runs, with
> > echo 3 > /proc/sys/vm/drop_caches run between runs.
> >
> > The first mapping was made with MAP_PRIVATE | MAP_LOCKED as a baseline:
> > Startup average: 9476.506
> > Processing average: 3.573
> >
> > The second mapping was simply MAP_PRIVATE but each page was passed to
> > mlock() before being read:
> > Startup average: 0.051
> > Processing average: 721.859
> >
> > The final mapping was MAP_PRIVATE | MAP_LOCKONFAULT:
> > Startup average: 0.084
> > Processing average: 42.125
> >
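
For anyone who wants to reproduce something similar, the measurement
presumably boils down to a loop like the sketch below. This is not Eric's
actual test program; the file path, the random page selection, and the timing
helper are made up for illustration, and error handling is omitted. Swapping
the mmap flags (and uncommenting the mlock() call) switches between the
variants above, with the LOCKONFAULT case of course needing a patched kernel.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <time.h>
    #include <unistd.h>

    static double now_ms(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    int main(void)
    {
            long psize = sysconf(_SC_PAGESIZE);
            int fd = open("/tmp/random-5g.dat", O_RDONLY);  /* made-up path */
            struct stat st;
            volatile char sink;
            double t0, t1, t2, t3;
            char *map;
            long i;

            fstat(fd, &st);

            /* "Setup" is just the mmap call. */
            t0 = now_ms();
            map = mmap(NULL, st.st_size, PROT_READ,
                       MAP_PRIVATE | MAP_LOCKED, fd, 0);
            t1 = now_ms();

            /* "Processing" touches 150,000 pages of the mapping. */
            t2 = now_ms();
            for (i = 0; i < 150000; i++) {
                    char *page = map + (random() % (st.st_size / psize)) * psize;

                    /* mlock(page, psize); */  /* per-page mlock variant */
                    sink = *page;
            }
            t3 = now_ms();

            printf("setup %.3f ms, processing %.3f ms\n", t1 - t0, t3 - t2);
            return 0;
    }
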
>
> Michal's suggestion of changing protections and locking in a signal
> handler performed better than locking each page as needed, but still
> required significantly more work than the LOCKONFAULT case.
>
> Startup average: 0.047
> Processing average: 86.431
Have you played with batching? Has it helped? Anyway, it is to be
expected that the overhead will be higher than with a single mmap call.
The question is whether you can live with it, because adding new
semantics to mlock sounds trickier, and MAP_LOCKED is tricky enough
already...
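
To make the signal handler variant (and where batching would slot in) more
concrete, I had something like the sketch below in mind. It is purely
illustrative: the helper name is made up, mprotect()/mlock() are not on the
async-signal-safe list, the batch is not clamped to the end of the mapping,
and error handling is omitted.

    #include <signal.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BATCH_PAGES 16  /* lock several pages per fault to amortize syscalls */

    static long page_size;

    static void fault_handler(int sig, siginfo_t *si, void *ctx)
    {
            uintptr_t addr = (uintptr_t)si->si_addr & ~(uintptr_t)(page_size - 1);

            (void)sig;
            (void)ctx;
            /* Re-enable access and lock a batch of pages starting at the
             * faulting address; the next touches inside the batch will not
             * fault again. */
            mprotect((void *)addr, BATCH_PAGES * page_size, PROT_READ | PROT_WRITE);
            mlock((void *)addr, BATCH_PAGES * page_size);
    }

    /* Arm lock-on-fault behaviour for an existing read/write mapping. */
    void lock_on_fault_arm(void *map, size_t len)
    {
            struct sigaction sa;

            page_size = sysconf(_SC_PAGESIZE);
            memset(&sa, 0, sizeof(sa));
            sa.sa_flags = SA_SIGINFO;
            sa.sa_sigaction = fault_handler;
            sigaction(SIGSEGV, &sa, NULL);

            /* Drop all access so that the first touch of each page traps
             * into the handler above. */
            mprotect(map, len, PROT_NONE);
    }
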
--
Michal Hocko
SUSE Labs