Message-ID: <x49si8iy959.fsf@segfault.boston.devel.redhat.com>
Date: Mon, 20 Jul 2015 14:30:10 -0400
From: Jeff Moyer <jmoyer@...hat.com>
To: Benjamin LaHaise <bcrl@...ck.org>
Cc: Oleg Nesterov <oleg@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Joonsoo Kim <js1304@...il.com>,
Fengguang Wu <fengguang.wu@...el.com>,
Johannes Weiner <hannes@...xchg.org>,
Stephen Rothwell <sfr@...b.auug.org.au>,
linux-next@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm-move-mremap-from-file_operations-to-vm_operations_struct-fix

Benjamin LaHaise <bcrl@...ck.org> writes:

> As for accounting locked memory, we don't do that for memory pinned by
> O_DIRECT either. Given how small the amount of memory aio can pin is
> compared to O_DIRECT or mlock(), it is unlikely that the accounting of
> how much aio has pinned will make any real difference in the big picture.
> A single O_DIRECT i/o can pin megabytes of memory.

Actually, you can pin a lot of memory with aio. In the worst case,
aio-max-nr represents the maximum number of pages you can lock in
memory: pass 1 to io_setup() in a loop, and each context's ring still
pins at least one page. So, for the default of 65536 events, that comes
to 65536 * 4KB = 256MB.
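
To make that worst case concrete, here's a minimal sketch (untested,
not from any real workload; it assumes 4KB pages and uses the raw
syscall so it doesn't need libaio) that calls io_setup() with nr_events
of 1 until the system-wide limit trips:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/aio_abi.h>	/* aio_context_t */

int main(void)
{
	long nr_ctx = 0;

	for (;;) {
		aio_context_t ctx = 0;

		/* One event per context: the ring still occupies, and
		 * pins, at least one page. */
		if (syscall(__NR_io_setup, 1, &ctx) < 0) {
			/* EAGAIN once the fs.aio-max-nr budget is spent */
			fprintf(stderr, "io_setup: %s after %ld contexts\n",
				strerror(errno), nr_ctx);
			break;
		}
		nr_ctx++;	/* contexts are leaked on purpose */
	}

	/* at least one 4KB page pinned per context */
	printf("~%ld MB pinned\n", nr_ctx * 4096 / (1024 * 1024));
	return 0;
}
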
My system has libvirt installed, which changes the default to 1 million:
$ grep aio-max-nr /usr/lib/sysctl.d/libvirtd.conf
fs.aio-max-nr = 1048576

So, that means up to 4GB of memory can be tied up by aio. Oracle's
installation guide recommends the same value.
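
For anyone who wants to sanity-check the math on a running box (4KB
pages assumed; both commands are stock):

$ sysctl fs.aio-max-nr
fs.aio-max-nr = 1048576
$ echo $((1048576 * 4096 / 1024 / 1024 / 1024))GB
4GB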

The difference between the aio ring and something like DIO is that the
ring is long-lived: an O_DIRECT I/O pins its pages only for the duration
of the I/O, which ideally is short, while the ring stays pinned for the
life of the io_context. That makes it a poor comparison.

I tend to agree with Oleg: a system-wide setting just isn't a good fit
for this. Changing it now that it's in place would be difficult, though,
and it's obviously low priority.

-Jeff