Message-ID: <20120515174639.GA31752@kroah.com>
Date: Tue, 15 May 2012 10:46:39 -0700
From: Greg KH <greg@...ah.com>
To: Matthew Wilcox <willy@...ux.intel.com>
Cc: linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: NVM Mapping API
On Tue, May 15, 2012 at 09:34:51AM -0400, Matthew Wilcox wrote:
>
> There are a number of interesting non-volatile memory (NVM) technologies
> being developed. Some of them promise DRAM-comparable latencies and
> bandwidths. At Intel, we've been thinking about various ways to present
> those to software. This is a first draft of an API that supports the
> operations we see as necessary. Patches can follow easily enough once
> we've settled on an API.
>
> We think the appropriate way to present directly addressable NVM to
> in-kernel users is through a filesystem. Different technologies may want
> to use different filesystems, or maybe some forms of directly addressable
> NVM will want to use the same filesystem as each other.
>
> For mapping regions of NVM into the kernel address space, we think we need
> map, unmap, protect and sync operations; see kerneldoc for them below.
> We also think we need read and write operations (to copy to/from DRAM).
> The kernel_read() function already exists, and I don't think it would
> be unreasonable to add its kernel_write() counterpart.
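[The kerneldoc referred to above is not quoted in this message. As a rough illustration only, the map/unmap/protect/sync operations might look something like the following -- the names and signatures here are assumptions, not the actual proposal:]

```c
/* Illustrative sketch only -- the real proposal's kerneldoc is not
 * quoted here.  All names and signatures below are assumptions. */

/**
 * nvm_map() - map part of an NVM file into the kernel address space
 * @filp:	NVM file to map
 * @start:	byte offset within the file
 * @length:	number of bytes to map
 * @prot:	initial page protection (e.g. PAGE_KERNEL)
 *
 * Returns the kernel virtual address of the mapping, or an ERR_PTR
 * on failure.
 */
void *nvm_map(struct file *filp, off_t start, size_t length, pgprot_t prot);

/**
 * nvm_unmap() - tear down a mapping created by nvm_map()
 */
void nvm_unmap(struct file *filp, void *addr, size_t length);

/**
 * nvm_protect() - change the page protection on part of a mapping
 */
void nvm_protect(struct file *filp, off_t start, size_t length,
		 pgprot_t prot);

/**
 * nvm_sync() - flush CPU stores to a mapped range out to the NVM media
 */
void nvm_sync(struct file *filp, off_t start, size_t length);
```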
>
> We aren't yet proposing a mechanism for carving up the NVM into regions.
> vfs_truncate() seems like a reasonable API for resizing an NVM region.
> filp_open() also seems reasonable for turning a name into a file pointer.
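[Under that scheme, an in-kernel user creating and mapping a region might look roughly like this sketch. Error handling is abbreviated, the path and sizes are made up, and nvm_map() is the assumed name from the sketch of the proposal, not an existing kernel function:]

```c
/* Illustrative usage sketch, not code from the proposal itself. */
static void *nvm_region_example(void)
{
	struct file *filp;
	void *addr;

	/* Turn a name into a file pointer ... */
	filp = filp_open("/nvm/example-region", O_RDWR | O_CREAT, 0600);
	if (IS_ERR(filp))
		return filp;

	/* ... size the region with the existing truncate path ... */
	vfs_truncate(&filp->f_path, 16 * 1024 * 1024);

	/* ... and map it for direct load/store access. */
	addr = nvm_map(filp, 0, 16 * 1024 * 1024, PAGE_KERNEL);
	return addr;
}
```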
>
> What we'd really like is for people to think about how they might use
> fast NVM inside the kernel. There's likely to be a lot of it (at least in
> servers); all the technologies are promising cheaper per-bit prices than
> DRAM, so it's likely to be sold in larger capacities than DRAM is today.
>
> Caching is one obvious use (be it FS-Cache, Bcache, Flashcache or
> something else), but I bet there are more radical things we can do
> with it. What if we stored the inode cache in it? Would booting with
> a hot inode cache improve boot times? How about storing the tree of
> 'struct devices' in it so we don't have to rescan the busses at startup?

Rescanning the busses at startup is required anyway, as devices can be
added and removed while the power is off, and I would be amazed if it
actually takes any measurable time.  Do you have any numbers for this
for different busses?

What about pramfs for the nvram?  I have a recent copy of the patches,
and I think they are clean enough for acceptance; there were no
complaints the last time it was suggested.  Can you use that for this
type of hardware?

thanks,
greg k-h
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/