Message-ID: <54F7D7EA.6010303@redhat.com>
Date: Wed, 04 Mar 2015 22:13:30 -0600
From: Mike Christie <mchristi@...hat.com>
To: Mel Gorman <mgorman@...e.de>, Ilya Dryomov <idryomov@...il.com>
CC: ceph-devel@...r.kernel.org, Eric Dumazet <edumazet@...gle.com>,
Sage Weil <sage@...hat.com>, NeilBrown <neilb@...e.de>,
netdev@...r.kernel.org
Subject: Re: SOCK_MEMALLOC vs loopback
On 03/04/2015 10:03 PM, Mike Christie wrote:
> On 03/04/2015 02:04 PM, Mel Gorman wrote:
>> On Wed, Mar 04, 2015 at 09:38:48PM +0300, Ilya Dryomov wrote:
>>> Hello,
>>>
>>> A short while ago Mike added a patch to libceph to set SOCK_MEMALLOC on
>>> libceph sockets and PF_MEMALLOC around send/receive paths (commit
>>> 89baaa570ab0, "libceph: use memalloc flags for net IO"). rbd is much
>>> like nbd and is susceptible to all the same memory allocation
>>> deadlocks, so it seemed like a step in the right direction.
>>>
>>
>> The contract for SOCK_MEMALLOC is that it would only be used for temporary
>> allocations that were necessary for the system to make forward progress. In
>> the case of swap-over-NFS, it would only be used for transmitting
>> buffers that were necessary to write data to swap when there were no
> Are upper layers like NFS/iSCSI/NBD/RBD supposed to know or track when
> there are no other options (for example if a GFP_ATOMIC allocation
> fails, then set the flags and retry the operation), or are they supposed
> to be able to set the flags, send IO and let the network layer handle it?
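> For reference, the pattern 89baaa570ab0 wrapped around the send/receive
> paths looks roughly like the sketch below. This is a user-space model of
> the save/set/restore bitmask logic only: the PF_MEMALLOC value and the
> task struct are stand-ins, and tsk_restore_flags() is modeled on the
> kernel helper of the same name, so treat it as illustrative rather than
> the exact libceph code.
>
> ```c
> #include <stdio.h>
>
> /* Stand-in definitions; the real ones live in the kernel headers. */
> #define PF_MEMALLOC 0x00000800
>
> struct task { unsigned long flags; };
>
> /* Modeled on the kernel's tsk_restore_flags(): restore only the bits
>  * covered by @mask to their saved state, leaving other flags alone. */
> static void tsk_restore_flags(struct task *t, unsigned long orig,
>                               unsigned long mask)
> {
>     t->flags &= ~mask;
>     t->flags |= orig & mask;
> }
>
> static void send_io(struct task *current_task)
> {
>     unsigned long pflags = current_task->flags;
>
>     /* Allow this context to dip into the memory reserves while the
>      * socket I/O needed for forward progress is in flight. */
>     current_task->flags |= PF_MEMALLOC;
>
>     /* ... kernel_sendmsg()/kernel_recvmsg() would run here ... */
>
>     /* Put PF_MEMALLOC back to whatever it was before we started. */
>     tsk_restore_flags(current_task, pflags, PF_MEMALLOC);
> }
>
> int main(void)
> {
>     struct task t = { .flags = 0 };
>     send_io(&t);
>     printf("flags after send_io: 0x%lx\n", t.flags);
>     return 0;
> }
> ```
>
> The point of the restore step is that a caller which already had
> PF_MEMALLOC set keeps it, while everyone else drops it on the way out.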
>
Oh yeah, maybe I misunderstood you. Were you just saying we should not
be using it in the configuration where we are hitting the problem?
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html