Message-ID: <20191209102347.17337-1-sjpark@amazon.com>
Date: Mon, 9 Dec 2019 11:23:47 +0100
From: SeongJae Park <sjpark@...zon.com>
To: <jgross@...e.com>
CC: <linux-block@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<pdurrant@...zon.com>, <sj38.park@...il.com>,
<xen-devel@...ts.xenproject.org>
Subject: Re: Re: [PATCH v3 0/1] xen/blkback: Squeeze page pools if a memory pressure
On Mon, 9 Dec 2019 10:39:02 +0100 Juergen <jgross@...e.com> wrote:
>On 09.12.19 09:58, SeongJae Park wrote:
>> Each `blkif` has a free pages pool for the grant mapping. The size of
>> the pool starts from zero and is increased on demand while processing
>> I/O requests. When the current I/O request handling is finished, or
>> when 100 milliseconds have passed since the last I/O request handling,
>> the pool is checked and shrunk so that it does not exceed the size
>> limit, `max_buffer_pages`.
>>
>> Therefore, guests running `blkfront` can cause memory pressure in the
>> guest running `blkback` by attaching a large number of block devices
>> and inducing I/O on them.
>
>I'm having problems understanding how a guest can attach a large number
>of block devices without those having been configured by the host admin
>before.
>
>If those devices have been configured, dom0 should be ready for that
>number of devices, e.g. by having enough spare memory area for ballooned
>pages.
As mentioned in the original message (quoted below), administrators _can_ avoid
this problem, but finding the optimal configuration is hard, especially when
the number of guests is large.
    System administrators can avoid such problematic situations by limiting
    the maximum number of devices each guest can attach. However, finding
    the optimal limit is not so easy. An improper limit can result in
    memory pressure or resource underutilization.
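For clarity, here is a minimal user-space model of the pool behavior described
above: grow on demand, then shrink back down to the limit once request handling
is idle. The names (`pool_get_page`, `pool_shrink`, `MAX_BUFFER_PAGES`) are
illustrative stand-ins, not the actual kernel identifiers; the real code frees
pages back to the system rather than just decrementing a counter.

```c
#include <assert.h>

/* Stand-in for the max_buffer_pages module parameter. */
#define MAX_BUFFER_PAGES 1024

/* Simplified per-blkif free pages pool. */
struct page_pool {
    unsigned int free_pages;   /* pages currently cached in the pool */
};

/* Grow on demand: reuse a cached page if available; otherwise the real
 * code would allocate a fresh page from the system. */
static void pool_get_page(struct page_pool *p)
{
    if (p->free_pages > 0)
        p->free_pages--;
}

/* Return a page to the pool after the grant mapping is torn down. */
static void pool_put_page(struct page_pool *p)
{
    p->free_pages++;
}

/* Called when request handling finishes, or when 100 ms have passed
 * since the last handling: release pages above the limit. */
static void pool_shrink(struct page_pool *p, unsigned int limit)
{
    while (p->free_pages > limit)
        p->free_pages--;   /* real code: free the page to the system */
}
```

The point of contention in this thread is that the shrink only bounds the
*cached* pages per `blkif`; with many devices doing I/O at once, the aggregate
in-flight usage can still pressure the backend domain.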
Thanks,
SeongJae Park
>
>So either I'm missing something here or your reasoning for the need of
>the patch is wrong.
>
>
>Juergen
>