Message-ID: <CAEjAshpqjByUcJpmkTU4ukm3pPn0hDfRfB=Rh_vh81ajr9z1Tw@mail.gmail.com>
Date: Tue, 10 Dec 2019 07:19:57 +0100
From: SeongJae Park <sj38.park@...il.com>
To: Jürgen Groß <jgross@...e.com>
Cc: SeongJae Park <sjpark@...zon.com>, Jens Axboe <axboe@...nel.dk>,
konrad.wilk@...cle.com, linux-block@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>, pdurrant@...zon.com,
roger.pau@...rix.com, xen-devel@...ts.xenproject.org,
SeongJae Park <sjpark@...zon.de>
Subject: Re: [PATCH v4 1/2] xenbus/backend: Add memory pressure handler callback
On Tue, Dec 10, 2019 at 7:11 AM Jürgen Groß <jgross@...e.com> wrote:
>
> On 09.12.19 20:43, SeongJae Park wrote:
> > From: SeongJae Park <sjpark@...zon.de>
> >
> > Granting pages consumes backend system memory. In systems configured
> > with insufficient spare memory for those pages, it can cause a memory
> > pressure situation. However, finding the optimal amount of spare
> > memory is challenging for large systems with dynamic resource
> > utilization patterns. Also, such a static configuration might lack
> > flexibility.
> >
> > To mitigate such problems, this commit adds a memory reclaim callback
> > to 'xenbus_driver'. Using this facility, 'xenbus' would be able to
> > monitor memory pressure and request the specific domains of the
> > specific backend drivers that are causing the pressure to voluntarily
> > release their memory.
> >
> > That said, this commit simply requests every driver that registered
> > the callback to release its memory for every domain, rather than
> > issuing the requests only to the drivers and domains in charge. Such
> > selective requests are left for future work. Also, this commit
> > focuses on memory only, but it could be extended to general
> > resources.
> >
> > Signed-off-by: SeongJae Park <sjpark@...zon.de>
> > ---
> > drivers/xen/xenbus/xenbus_probe_backend.c | 31 +++++++++++++++++++++++
> > include/xen/xenbus.h | 1 +
> > 2 files changed, 32 insertions(+)
> >
> > diff --git a/drivers/xen/xenbus/xenbus_probe_backend.c b/drivers/xen/xenbus/xenbus_probe_backend.c
> > index b0bed4faf44c..cd5fd1cd8de3 100644
> > --- a/drivers/xen/xenbus/xenbus_probe_backend.c
> > +++ b/drivers/xen/xenbus/xenbus_probe_backend.c
> > @@ -248,6 +248,34 @@ static int backend_probe_and_watch(struct notifier_block *notifier,
> > return NOTIFY_DONE;
> > }
> >
> > +static int xenbus_backend_reclaim(struct device *dev, void *data)
> > +{
> > + struct xenbus_driver *drv;
> > + if (!dev->driver)
> > + return -ENOENT;
> > + drv = to_xenbus_driver(dev->driver);
> > + if (drv && drv->reclaim)
> > + drv->reclaim(to_xenbus_device(dev), DOMID_INVALID);
> > + return 0;
> > +}
> > +
> > +/*
> > + * Returns 0 always because we use the shrinker only to detect memory
> > + * pressure.
> > + */
> > +static unsigned long xenbus_backend_shrink_count(struct shrinker *shrinker,
> > + struct shrink_control *sc)
> > +{
> > + bus_for_each_dev(&xenbus_backend.bus, NULL, NULL,
> > + xenbus_backend_reclaim);
> > + return 0;
> > +}
> > +
> > +static struct shrinker xenbus_backend_shrinker = {
> > + .count_objects = xenbus_backend_shrink_count,
> > + .seeks = DEFAULT_SEEKS,
> > +};
> > +
> > static int __init xenbus_probe_backend_init(void)
> > {
> > static struct notifier_block xenstore_notifier = {
> > @@ -264,6 +292,9 @@ static int __init xenbus_probe_backend_init(void)
> >
> > register_xenstore_notifier(&xenstore_notifier);
> >
> > + if (register_shrinker(&xenbus_backend_shrinker))
> > + pr_warn("shrinker registration failed\n");
> > +
> > return 0;
> > }
> > subsys_initcall(xenbus_probe_backend_init);
> > diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
> > index 869c816d5f8c..52aaf4f78400 100644
> > --- a/include/xen/xenbus.h
> > +++ b/include/xen/xenbus.h
> > @@ -104,6 +104,7 @@ struct xenbus_driver {
> > struct device_driver driver;
> > int (*read_otherend_details)(struct xenbus_device *dev);
> > int (*is_ready)(struct xenbus_device *dev);
> > + unsigned (*reclaim)(struct xenbus_device *dev, domid_t domid);
>
> Can you please add a comment here regarding semantics of specifying
> DOMID_INVALID as domid?
Yes, of course. Will do in the next version.
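
Just as a rough sketch of what I have in mind (exact wording still to
be polished for the next version), based on the current behavior of
'xenbus_backend_reclaim()' passing DOMID_INVALID to ask reclaim for
every domain:

    /*
     * Called when memory pressure is detected.  The callee should
     * release memory it keeps for the domain identified by @domid, or
     * for every domain it serves if DOMID_INVALID is passed as @domid.
     */
    unsigned (*reclaim)(struct xenbus_device *dev, domid_t domid);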
Thanks,
SeongJae Park
>
> Block maintainers, would you be fine with me carrying this series
> through the Xen tree?
>
>
> Juergen