Message-ID: <20220914081419.GE28810@kitsune.suse.cz>
Date: Wed, 14 Sep 2022 10:14:19 +0200
From: Michal Suchánek <msuchanek@...e.de>
To: Nathan Lynch <nathanl@...ux.ibm.com>
Cc: Laurent Dufour <ldufour@...ux.ibm.com>,
Tyrel Datwyler <tyreld@...ux.ibm.com>,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
Michal Hocko <mhocko@...e.com>, "Lee, Chun-Yi" <jlee@...e.com>
Subject: Re: [PATCH] powerpc/pseries: add lparctl driver for
platform-specific functions
On Tue, Sep 13, 2022 at 12:02:42PM -0500, Nathan Lynch wrote:
> Michal Suchánek <msuchanek@...e.de> writes:
> > On Tue, Sep 13, 2022 at 10:59:56AM -0500, Nathan Lynch wrote:
> >> Michal Suchánek <msuchanek@...e.de> writes:
> >>
> >> > On Fri, Aug 12, 2022 at 02:14:21PM -0500, Nathan Lynch wrote:
> >> >> Laurent Dufour <ldufour@...ux.ibm.com> writes:
> >> >> > On 30/07/2022 at 02:04, Nathan Lynch wrote:
> >> >> >> +static long lparctl_get_sysparm(struct lparctl_get_system_parameter __user *argp)
> >> >> >> +{
> >> >> >> + struct lparctl_get_system_parameter *gsp;
> >> >> >> + long ret;
> >> >> >> + int fwrc;
> >> >> >> +
> >> >> >> + /*
> >> >> >> + * Special case to allow user space to probe the command.
> >> >> >> + */
> >> >> >> + if (argp == NULL)
> >> >> >> + return 0;
> >> >> >> +
> >> >> >> + gsp = memdup_user(argp, sizeof(*gsp));
> >> >> >> + if (IS_ERR(gsp)) {
> >> >> >> + ret = PTR_ERR(gsp);
> >> >> >> + goto err_return;
> >> >> >> + }
> >> >> >> +
> >> >> >> + ret = -EINVAL;
> >> >> >> + if (gsp->rtas_status != 0)
> >> >> >> + goto err_free;
> >> >> >> +
> >> >> >> + do {
> >> >> >> + static_assert(sizeof(gsp->data) <= sizeof(rtas_data_buf));
> >> >> >> +
> >> >> >> + spin_lock(&rtas_data_buf_lock);
> >> >> >> + memset(rtas_data_buf, 0, sizeof(rtas_data_buf));
> >> >> >> + memcpy(rtas_data_buf, gsp->data, sizeof(gsp->data));
> >> >> >> + fwrc = rtas_call(rtas_token("ibm,get-system-parameter"), 3, 1,
> >> >> >> + NULL, gsp->token, __pa(rtas_data_buf),
> >> >> >> + sizeof(gsp->data));
> >> >> >> + if (fwrc == 0)
> >> >> >> + memcpy(gsp->data, rtas_data_buf, sizeof(gsp->data));
> >> >> >
> >> >> > Maybe the amount of data copied out to user space could be
> >> >> > gsp->length. This would prevent copying 4K bytes all the time.
> >> >> >
> >> >> > In a more general way, the size of the RTAS buffer is quite big, and I'm
> >> >> > wondering if all the data needs to be copied back and forth to the kernel.
> >> >> >
> >> >> > Unless there is a high frequency of calls this doesn't make sense, and
> >> >> > keeping the code simple might be the best way. Otherwise limiting the bytes
> >> >> > copied could help a bit.
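
(Just to illustrate the point above - an untested sketch; it assumes the
lparctl_get_system_parameter struct really does carry a 'length' member
as referenced here, and clamps it so a bogus value cannot overrun the
fixed-size data area:)

	if (fwrc == 0) {
		/* copy back only the bytes the parameter occupies */
		size_t n = min_t(size_t, gsp->length, sizeof(gsp->data));

		memcpy(gsp->data, rtas_data_buf, n);
	}
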
> >> >>
> >> >> This is not intended to be a high-bandwidth interface and I don't think
> >> >> there's much of a performance concern here, so I'd rather just keep the
> >> >> copy sizes involved constant.
> >> >
> >> > But that's absolutely horrible!
> >>
> >> ?
> >>
> >> > The user wants the VPD data, all of it. And you only give one page with
> >> > this interface.
> >>
> >> The code here is for system parameters, which have a known maximum size,
> >> unlike VPD. There's no code for VPD retrieval in this patch.
> >
> > But we do need to support the calls that return multiple pages of data.
> >
> > If the new driver supports only the simple calls it's a failure.
>
> Michal, will you please moderate your tone? I think you can communicate
> your concerns without calling my work "absolutely horrible" or a
> "failure". Thanks.

Sorry, that was poor wording.

> Anyway, of course I intend to support the more complex calls, but
> supporting the simple calls actually unbreaks a lot of stuff.

The thing is that supporting calls that return more than one page of
data is absolutely required, and this interface built around fixed-size
data transfers can't do it.

So it sounds like a ticket for redoing the driver right after it's
implemented, or for ending up with two subtly different interfaces - one
for the calls that can return multiple pages of data, and one for the
simple calls.

That does not sound like a good idea at all to me.
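
To make it concrete (a rough sketch only, all names here are invented
and this is not meant as the final ABI): the argument structure could
carry a user buffer pointer and size, so the same ioctl covers both the
calls that fit in one RTAS work area and the sequence-based ones:

	/*
	 * Hypothetical ioctl argument covering both cases: the kernel
	 * drives the RTAS call(s) internally and fills the user buffer
	 * until it is done or runs out of space, reporting the total
	 * size needed so the caller can retry with a larger buffer.
	 */
	struct lparctl_get_data {
		__u64 buf;      /* user buffer address */
		__u32 size;     /* size of the user buffer in bytes */
		__u32 required; /* out: total bytes available */
		__u32 token;    /* parameter / VPD selector */
		__s32 status;   /* out: status of the failing RTAS call, if any */
	};
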
>
> >> But I'm happy to constructively discuss how a VPD ioctl interface should
> >> work.
> >>
> >> > Worse, the call is not reentrant so you need to lock against other users
> >> > calling it while the current caller is retrieving the individual
> >> > pages.
> >> >
> >> > You could do that per process, but then processes with userspace
> >> > threading would want the data as well, so you would have to save the
> >> > arguments of the last call and compare them to the arguments of any
> >> > subsequent call to determine whether you can let it pass or block.
> >> >
> >> > And when you do all that there will be a process that retrieves a couple
> >> > of pages and goes out for lunch or loses interest completely, blocking
> >> > out everyone from accessing the interface at all.
> >>
> >> Right, the ibm,get-vpd RTAS function is tricky to expose to user space.
> >>
> >> It needs to be called repeatedly until all data has been returned, 4KB
> >> at a time.
> >>
> >> Only one ibm,get-vpd sequence can be in progress at any time. If an
> >> ibm,get-vpd sequence is begun while another sequence is already
> >> outstanding, the first one is invalidated -- I would guess -1 or some
> >> other error is returned on its next call.
> >>
> >> So a new system-call level interface for VPD retrieval probably should
> >> not expose the repeating sequence-based nature of the RTAS function to
> >> user space, to prevent concurrent clients from interfering with each
> >> other. That implies that the kernel should buffer the VPD results
> >> internally; at least that's the only idea I've had so far. Open to
> >> other suggestions.
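
(Purely illustrative and untested - the chunk helper below is
hypothetical and error handling is mostly omitted; the point is only
that the whole sequence runs inside the kernel, serialized by a mutex,
so user space never sees the sequence numbers and concurrent callers
cannot invalidate each other:)

	static DEFINE_MUTEX(vpd_sequence_mutex);

	/*
	 * Hypothetical helper wrapping one ibm,get-vpd call: fills buf
	 * with the next chunk, updates *seq, returns the number of bytes
	 * written, 0 when the sequence is complete, or a negative error.
	 */
	static int get_vpd_chunk(const char *loc_code, void *buf,
				 size_t size, unsigned int *seq);

	static long vpd_copy_to_user(const char *loc_code,
				     void __user *ubuf, size_t usize)
	{
		unsigned int seq = 1;
		size_t copied = 0;
		char *chunk;
		int n;

		chunk = kmalloc(SZ_4K, GFP_KERNEL);
		if (!chunk)
			return -ENOMEM;

		mutex_lock(&vpd_sequence_mutex);
		while ((n = get_vpd_chunk(loc_code, chunk, SZ_4K, &seq)) > 0) {
			n = min_t(size_t, n, usize - copied);
			if (copy_to_user(ubuf + copied, chunk, n)) {
				n = -EFAULT;
				break;
			}
			copied += n;
			if (copied == usize)
				break; /* buffer full, caller retries bigger */
		}
		mutex_unlock(&vpd_sequence_mutex);

		kfree(chunk);
		return n < 0 ? n : copied;
	}
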
> >
> > It can save the data to a user-supplied buffer until all data is
> > transferred or the buffer space runs out.
>
> Yes, of course, thanks. Assuming user space can discover the appropriate
> buffer size, which should be possible.

It will not be entirely reliable because the data size may change over
time, but assuming performance is not an issue the caller can just call
again with a bigger buffer if the data happens to grow at the very
moment they try to retrieve it.
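
Roughly, on the user space side (using the hypothetical struct and ioctl
name from the sketch above; untested, error handling omitted):

	struct lparctl_get_data gd = {
		.size  = 4096,
		.token = token,
	};
	void *buf = malloc(gd.size);

	gd.buf = (uintptr_t)buf;
	/* retry with a bigger buffer while the kernel reports that the
	   data did not fit (or grew between the calls) */
	while (ioctl(fd, LPARCTL_GET_DATA, &gd) == 0 && gd.required > gd.size) {
		gd.size = gd.required;
		buf = realloc(buf, gd.size);
		gd.buf = (uintptr_t)buf;
	}
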
Thanks
Michal