Date:   Thu, 25 Jan 2018 20:55:37 +0800
From:   Wei Wang <wei.w.wang@...el.com>
To:     Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
        "Michael S. Tsirkin" <mst@...hat.com>
CC:     virtio-dev@...ts.oasis-open.org, linux-kernel@...r.kernel.org,
        virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
        linux-mm@...ck.org, mhocko@...nel.org, akpm@...ux-foundation.org,
        pbonzini@...hat.com, liliang.opensource@...il.com,
        yang.zhang.wz@...il.com, quan.xu0@...il.com, nilal@...hat.com,
        riel@...hat.com
Subject: Re: [PATCH v24 2/2] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT

On 01/25/2018 07:28 PM, Tetsuo Handa wrote:
> On 2018/01/25 12:32, Wei Wang wrote:
>> On 01/25/2018 01:15 AM, Michael S. Tsirkin wrote:
>>> On Wed, Jan 24, 2018 at 06:42:42PM +0800, Wei Wang wrote:
>>> +
>>> +static void report_free_page_func(struct work_struct *work)
>>> +{
>>> +    struct virtio_balloon *vb;
>>> +    unsigned long flags;
>>> +
>>> +    vb = container_of(work, struct virtio_balloon, report_free_page_work);
>>> +
>>> +    /* Start by sending the obtained cmd id to the host with an outbuf */
>>> +    send_cmd_id(vb, &vb->start_cmd_id);
>>> +
>>> +    /*
>>> +     * Set start_cmd_id to VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID to
>>> +     * indicate a new request can be queued.
>>> +     */
>>> +    spin_lock_irqsave(&vb->stop_update_lock, flags);
>>> +    vb->start_cmd_id = cpu_to_virtio32(vb->vdev,
>>> +                VIRTIO_BALLOON_FREE_PAGE_REPORT_STOP_ID);
>>> +    spin_unlock_irqrestore(&vb->stop_update_lock, flags);
>>> +
>>> +    walk_free_mem_block(vb, 0, &virtio_balloon_send_free_pages);
>>> Can you teach walk_free_mem_block to return the && of all
>>> return calls, so caller knows whether it completed?
>> There are two cases that can cause walk_free_mem_block to return without completing:
>> 1) the host requests to stop in advance
>> 2) vq->broken
>>
>> How about letting walk_free_mem_block simply return the value returned by its callback (i.e. virtio_balloon_send_free_pages)?
>>
>> For a host request to stop, it returns "1", and the caller above only bails out when walk_free_mem_block returns a "< 0" value.
> I feel that virtio_balloon_send_free_pages is doing too much heavy work.
>
> It can be called many times with IRQs disabled. The number of times
> it is called depends on the amount of free pages (and the fragmentation
> state). Generally, the more free pages, the more calls.
>
> Then, why don't you allocate some pages for holding all the pfn values,
> call walk_free_mem_block() only to store the pfn values, and then send
> the pfn values without disabling IRQs?

We have actually tried many approaches for this feature before, and what 
you suggest is one of them; you can find the related discussion in 
earlier versions of this series. Beyond the complexity of that method 
(if you think it through along that line), I can share the performance 
(live migration time) comparison of that method with the one in this 
patch: ~405 ms vs. ~260 ms.

The things you are worried about have also been discussed. The strategy 
is to start with something fundamental and improve incrementally (if you 
check earlier versions, we also had a method with finer-grained locking, 
but we decided to leave that as a future improvement out of prudence). 
If possible, please let Michael review this patch, since he already 
knows all of this background. We would like to finish this feature as 
soon as possible, and can then discuss the other one with you if you 
want. Thanks.

Best,
Wei


