hi guys,

Just wondering if anyone has done any performance testing between KVM
and Xen on CentOS-5 (using CentOS as host and VM in every case)?

Regards,

--
Karanbir Singh
London, UK | http://www.karan.org/ | twitter.com/kbsingh
ICQ: 2522219 | Yahoo IM: z00dax | Gtalk: z00dax
GnuPG Key : http://www.karan.org/publickey.asc


  • Tom Bishop at Oct 14, 2010 at 8:48 am
    I don't have any benchmarks per se, just my recent testing of them.

    I think Xen is still on top in terms of performance and features. That
    said, given my experience with KVM in the past and my latest testing
    with 5.5, I can say that KVM has made great strides with the virtio
    drivers for the disk and NIC; the latest VMs I'm running with those
    drivers are very snappy, and so far I am very pleased. I'm a big Red
    Hat fan and think that KVM will only continue to get better; I believe
    at some point it will equal or pass Xen. I'm waiting for RHEL 6 to see
    what they are going to be rolling out, as I have not had any time to
    play with the beta or look at Fedora.
    _______________________________________________
    CentOS-virt mailing list
    CentOS-virt at centos.org
    http://lists.centos.org/mailman/listinfo/centos-virt
  • Karanbir Singh at Oct 14, 2010 at 2:59 pm

    On 10/14/2010 07:48 AM, Tom Bishop wrote:
    > I think xen is still on top in terms of performance and features....now

    That is indeed what it 'feels' like, but I'm quite keen on putting some
    numbers on that.

    > having said that my experience in the past with kvm and my latest
    > testing with 5.5 and KVM I can say that KVM has made great strides with
    > the virtio drivers for the disk and nic, my latest vm's that I am using
    > with those drivers are very snappy and so far I am very pleased.....I'm

    I will try and get a machine up with a CentOS dom0 and do some metrics.

    - KB
  • Edgar Rodolfo at Oct 14, 2010 at 9:05 am

    Note that not all microprocessors support the hardware virtualization
    extensions that KVM requires.
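    A quick check, assuming a Linux host (the cpuinfo flag names are
    standard: vmx for Intel VT-x, svm for AMD-V):

    ```shell
    # Count CPU threads that advertise hardware virtualization extensions.
    # 'vmx' = Intel VT-x, 'svm' = AMD-V; a count of 0 means KVM cannot do
    # full hardware-assisted virtualization on this host.
    grep -Ec 'vmx|svm' /proc/cpuinfo
    ```

    Also remember the extension may exist but be disabled in the BIOS.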
  • Tom Bishop at Oct 14, 2010 at 9:15 am
    When you get the numbers, please share them, as I for one would be very
    interested. I have read some figures on the web, but nothing recent,
    and I just don't have the time right now to benchmark anything myself.
  • Bart Swedrowski at Oct 14, 2010 at 9:16 am
    Hi Karanbir,
    On 14 October 2010 19:59, Karanbir Singh wrote:
    > On 10/14/2010 07:48 AM, Tom Bishop wrote:
    >> I think xen is still on top in terms of performance and features....now
    >
    > that is indeed what it 'feels' like, but I'm quite keen on putting some
    > numbers on that.
    I did some testing a while ago on one of the EQ machines that I got
    from hetzner.de. Full spec of the machine was as follows:

    * Intel® Core™ i7-920
    * 8 GB DDR3 RAM
    * 2 x 750 GB SATA-II HDD

    It's nothing big, but even so the results are quite interesting. All
    tests were performed on CentOS 5.5 x86_64 with PostgreSQL 8.4 (from the
    CentOS repos).

    I ran some PostgreSQL pgbench tests as well as Bonnie++ tests. The
    PostgreSQL testing was divided into two tests with three runs each (to
    get an idea of the average). The commands I used for testing were:
    dropdb pgbench && sync && sleep 3 && createdb pgbench && sync && sleep 3
    pgbench -i -s 100 -U postgres -d pgbench && sync && sleep 3
    pgbench -c 10 -t 5000 -s 100 -U postgres -d pgbench 2>/dev/null && sync \
    && sleep 3 && pgbench -c 10 -t 5000 -s 100 -U postgres -d pgbench 2>/dev/null \
    && sync && sleep 3 && pgbench -c 10 -t 5000 -s 100 -U postgres -d pgbench 2>/dev/null \
    && sync && sleep 3
    Now results. First CentOS5/x86_64 without any virtualisation, without
    any PostgreSQL optimisation:

    -bash-3.2$ pgbench -c 10 -t 5000 -s 100 -U postgres -d pgbench 2>/dev/null
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 141.191292 (including connections establishing)
    tps = 141.196776 (excluding connections establishing)
    -bash-3.2$ pgbench -c 10 -t 5000 -s 100 -U postgres -d pgbench 2>/dev/null
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 156.479561 (including connections establishing)
    tps = 156.486222 (excluding connections establishing)
    -bash-3.2$ pgbench -c 10 -t 5000 -s 100 -U postgres -d pgbench 2>/dev/null
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 164.880109 (including connections establishing)
    tps = 164.888009 (excluding connections establishing)
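    As a quick sanity check on those numbers, averaging the three
    "including connections" figures with a one-liner:

    ```shell
    # Mean of the three unoptimised bare-metal "including connections" tps
    # figures quoted above.
    printf '141.191292\n156.479561\n164.880109\n' \
        | awk '{ sum += $1 } END { printf "%.1f tps\n", sum / NR }'
    # → 154.2 tps
    ```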

    Now after optimisation (shared_buffers, effective_cache_size etc.):

    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 403.430951 (including connections establishing)
    tps = 403.474562 (excluding connections establishing)
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 336.060764 (including connections establishing)
    tps = 336.093214 (excluding connections establishing)
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 446.607705 (including connections establishing)
    tps = 446.664466 (excluding connections establishing)

    Now a KVM-based VM with 7GB RAM and 8 CPUs, using virtio and LVM
    partitions as the backend.
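    As a rough sketch (not the poster's actual setup), a guest with this
    geometry could be started along these lines; the volume path is made
    up, and on CentOS 5 such guests are normally defined through libvirt
    rather than run by hand:

    ```shell
    # Hypothetical example only: 7 GB RAM, 8 vCPUs, an LVM logical volume
    # attached as a virtio block device, and a virtio network interface.
    qemu-kvm -m 7168 -smp 8 \
        -drive file=/dev/vg0/pgbench-vm,if=virtio,cache=none \
        -net nic,model=virtio -net tap
    ```

    The cache= setting matters a lot for disk benchmarks, so it is worth
    recording it alongside any results.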

    PostgreSQL results *w/o* optimisation.

    -bash-3.2$ pgbench -c 10 -t 5000 -s 100 -U postgres -d pgbench
    2>/dev/null && sync && sleep 3 && pgbench -c 10 -t 5000 -s 100 -U
    postgres -d pgbench 2>/dev/null && sync && sleep 3 && pgbench -c 10 -t
    5000 -s 100 -U postgres -d pgbench 2>/dev/null && sync && sleep 3
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 124.578488 (including connections establishing)
    tps = 124.585776 (excluding connections establishing)
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 140.451736 (including connections establishing)
    tps = 140.463105 (excluding connections establishing)
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 148.091563 (including connections establishing)
    tps = 148.102254 (excluding connections establishing)

    PostgreSQL tests *with* optimisation:

    -bash-3.2$ pgbench -c 10 -t 5000 -s 100 -U postgres -d pgbench
    2>/dev/null && sync && sleep 3 && pgbench -c 10 -t 5000 -s 100 -U
    postgres -d pgbench 2>/dev/null && sync && sleep 3 && pgbench -c 10 -t
    5000 -s 100 -U postgres -d pgbench 2>/dev/null && sync && sleep 3
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 230.695831 (including connections establishing)
    tps = 230.734357 (excluding connections establishing)
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 207.535243 (including connections establishing)
    tps = 207.572818 (excluding connections establishing)
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 206.664120 (including connections establishing)
    tps = 206.695176 (excluding connections establishing)

    And, finally, a Xen-based VM with 7GB RAM and 8 CPUs, using LVM
    partitions as the backend. PostgreSQL test results *w/o* optimisation:

    -bash-3.2$ pgbench -c 10 -t 5000 -s 100 -U postgres -d pgbench
    2>/dev/null && sync && sleep 3 && pgbench -c 10 -t 5000 -s 100 -U
    postgres -d pgbench 2>/dev/null && sync && sleep 3 && pgbench -c 10 -t
    5000 -s 100 -U postgres -d pgbench 2>/dev/null && sync && sleep 3
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 126.554719 (including connections establishing)
    tps = 126.562829 (excluding connections establishing)
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 135.472197 (including connections establishing)
    tps = 135.481690 (excluding connections establishing)
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench

    ... and *with* optimisation:

    -bash-3.2$ pgbench -c 10 -t 5000 -s 100 -U postgres -d pgbench
    2>/dev/null && sync && sleep 3 && pgbench -c 10 -t 5000 -s 100 -U
    postgres -d pgbench 2>/dev/null && sync && sleep 3 && pgbench -c 10 -t
    5000 -s 100 -U postgres -d pgbench 2>/dev/null && sync && sleep 3
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 312.133362 (including connections establishing)
    tps = 312.186309 (excluding connections establishing)
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 203.123398 (including connections establishing)
    tps = 203.146153 (excluding connections establishing)
    pghost: pgport: nclients: 10 nxacts: 5000 dbName: pgbench
    transaction type: TPC-B (sort of)
    scaling factor: 100
    number of clients: 10
    number of transactions per client: 5000
    number of transactions actually processed: 50000/50000
    tps = 279.864975 (including connections establishing)
    tps = 279.910306 (excluding connections establishing)
    From my tests it came out that Xen still outperforms KVM, especially in
    disk IO performance. As my applications are dependent on databases (and
    which aren't these days?), I have kept on using Xen.

    Another thing I like about Xen is that you can easily mount a guest's
    LVM partitions on the host OS and work with them should you need to. In
    KVM you've got this double LVM layer, which makes that difficult. I
    know there are ways around it, but it just seems a bit easier and more
    straightforward in Xen at the moment.
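    For what it's worth, the usual workaround on the KVM side looks roughly
    like this; device, VG, and mount-point names are illustrative, and it
    assumes the kpartx and lvm2 tools are installed:

    ```shell
    # Expose the partition table inside a guest's disk LV on the host,
    # then activate the guest's own (inner) volume group.
    kpartx -av /dev/vg0/guest-disk      # creates /dev/mapper/* partition nodes
    vgscan && vgchange -ay guestvg      # guestvg = the VG defined inside the guest
    mount /dev/guestvg/root /mnt/guest  # inspect or repair the guest filesystem

    # ...and undo it afterwards, before booting the guest again:
    umount /mnt/guest
    vgchange -an guestvg
    kpartx -dv /dev/vg0/guest-disk
    ```

    Never leave the inner VG active while the guest is running, or both
    sides can write to the same volume.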

    Regards.
  • Pasi Kärkkäinen at Oct 16, 2010 at 2:11 pm

    On Thu, Oct 14, 2010 at 02:16:42PM +0100, Bart Swedrowski wrote:
    > Hi Karanbir,
    >
    > On 14 October 2010 19:59, Karanbir Singh wrote:
    >> On 10/14/2010 07:48 AM, Tom Bishop wrote:
    >>> I think xen is still on top in terms of performance and features....now
    >>
    >> that is indeed what it 'feels' like, but I'm quite keen on putting some
    >> numbers on that.
    >
    > I have done some testing some time ago on one of the EQ machines that
    > I got from hetzner.de. Full spec of the machine was as following:
    >
    > * Intel® Core™ i7-920
    > * 8 GB DDR3 RAM
    > * 2 x 750 GB SATA-II HDD
    >
    > It's nothing big but even though results are quite interesting. All
    > tests were performed on CentOS 5.5 x86_64 with PostgreSQL 8.4 (from
    > CentOS repos).
    Note that 64bit Xen guests should be HVM, not PV, for best performance.
    Xen HVM guests obviously still need to have PV-on-HVM drivers installed.

    32bit Xen guests can be PV.

    -- Pasi
  • Dennis Jacobfeuerborn at Oct 16, 2010 at 3:58 pm

    On 10/16/2010 08:11 PM, Pasi Kärkkäinen wrote:
    On Thu, Oct 14, 2010 at 02:16:42PM +0100, Bart Swedrowski wrote:
    Hi Karanbir,

    On 14 October 2010 19:59, Karanbir Singh wrote:
    On 10/14/2010 07:48 AM, Tom Bishop wrote:
    I think xen is still on top in terms of performance and features....now
    that is indeed what it 'feels' like, but I'm quite keen on putting some
    numbers on that.
    I have done some testing some time ago on one of the EQ machines that
    I got from hetzner.de. Full spec of the machine was as following:

    * Intel® Core™ i7-920
    * 8 GB DDR3 RAM
    * 2 x 750 GB SATA-II HDD

    It's nothing big but even though results are quite interesting. All
    tests were performed on CentOS 5.5 x86_64 with PostgreSQL 8.4 (from
    CentOS repos).
    Note that 64bit Xen guests should be HVM, not PV, for best performance.
    Xen HVM guests obviously still need to have PV-on-HVM drivers installed.

    32bit Xen guests can be PV.
    Hm, why would HVM be faster than PV for 64 bit guests?

    Regards,
    Dennis
  • Grant McWilliams at Oct 17, 2010 at 1:44 pm

    On Sat, Oct 16, 2010 at 12:58 PM, Dennis Jacobfeuerborn wrote:
    On 10/16/2010 08:11 PM, Pasi Kärkkäinen wrote:
    On Thu, Oct 14, 2010 at 02:16:42PM +0100, Bart Swedrowski wrote:
    Hi Karanbir,

    On 14 October 2010 19:59, Karanbir Singh wrote:
    On 10/14/2010 07:48 AM, Tom Bishop wrote:
    I think xen is still on top in terms of performance and features....now
    that is indeed what it 'feels' like, but I'm quite keen on putting some
    numbers on that.
    I have done some testing some time ago on one of the EQ machines that
    I got from hetzner.de. Full spec of the machine was as following:
    Note that 64bit Xen guests should be HVM, not PV, for best performance.
    Xen HVM guests obviously still need to have PV-on-HVM drivers installed.

    32bit Xen guests can be PV.
    Hm, why would HVM be faster than PV for 64 bit guests?

    Regards,
    Dennis
    lol, there seems to be a lot of hearsay surrounding performance and Xen.



    Grant McWilliams

    Some people, when confronted with a problem, think "I know, I'll use
    Windows."
    Now they have two problems.
  • Pasi Kärkkäinen at Oct 19, 2010 at 3:41 am

    On Sat, Oct 16, 2010 at 09:58:15PM +0200, Dennis Jacobfeuerborn wrote:
    On 10/16/2010 08:11 PM, Pasi Kärkkäinen wrote:
    On Thu, Oct 14, 2010 at 02:16:42PM +0100, Bart Swedrowski wrote:
    Hi Karanbir,

    On 14 October 2010 19:59, Karanbir Singh wrote:
    On 10/14/2010 07:48 AM, Tom Bishop wrote:
    I think xen is still on top in terms of performance and features....now
    that is indeed what it 'feels' like, but I'm quite keen on putting some
    numbers on that.
    I have done some testing some time ago on one of the EQ machines that
    I got from hetzner.de. Full spec of the machine was as following:

    * Intel® Core™ i7-920
    * 8 GB DDR3 RAM
    * 2 x 750 GB SATA-II HDD

    It's nothing big but even though results are quite interesting. All
    tests were performed on CentOS 5.5 x86_64 with PostgreSQL 8.4 (from
    CentOS repos).
    Note that 64bit Xen guests should be HVM, not PV, for best performance.
    Xen HVM guests obviously still need to have PV-on-HVM drivers installed.

    32bit Xen guests can be PV.
    Hm, why would HVM be faster than PV for 64 bit guests?
    It's because of the x86_64 architecture, afaik.

    There was some good technical explanation about it,
    but I can't remember the url now.

    -- Pasi
  • Dennis Jacobfeuerborn at Oct 19, 2010 at 6:47 am

    On 10/19/2010 09:41 AM, Pasi Kärkkäinen wrote:
    On Sat, Oct 16, 2010 at 09:58:15PM +0200, Dennis Jacobfeuerborn wrote:
    On 10/16/2010 08:11 PM, Pasi Kärkkäinen wrote:
    On Thu, Oct 14, 2010 at 02:16:42PM +0100, Bart Swedrowski wrote:
    Hi Karanbir,

    On 14 October 2010 19:59, Karanbir Singh wrote:
    On 10/14/2010 07:48 AM, Tom Bishop wrote:
    I think xen is still on top in terms of performance and features....now
    that is indeed what it 'feels' like, but I'm quite keen on putting some
    numbers on that.
    I have done some testing some time ago on one of the EQ machines that
    I got from hetzner.de. Full spec of the machine was as following:

    * Intel® Core™ i7-920
    * 8 GB DDR3 RAM
    * 2 x 750 GB SATA-II HDD

    It's nothing big but even though results are quite interesting. All
    tests were performed on CentOS 5.5 x86_64 with PostgreSQL 8.4 (from
    CentOS repos).
    Note that 64bit Xen guests should be HVM, not PV, for best performance.
    Xen HVM guests obviously still need to have PV-on-HVM drivers installed.

    32bit Xen guests can be PV.
    Hm, why would HVM be faster than PV for 64 bit guests?
    It's because of the x86_64 architecture, afaik.

    There was some good technical explanation about it,
    but I can't remember the url now.
    In that case I'll have to call this advice extremely bogus and you probably
    should refrain from passing it on. The only way I can see this being true
    is some weird corner case.

    Regards,
    Dennis
  • Jerry Franz at Oct 19, 2010 at 7:16 am

    On 10/19/2010 03:47 AM, Dennis Jacobfeuerborn wrote:
    On 10/19/2010 09:41 AM, Pasi Kärkkäinen wrote:

    It's because of the x86_64 architecture, afaik.

    There was some good technical explanation about it,
    but I can't remember the url now.
    In that case I'll have to call this advice extremely bogus and you probably
    should refrain from passing it on. The only way I can see this being true
    is some weird corner case.
    There appear to be some interactions with the Intel VT-d processor features.

    http://www.xen.org/files/xensummit_intel09/xensummit2009_IOVirtPerf.pdf

    If I understand that paper correctly, HVM+VT-d outperforms PV by quite a
    lot (if you have VT-d support on your system).
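    A rough way to check for both features on a Linux host (kernel message
    wording varies across versions, so treat this only as a heuristic):

    ```shell
    # CPU-side extensions (VT-x / AMD-V):
    grep -m1 -Eo 'vmx|svm' /proc/cpuinfo

    # Chipset-side IOMMU (VT-d): when present and enabled in the BIOS, the
    # kernel logs DMAR/IOMMU lines during boot.
    dmesg | grep -iE 'dmar|iommu'
    ```

    No DMAR/IOMMU output usually means VT-d is absent or disabled in the
    BIOS, even if the CPU flags are there.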

    --
    Benjamin Franz
  • Lucas Timm LH at Oct 19, 2010 at 8:21 am
    Hi guys,

    Sorry for joining this thread late. I compared XenSource with KVM (and,
    by the way, VMware Server 2) back in the CentOS 5.2 days (2008) for a
    college project. At that time Xen had better performance than the
    others, but I tested on my desktop: a Core 2 Duo E6420 with 3GB RAM.
    All the papers I used are available in Brazilian Portuguese; if anyone
    is interested, I can send them. But *IMHO* it isn't worth translating
    the work into English because, as I said, I did it on CentOS 5.2 (2008)
    and a *LOT* has changed in all these hypervisors since then.

    Anyway, here at my job I have two IBM x3400s available (two Xeon 5504
    sockets, 8GB RAM and 2 x 160GB disks [SAS? SATA? I really don't know]),
    and they work perfectly with Xen and KVM (both served as Dom0 before we
    replaced them with rack-mounted machines). I really like x86
    virtualization, so I could test the hypervisors again on these real
    servers and write about it in my free time, but I need a methodology to
    follow for the benchmark (I'm not good at performance matters). If
    someone else is interested in doing it with me, it would be really fun.





    --
    Lucas Timm, Goiânia/GO.
    http://timmerman.wordpress.com

    (62) 8198-0867
  • Dennis Jacobfeuerborn at Oct 19, 2010 at 8:32 am

    On 10/19/2010 01:16 PM, Jerry Franz wrote:
    On 10/19/2010 03:47 AM, Dennis Jacobfeuerborn wrote:
    On 10/19/2010 09:41 AM, Pasi Kärkkäinen wrote:

    It's because of the x86_64 architecture, afaik.

There was some good technical explanation about it,
but I can't remember the URL now.
    In that case I'll have to call this advice extremely bogus and you probably
    should refrain from passing it on. The only way I can see this being true
    is some weird corner case.
    There appear to be some interactions with the Intel VT-d processor features.

    http://www.xen.org/files/xensummit_intel09/xensummit2009_IOVirtPerf.pdf

    If I understand that paper correctly, HVM+VT-d outperforms PV by quite a
    lot (if you have VT-d support on your system).
    Thanks for that link. Just to make my criticism of the initial claim more
    clear: I don't claim that HVM can never be faster than PV but that you need
    to understand when exactly this is the case. For example I'm not sure that
    x86_64 vs. x86 really enters into this but I can definitely see VT-d making
    an impact there.

    Regards,
    Dennis
  • Grant McWilliams at Oct 20, 2010 at 2:12 am

    If I understand that paper correctly, HVM+VT-d outperforms PV by quite a
    lot (if you have VT-d support on your system).
    Thanks for that link. Just to make my criticism of the initial claim more
    clear: I don't claim that HVM can never be faster than PV but that you need
    to understand when exactly this is the case. For example I'm not sure that
    x86_64 vs. x86 really enters into this but I can definitely see VT-d making
    an impact there.

    Regards,
    Dennis

Even though this is Intel talking, I'd still be very sceptical of getting
those numbers, since this is quite the opposite of what I've seen.
Maybe VT-d is getting good enough to actually accelerate I/O operations,
but even so that would only happen on the latest hardware.

    I will say that Xen has a really long packet path though.

    Grant McWilliams
  • Dennis Jacobfeuerborn at Oct 20, 2010 at 7:35 am

    On 10/20/2010 08:12 AM, Grant McWilliams wrote:
Even though this is Intel talking, I'd still be very sceptical of getting
those numbers, since this is quite the opposite of what I've seen.
Maybe VT-d is getting good enough to actually accelerate I/O operations,
but even so that would only happen on the latest hardware.

    I will say that Xen has a really long packet path though.
    Being skeptical is the best approach in the absence of
    verifiable/falsifiable data. Today or tomorrow I'll get my hands on a new
    host system and although it is supposed to go into production immediately I
    will probably find some time to do some rudimentary benchmarking in that
    regard to see if this is worth investigating further. Right now I'm
    planning to use fio for block device measurements but don't know any decent
    (and uncomplicated) network i/o benchmarking tools. Any ideas what tools I
    could use to quickly get some useful data on this from the machine?

    Regards,
    Dennis
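
As a sketch of what a repeatable fio run could look like, a small job file
along these lines (sizes, path, and runtime are placeholder assumptions, not
values from this thread), run identically on bare metal, a Xen domU, and a
KVM guest, would at least give directly comparable numbers; direct=1 is there
to bypass the guest page cache:

```ini
# Hypothetical fio job for guest block-I/O comparison; all values are
# placeholders to tune (size should exceed guest RAM to defeat caching).
[global]
ioengine=libaio
direct=1
time_based
runtime=60

[randread]
rw=randread
bs=4k
iodepth=16
size=1g
filename=/tmp/fio.testfile
```

Run with `fio jobfile.fio` in each environment and compare the reported
IOPS and latency figures.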
  • Karanbir Singh at Oct 20, 2010 at 7:52 am

    On 10/20/2010 12:35 PM, Dennis Jacobfeuerborn wrote:
    Being skeptical is the best approach in the absence of
    verifiable/falsifiable data. Today or tomorrow I'll get my hands on a new
    host system and although it is supposed to go into production immediately I
    will probably find some time to do some rudimentary benchmarking in that
    regard to see if this is worth investigating further. Right now I'm
    That sounds great. I've got a machine coming online in the next few days
as well and will do some testing on there. It's got 2 of these :

    Intel(R) Xeon(R) CPU E5310

    So not the newest/greatest, but should be fairly representative.
    planning to use fio for block device measurements but don't know any decent
    (and uncomplicated) network i/o benchmarking tools. Any ideas what tools I
    could use to quickly get some useful data on this from the machine?
    iozone and openssl speed tests are always a good thing to run as a 'warm
    up' to your app level testing. Since pgtest has been posted here
    already, I'd say that is definitely one thing to include so it creates a
    level of common-code-testing and comparison. mysql-bench is worth
hitting as well. I have a personal interest in web app delivery, so
apache-bench hosted from an external machine hitting domU's / VM's ( but
    more than 1 instance, and hitting more than 1 VM / domU at the same time
    ) would be good to have as well.

    And yes, publish lots of machine details and also details on the code /
    platform / versions used. I will try to do the same ( but will limit my
testing to what's already available in the distro )

    thanks

    - KB
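
A concrete starting point for the 'warm up' runs suggested above might look
like the following (hostnames, sizes, and request counts are placeholders to
be run against real hosts and guests, not settings anyone in the thread
agreed on):

```
# CPU: openssl's built-in speed test (same binary on host and guest)
openssl speed aes-256-cbc sha256

# Disk: iozone in automatic mode, capped so the file exceeds guest RAM
iozone -a -g 4G -f /tmp/iozone.tmp

# Web: apache-bench from an external box against one or more guests
ab -n 10000 -c 50 http://guest1.example.org/index.html
```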
  • Karanbir Singh at Oct 20, 2010 at 9:13 am

    On 10/20/2010 12:52 PM, Karanbir Singh wrote:
    iozone and openssl speed tests are always a good thing to run as a 'warm
iperf is another good tool to benchmark with; just make sure you run it
in both directions, and use a second real-iron box very close by.

    - KB
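
A minimal sketch of such a run, assuming classic iperf 2.x flags and a
placeholder hostname for the bare-metal box:

```
# On the nearby bare-metal box:
iperf -s

# On the VM / domU; -d tests both directions simultaneously
iperf -c baremetal.example.org -d -t 60 -i 10
```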
  • Tom Bishop at Oct 20, 2010 at 9:24 am
Ok, so I'd like to help. Since most folks have Intel chipsets, I have an AMD
4P (16-core) / 32 GB Opteron server running that we can get
some numbers on....but it would be nice if we could run apples to apples...I
have iozone loaded and can run that, but it would be nice to run it using the
same parameters....is there any way we could list the types of tests we would
like to run, and the actual command with options listed? Then we would have
something to compare and could at least level the playing field...KB, any
thoughts, is this a good idea?
  • Grant McWilliams at Oct 20, 2010 at 7:01 pm

    On Wed, Oct 20, 2010 at 6:24 AM, Tom Bishop wrote:

    Ok so I'd like to help, since most folks have Intel Chipsets, I have a AMD
    4p(16 core)/32gig memory opteron server that I'm running that we can get
    some numbers on....but it would be nice if we could run apples to apples...I
    have iozone loaded and can run that but would be nice to run using the same
    parameters....is there any way we could list the types of test we would like
    to run and the actual command with options listed and then we would have
    some thing to compare at least level the playing field...KB, any thoughts,
    is this a good idea?

    So what we're on the verge of doing here is creating a test set... I'd love
    to see a shell script that ran a bunch of tests, gathered data about the
    system and then created an archive that would then be uploaded to a website
which created graphs. Dreaming maybe, but it would be consistent. So what
goes in our test set?

Just a generic list, add to or take away from it..


    - phoronix test suite ?
    - iozone
    - kernbench
    - dbench
    - bonnie++
    - iperf
    - nbench


    The phoronix test suite has most tests in it in addition to many many
    others. Maybe a subset of those tests with the aim of testing Virtualization
    would be good?

    Grant McWilliams
  • Kelvin Edmison at Oct 20, 2010 at 7:45 pm
    On 20/10/10 7:01 PM, "Grant McWilliams" wrote:
    So what we're on the verge of doing here is creating a test set... I'd love to
    see a shell script that ran a bunch of tests, gathered data about the system
    and then created an archive that would then be uploaded to a website which
    created graphs. Dreaming maybe but it would be consistent. So what goes in our
    testset?

    Just a generic list, add to or take away from it..

    * phoronix test suite ?
    * iozone
    * kernbench
    * dbench
    * bonnie++
    * iperf
    * nbench

    The phoronix test suite has most tests in it in addition to many many others.
    Maybe a subset of those tests with the aim of testing Virtualization would be
    good?

    Grant McWilliams
    +1 for the Phoronix test suite. I was going to suggest it too.
    http://phoronix-test-suite.com/

    It can publish stats to a central server which the phoronix folks maintain,
    and it records the details of the server on which the test was performed.
    Not sure if it's smart enough to detect a VM though. My experience with it
    has been limited so far but generally positive.

    This isn't my data, but I think it's a good example of how pts can be used
    to compare results from different tests and scenarios.
    http://global.phoronix-test-suite.com/?k=profile&u=justapost-29384-19429-18161

    Regards,
    Kelvin
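
For anyone wanting to script this, the pts command-line flow is roughly the
following; the test selections are just examples, and the subcommand names
should be checked against whatever version you have installed:

```
phoronix-test-suite list-available-tests
phoronix-test-suite benchmark iozone dbench
# After the run, results can be pushed to the public comparison site:
phoronix-test-suite upload-result <saved-result-name>
```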
  • Karanbir Singh at Oct 21, 2010 at 6:29 am

    On 10/21/2010 12:01 AM, Grant McWilliams wrote:
    So what we're on the verge of doing here is creating a test set... I'd
    love to see a shell script that ran a bunch of tests, gathered data
    about the system and then created an archive that would then be uploaded
    to a website which created graphs. Dreaming maybe but it would be
    consistent. So what goes in our testset?
I am trying to create just that - a kickstart that will build a machine
as a Xen dom0, build 4 domU's, fire up puppet inside the domU's, do the
testing, and scp results into a central git repo. Then something similar
for KVM.

    will get the basic framework online today.

    - KB
  • Grant McWilliams at Oct 21, 2010 at 3:50 pm

    On Thu, Oct 21, 2010 at 3:29 AM, Karanbir Singh wrote:
    On 10/21/2010 12:01 AM, Grant McWilliams wrote:
    So what we're on the verge of doing here is creating a test set... I'd
    love to see a shell script that ran a bunch of tests, gathered data
    about the system and then created an archive that would then be uploaded
    to a website which created graphs. Dreaming maybe but it would be
    consistent. So what goes in our testset?
    I am trying to create just that - a kickstart that will build a machine
    as a Xen dom0, build 4 domU's, fire up puppet inside the domU's do the
    testing and scp results into a central git repo. Then something similar
    for KVM.

    will get the basic framework online today.

    - KB
    ______
Do you suppose you could get it to use the Phoronix Test Suite so we can
start to have measurable stats? We could do the same thing for any VM
software - even the ones that don't allow publishing stats in the EULA...

    I'm also wondering if we should do the whole test suite or a subset.
    Here is the list of tests..

    aio-stress
    apache
    battery-power-usage
    blogbench
    bork
    build-apache
    build-imagemagick
    build-linux-kernel
    build-mplayer
    build-mysql
    build-php
    bullet
    bwfirt
    byte
    c-ray
    cachebench
    compilebench
    compliance-acpi
    compliance-sensors
    compress-7zip
    compress-gzip
    compress-lzma
    compress-pbzip2
    crafty
    dbench
    dcraw
    doom3
    encode-ape
    encode-flac
    encode-mp3
    encode-ogg
    encode-wavpack
    espeak
    et
    etqw-demo-iqc
    etqw-demo
    etqw
    ffmpeg
    fhourstones
    fio
    fs-mark
    gcrypt
    geekbench
    gluxmark
    gmpbench
    gnupg
    graphics-magick
    gtkperf
    hdparm-read
    himeno
    hmmer
    idle-power-usage
    idle
    iozone
    j2dbench
    java-scimark2
    jgfxbat
    john-the-ripper
    juliagpu
    jxrendermark
    lightsmark
    mafft
    mandelbulbgpu
    mandelgpu
    mencoder
    minion
    mrbayes
    n-queens
    nero2d
    network-loopback
    nexuiz-iqc
    nexuiz
    npb
    openarena
    openssl
    opstone-svd
    opstone-svsp
    opstone-vsp
    padman
    pgbench
    phpbench
    postmark
    povray
    prey
    pybench
    pyopencl
    qgears2
    quake4
    ramspeed
    render-bench
    scimark2
    smallpt-gpu
    smallpt
    smokin-guns
    specviewperf10
    specviewperf9
    sqlite
    stream
    stresscpu2
    sudokut
    sunflow
    supertuxkart
    systester
    tachyon
    tiobench
    tremulous
    trislam
    tscp
    ttsiod-renderer
    unigine-heaven
    unigine-sanctuary
    unigine-tropics
    unpack-linux
    urbanterror
    ut2004-demo
    vdrift-fps-monitor
    vdrift
    video-cpu-usage
    video-extensions
    warsow
    wine-cloth
    wine-domino
    wine-fire2
    wine-hdr
    wine-metaballs
    wine-vf2
    wine-water
    x11perf
    x264
    xplane9-iqc
    xplane9
    yafray



    Grant McWilliams

  • Grant McWilliams at Oct 21, 2010 at 3:56 pm
    On Thu, Oct 21, 2010 at 12:50 PM, Grant McWilliams wrote:
Do you suppose you could get it to use the Phoronix Test Suite so we can
start to have measurable stats? We could do the same thing for any VM
software - even the ones that don't allow publishing stats in the EULA...

    I'm also wondering if we should do the whole test suite or a subset.
    Here is the list of tests..
One thing that I think probably needs to be modified for our needs is a dom0
controller to run various tests in each domU simultaneously and then collate
the data.
Virtual worlds are more complex than non-virtual ones. Sometimes something
runs great in one VM but drags when multiple VMs are being used.

    Grant McWilliams
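
As a sketch of that collation step (the per-guest result layout and the MB/s
unit here are illustrative assumptions, not a format anyone in the thread
agreed on):

```shell
# Hypothetical collation step for a dom0 test controller: each domU drops a
# one-line result file (throughput in MB/s) into a shared directory, and the
# controller averages them.
collate() {
    # $1 = directory holding per-guest result files named domU*
    awk '{ sum += $1; n++ } END { printf "mean %.1f MB/s over %d guests\n", sum/n, n }' "$1"/domU*
}

d=$(mktemp -d)
printf '210\n' > "$d/domU1"   # sample numbers, not real measurements
printf '190\n' > "$d/domU2"
printf '200\n' > "$d/domU3"
collate "$d"                  # prints: mean 200.0 MB/s over 3 guests
rm -rf "$d"
```

The same pattern extends to per-test subdirectories, with one collated line
per benchmark fed into whatever graphs the website generates.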
  • Todd Deshane at Oct 25, 2010 at 11:14 am

    On Thu, Oct 21, 2010 at 3:56 PM, Grant McWilliams wrote:


One thing that I think probably needs to be modified for our needs is a dom0
controller to run various tests in each domU simultaneously and then collate
the data.
Virtual worlds are more complex than non-virtual ones. Sometimes something
runs great in one VM but drags when multiple VMs are being used.
    I was also going to mention that we should look at scalability and
    performance isolation.
    Some references and previous studies here:

    http://todddeshane.net/research/Xen_versus_KVM_20080623.pdf
    http://clarkson.edu/~jnm/publications/isolation_ExpCS_FINALSUBMISSION.pdf
    http://clarkson.edu/~jnm/publications/freenix04-clark.pdf

    Also, is there anybody that has access to or would be able to get
    access to run SPECvirt?
    http://www.spec.org/virt_sc2010/

    Thanks,
    Todd
  • Grant McWilliams at Oct 26, 2010 at 2:38 pm

    I was also going to mention that we should look at scalability and
    performance isolation.
    Some references and previous studies here:

    http://todddeshane.net/research/Xen_versus_KVM_20080623.pdf
    http://clarkson.edu/~jnm/publications/isolation_ExpCS_FINALSUBMISSION.pdf
    http://clarkson.edu/~jnm/publications/freenix04-clark.pdf

    Also, is there anybody that has access to or would be able to get
    access to run SPECvirt?
    http://www.spec.org/virt_sc2010/

    Thanks,
    Todd

So you've already done a lot of this then, Todd? Should we just be making a
standardized test out of your work?


    Grant McWilliams

    Some people, when confronted with a problem, think "I know, I'll use
    Windows."
    Now they have two problems.
  • Eric Searcy at Oct 26, 2010 at 4:15 pm

    On Oct 25, 2010, at 8:14 AM, Todd Deshane wrote:

    I was also going to mention that we should look at scalability and
    performance isolation.
    Some references and previous studies here:

    http://todddeshane.net/research/Xen_versus_KVM_20080623.pdf
    http://clarkson.edu/~jnm/publications/isolation_ExpCS_FINALSUBMISSION.pdf
    http://clarkson.edu/~jnm/publications/freenix04-clark.pdf
I only got as far as the top one. One concern: the nested comment "We believe that KVM may have performed better than Xen in terms of I/O due to disk caching" makes me skeptical of the value of the results if this wasn't taken into consideration (in other words, I think it is a much bigger problem than the aforementioned comment gives credit to, such that it ought to at least be addressed in the concluding remarks) ... for instance, if my VM load-outs use all but ~384M of total memory (that being the amount I leave to the host, most of it used), then there's not going to be much extra RAM for memory cache/buffers on the host side (depending greatly on what vm.swappiness camp you are in). Based on the author's result output [1] (since the VM parameters aren't given in the paper), as relates to a disk-intensive test they in effect gave 2G of potential caching to Xen but ~4G to KVM. Based at least on the amount of free memory on my Xen/KVM hosts, I don't think this "host memory cache bias" can be assumed to be a bonus trait that would normally be present for KVM. (And of course a cache bias would be even more noticeable in the 256MB Phoronix test and in the 4x128M isolation tests [2] ...)

    [1] http://web2.clarkson.edu/projects/virtualization/benchvm/results/performance/
    [2] http://web2.clarkson.edu/projects/virtualization/benchvm/results/isolation/xen/memory/specweb1/SPECweb_Support.20080614-100931.html

    BTW, I do realize you're pointing out that we should look at scalability and isolation, and here I am just giving critical feedback on a 3 year old paper ... yes you're right those are important! I just want to make sure the tests are fair ;-)

    Eric
  • Todd Deshane at Oct 26, 2010 at 8:49 pm

    On Tue, Oct 26, 2010 at 4:15 PM, Eric Searcy wrote:
I only got as far as the top one. One concern: the nested comment "We believe that KVM may have performed better than Xen in terms of I/O due to disk caching" makes me skeptical of the value of the results if this wasn't taken into consideration (in other words, I think it is a much bigger problem than the aforementioned comment gives credit to, such that it ought to at least be addressed in the concluding remarks) ... for instance, if my VM load-outs use all but ~384M of total memory (that being the amount I leave to the host, most of it used), then there's not going to be much extra RAM for memory cache/buffers on the host side (depending greatly on what vm.swappiness camp you are in). Based on the author's result output [1] (since the VM parameters aren't given in the paper), as relates to a disk-intensive test they in effect gave 2G of potential caching to Xen but ~4G to KVM. Based at least on the amount of free memory on my Xen/KVM hosts, I don't think this "host memory cache bias" can be assumed to be a bonus trait that would normally be present for KVM. (And of course a cache bias would be even more noticeable in the 256MB Phoronix test and in the 4x128M isolation tests [2] ...)

    [1] http://web2.clarkson.edu/projects/virtualization/benchvm/results/performance/
    [2] http://web2.clarkson.edu/projects/virtualization/benchvm/results/isolation/xen/memory/specweb1/SPECweb_Support.20080614-100931.html

BTW, I do realize you're pointing out that we should look at scalability and isolation, and here I am just giving critical feedback on a 3 year old paper ... yes you're right those are important! I just want to make sure the tests are fair ;-)
These are old tests now and not necessarily perfect, but they were Xen
and KVM on the same kernel. KVM was very early on and not necessarily
shown in its best light. The Xen dom0/domU kernel was also not the best
light for Xen. The point was to try to compare them on the same kernel.
Xen and KVM on the same kernel has since happened in the form of OpenSUSE's
forward port of the Xen dom0 kernel alongside the KVM that is already in
mainline. The pv_ops kernel is not fully mainline, but is getting
close. Some distros now have Xen dom0 kernels based on the pv_ops
kernel, which could also run KVM.

    In any case, some updated numbers would be very welcome. And yes,
    taking scalability, performance isolation, and other factors into
    account is important.

    I have been involved in quite a few performance studies over the years
    and I will try to give advice and help as I can.

    On Tue, Oct 26, 2010 at 2:38 PM, Grant McWilliams
    wrote:
    So you've already done a lot of this then Todd? Should we just be making a standardized test out of your work?
    At Clarkson, we tried to build a standardized benchmarking tool called
    benchvm, but it has been stuck in an alpha state for a while:
    http://code.google.com/p/benchvm/

    I can dig up the rejected academic paper on benchvm if people are
    interested; feel free to email me privately.

    I wonder if using things like Puppet, the Phoronix Test Suite, etc.
    is a simpler way to go? I guess it all depends on how general a
    benchmarking tool is needed.

    Thanks,
    Todd
  • Grant McWilliams at Oct 27, 2010 at 11:00 am
    These are old tests now and not necessarily perfect, but they ran Xen
    and KVM on the same kernel. KVM was very early on at the time and not
    necessarily shown in its best light; the Xen dom0/domU kernel was not
    the best light for Xen either. The point was to compare them on the
    same kernel. Xen and KVM on the same kernel has since happened in the
    form of OpenSUSE's forward port of the Xen dom0 kernel combined with
    the KVM that is already in mainline. The pv_ops dom0 kernel is not
    fully mainline, but is getting close. Some distros now have Xen dom0
    kernels based on the pv_ops kernel, which could also run KVM.

    I wonder if using things like Puppet, the Phoronix Test Suite, etc.
    is a simpler way to go? I guess it all depends on how general a
    benchmarking tool is needed.

    Thanks,
    Todd
    Todd, I think there's more than one way to look at this as well. As Xen
    becomes more of a product and less of an installable package
    it will probably have to be profiled as a product. Say benchmark XCP on
    particular hardware and benchmark RHEL KVM on the same hardware and ESX as
    well.
    It makes sense to benchmark a Xen kernel and a KVM kernel if we have
    that flexibility, but that flexibility is starting to shrink. Another
    test that I don't think is THAT important anymore is testing Xen with
    and without pv_ops kernels. There were some rumors going around that
    the old 2.6.18 kernel was faster than the new pv_ops one. I was going
    to put together tests but never got to it. Not that it makes much
    difference for the future, because the old kernel is fast going away.

    What I'd like to have is a standardized test, with a way for multiple
    people to upload and compare results, so we can run it on as many
    systems as possible. Data correlation could then be done on the
    combined data. Currently we have one test over here and another over
    there, and the tests never seem to be updated or even run again to
    verify results. Maybe none of it matters as the hypervisor becomes
    inconsequential.
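    To make that concrete, here is a minimal sketch of what pooling uploaded results could look like (the record format, field names, and numbers are all made up for illustration, not from any existing tool):

    ```python
    import statistics

    def compare_results(results):
        """Group uploaded benchmark records by hypervisor and report the
        mean score. Each record is a dict like
        {"hypervisor": "xen", "test": "pgbench", "score": 1234.0},
        as might be uploaded by different testers from different machines.
        """
        by_hv = {}
        for rec in results:
            by_hv.setdefault(rec["hypervisor"], []).append(rec["score"])
        return {hv: statistics.mean(scores) for hv, scores in by_hv.items()}

    # Example with made-up numbers from three hypothetical uploads:
    uploaded = [
        {"hypervisor": "xen", "test": "pgbench", "score": 1200.0},
        {"hypervisor": "xen", "test": "pgbench", "score": 1300.0},
        {"hypervisor": "kvm", "test": "pgbench", "score": 1100.0},
    ]
    print(compare_results(uploaded))  # {'xen': 1250.0, 'kvm': 1100.0}
    ```

    A real service would of course also need to record hardware, kernel, and guest configuration alongside each score, or the correlation is meaningless.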

    I'm going to look at the tests you've done as soon as time permits.

    Grant McWilliams

    Some people, when confronted with a problem, think "I know, I'll use
    Windows."
    Now they have two problems.
  • Todd Deshane at Oct 28, 2010 at 11:11 am
    Hi Grant,

    On Wed, Oct 27, 2010 at 11:00 AM, Grant McWilliams
    wrote:
    Todd, I think there's more than one way to look at this as well. As Xen
    becomes more of a product and less of an installable package
    it will probably have to be profiled as a product.
    The XCP devs hope that XCP will eventually be available via a package
    install (for example something similar to yum install xcp).
    Say benchmark XCP on
    particular hardware and benchmark RHEL KVM on the same hardware and ESX as
    well.
    It makes sense to benchmark a XEN kernel and a KVM kernel if we have that
    flexibility but that's starting to shrink. Another test that I don't think
    is THAT important anymore is testing Xen with and without pvops kernels.
    There were some rumors going around that the old 2.6.18 kernel was faster
    than the new pvops. I was going to put together tests and never got to it.
    Not that it makes any difference in the future because the old kernel is
    fast going
    away.
    Yeah, the old one is going away; comparing the forward-port kernel
    (for example from OpenSUSE) to the new pv_ops one is what we will want
    to do. The pv_ops one may be better or worse under certain loads, but
    unless we test, how will we know? Once we can demonstrate it, the
    pv_ops kernel can be improved as needed too.
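    As a toy illustration of the per-kernel comparison meant here, each run could be tagged with the kernel it ran under so results can be grouped later (the workload below is a trivial stand-in, not a real benchmark):

    ```python
    import platform
    import time

    def run_microbenchmark(iterations=200000):
        """Time a small CPU-bound loop and tag the result with the running
        kernel version, so runs booted under different kernels (e.g. a
        forward-port dom0 kernel vs. a pv_ops one) can be compared later.
        Timings are machine-dependent; only relative numbers matter.
        """
        start = time.perf_counter()
        total = 0
        for i in range(iterations):
            total += i * i  # trivial CPU-bound work
        elapsed = time.perf_counter() - start
        return {"kernel": platform.release(), "seconds": elapsed, "checksum": total}

    print(run_microbenchmark())
    ```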
    What I'd like to have is a standardized test with a way of multiple people
    uploading it and comparing results so we can run it on as many systems as
    possible.
    Data correlation could then be done on the data. Currently we have one test
    over here and another over there and the tests never seem to be updated or
    even
    run again to verify results. Maybe none of it matters as the hypervisor
    becomes inconsequential.
    Great, yes, that is what research at Clarkson University tried to do.
    As far as I know no one at Clarkson is actively working on it at the
    moment, but I will check with them when I get a chance.
    I'm going to look at the tests you've done as soon as time permits.
    What we completed were some basic things. There is still more to test.

    Thanks,
    Todd
  • Pasi Kärkkäinen at Oct 24, 2010 at 3:52 pm

    On Tue, Oct 19, 2010 at 12:47:03PM +0200, Dennis Jacobfeuerborn wrote:
    On 10/19/2010 09:41 AM, Pasi Kärkkäinen wrote:
    On Sat, Oct 16, 2010 at 09:58:15PM +0200, Dennis Jacobfeuerborn wrote:
    On 10/16/2010 08:11 PM, Pasi Kärkkäinen wrote:
    On Thu, Oct 14, 2010 at 02:16:42PM +0100, Bart Swedrowski wrote:
    Hi Karanbir,

    On 14 October 2010 19:59, Karanbir Singh wrote:
    On 10/14/2010 07:48 AM, Tom Bishop wrote:
    I think xen is still on top in terms of performance and features....now
    that is indeed what it 'feels' like, but I'm quite keen on putting some
    numbers on that.
    I have done some testing some time ago on one of the EQ machines that
    I got from hetzner.de. Full spec of the machine was as following:

    * Intel® Core™ i7-920
    * 8 GB DDR3 RAM
    * 2 x 750 GB SATA-II HDD

    It's nothing big, but even so the results are quite interesting. All
    tests were performed on CentOS 5.5 x86_64 with PostgreSQL 8.4 (from
    the CentOS repos).
    Note that 64bit Xen guests should be HVM, not PV, for best performance.
    Xen HVM guests obviously still need to have PV-on-HVM drivers installed.

    32bit Xen guests can be PV.
    Hm, why would HVM be faster than PV for 64 bit guests?
    It's because of the x86_64 architecture, afaik.

    There was a good technical explanation of it,
    but I can't remember the URL now.
    In that case I'll have to call this advice extremely bogus and you probably
    should refrain from passing it on. The only way I can see this being true
    is some weird corner case.
    It's not bogus, you can go ask on xen-devel :)

    -- Pasi
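    For reference, an HVM guest along the lines Pasi describes would be defined with something like the following xm-style config (all names, sizes, and paths here are illustrative, not taken from the thread):

    ```
    # Illustrative Xen HVM guest config; install PV-on-HVM drivers inside
    # the guest so disk and network use paravirtualized paths instead of
    # the fully emulated devices.
    builder = 'hvm'
    memory  = 2048
    vcpus   = 2
    disk    = ['phy:/dev/vg0/guest1,hda,w']
    vif     = ['bridge=xenbr0']
    ```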
  • John L. Magee at Oct 15, 2010 at 9:00 am
    One thing to possibly consider with PostgreSQL performance especially,
    is that when using KVM VMs for some applications, PostgreSQL could be
    run native. This is a viable approach with KVM that could never work
    with Xen.



  • Karanbir Singh at Oct 15, 2010 at 5:42 pm

    On 10/15/2010 02:00 PM, John L. Magee wrote:
    One thing to possibly consider with PostgreSQL performance especially,
    is that when using KVM VMs for some applications, PostgreSQL could be
    run native. This is a viable approach with KVM that could never work
    with Xen.
    Can you expand on this a little bit please ?

    - KB
  • Compdoc at Oct 15, 2010 at 5:56 pm
    I think he's right. Run PostgreSQL on the centos host directly, rather than
    from within a guest. The vm guests could access the database over the
    virtual lan, so speed of access for guests on the same server wouldn't be an
    issue.
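    Concretely, the host-side setup being suggested might look like this (the 192.168.122.x addresses are libvirt's default NAT subnet and are an assumption here, not from the thread):

    ```
    # postgresql.conf on the CentOS host: also listen on the virtual bridge
    listen_addresses = 'localhost, 192.168.122.1'

    # pg_hba.conf: allow guests on the virtual LAN to authenticate
    host    all    all    192.168.122.0/24    md5
    ```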

    There are lots of ways of file sharing for example. You can share from
    within a linux or windows guest, or you could share directly from the centos
    host with samba or iSCSI.

    I get native speeds from guests, but I think running directly from the
    server is always going to be faster.
  • Karanbir Singh at Oct 15, 2010 at 6:03 pm

    On 10/15/2010 10:56 PM, compdoc wrote:
    I think he's right. Run PostgreSQL on the centos host directly, rather than
    from within a guest. The vm guests could access the database over the
    virtual lan, so speed of access for guests on the same server wouldn't be an
    issue.
    I don't understand why that would be an issue with Xen; quite a lot of
    hosting companies run MySQL on the dom0s and let all the VMs hosted on
    the box access it over a socket.

    - KB
