hi,


At the moment we run tests on a nightly basis, only in KVM VMs, and the
results don't go very far. I'd like to propose that we use the entire QA
bladecenter and run the full complement of deploys and tests, then post
the results to a status board that keeps some history (Stashboard?).


Going one step further, we should run the entire t_functional suite, as
well as start incorporating some of the, now stale, t_external tests.


At the moment we do deploys over HTTP and NFS, to local and remote
storage, under Xen, KVM, bare metal, and some VirtualBox and VMware
hypervisors, for i386 and x86_64.


I had a brief chat with Fabian about this yesterday, and we both think
it's doable in the short term. Does anyone else have thoughts about this,
or want to propose reporting mechanisms better suited to the intended
result: to give people a single view of the state of things, as a
confidence point, and also to let people easily add tests that are then
run nightly, so they can address their own corner cases?


Regards,


--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc


  • Alberto Sentieri at Mar 13, 2013 at 5:29 pm
    Sirs:


    I have been seeing a lot of weird things happening lately, ever since
    the recent 500-odd package update.


    After the latest updates, a CentOS 6.3 32-bit virtual machine running
    under VirtualBox 4.2.8 no longer boots.


    I have just updated my workstation to "Linux version
    2.6.32-358.2.1.el6.x86_64 (mockbuild at c6b8.bsys.dev.centos.org) (gcc
    version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Wed Mar 13
    00:26:49 UTC 2013", and it unexpectedly locked up when I started
    Thunderbird (there was a VM running under VirtualBox at the time).
    CTRL-ALT-BACKSPACE apparently completed the "locking job".


    Are you noticing that too?


    Thanks,


    Alberto Sentieri





  • Madhurranjan Mohaan at Mar 13, 2013 at 6:05 pm
    Hi Karanbir,


    Stashboard sounds good. We could also post metrics to a Graphite
    instance, based on what we need, and it will give you nice graphs,
    especially when you want to compare them with other metrics (in case
    they are comparable). +1 for the idea.


    Mamu




  • Karanbir Singh at Mar 14, 2013 at 6:35 pm
    hi mamu,

    On 03/13/2013 06:05 PM, Madhurranjan Mohaan wrote:
    Stashboard sounds good. We could also post metrics to a Graphite
    instance, based on what we need, and it will give you nice graphs,
    especially when you want to compare them with other metrics (in case
    they are comparable). +1 for the idea.

    I had a poke around Stashboard, and it's pretty complicated and
    cumbersome if we want to run it offline (i.e. not on App Engine);
    there are some other options as well.


    Having seen the deploy + test scenarios we run, what sort of metrics
    do you think we should shovel Graphite's way?


    --
    Karanbir Singh
    +44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
    GnuPG Key : http://www.karan.org/publickey.asc
  • Madhurranjan Mohaan at Mar 14, 2013 at 7:08 pm
    Hi Karanbir,


    Thanks for the email. Some initial thoughts:


    *Tests*:
    1. Number of commits across the test suite. Something as simple as
    git log --since <date> can be pumped into Graphite; you'll get an
    understanding of how many people are committing and how many requests
    are pending (see the sketch after this list).
    2. Test-suite runs: the number of tests that fail on versions 5 and 6,
    and across the several setups you have. This can be just the count.
    3. We could have one metric per package, with details, e.g.:
    package.httpd.tests.success - 4
    package.httpd.tests.failure - 1
    4. Any other metrics you need to gauge the stability of the test suites.
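
    As a rough, untested sketch of point 1 (the Graphite host, metric
    names, and repo path below are placeholders, not settled choices), a
    nightly cron job could count the day's commits and push the number
    over Graphite's plaintext protocol on port 2003:

    import socket
    import subprocess
    import time

    GRAPHITE_HOST = "graphite.example.org"  # placeholder host
    GRAPHITE_PORT = 2003  # Graphite's default plaintext listener

    def commits_since(repo, since="1 day ago"):
        # Count commits newer than `since` using git log --since.
        out = subprocess.check_output(
            ["git", "log", "--since", since, "--oneline"], cwd=repo)
        return len(out.splitlines())

    def send_metric(path, value, ts=None):
        # The plaintext protocol is one "path value timestamp" line.
        line = "%s %s %d\n" % (path, value, int(ts or time.time()))
        sock = socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT))
        sock.sendall(line.encode("ascii"))
        sock.close()

    # e.g. commits landed in t_functional over the last day
    send_metric("tests.t_functional.commits.daily",
                commits_since("/srv/qa/t_functional"))  # placeholder path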


    I have a few thoughts on querying below (either CSV or JSON).


    *Deploys*:
    1. Success/failures broken down by platform and mode of deploy.
    Example: you can dump simple metrics into Graphite, with 1 indicating
    success and 0 failure, and create a hierarchy as follows (this is
    just an example):

    deploy.kvm.64bit.http.success
    deploy.kvm.64bit.nfs.success
    deploy.xen.32bit.http.success
    etc.


    We could then write queries against the JSON API to see how deploys
    across KVM machines (e.g. deploy.kvm.*.*.success) have gone over the
    last X days, and perhaps alert via Zabbix based on whatever we think
    suitable. Graphite would just chart the data, so we would need
    something sitting on top of it; having said that, sending data to
    Graphite (with statsd on top) is simple, which is a point in its
    favour.


    We've used this approach for chef-client runs across 200-odd machines
    at work, and it has worked really well for us.


    We could have a slightly fancier UI, with Graphene showing metrics of
    your choice:
    http://jondot.github.com/graphene/


    Mamu


    PS: I read briefly about Stashboard after you wrote about it, but I
    have not tried it yet.




  • Tru Huynh at Mar 13, 2013 at 6:56 pm

    On Wed, Mar 13, 2013 at 05:09:41PM +0000, Karanbir Singh wrote:
    hi,

    Hi :)

    At the moment we run tests on a nightly basis, only in KVM VMs, and the
    results don't go very far. I'd like to propose that we use the entire QA
    bladecenter and run the full complement of deploys and tests, then post
    the results to a status board that keeps some history (Stashboard?).
    Or a plain static page (not fancy, but simple to generate): we could
    do something like http://releng.netbsd.org/ (rough sketch below).

    Going one step further, we should run the entire t_functional suite, as
    well as start incorporating some of the, now stale, t_external tests.
    http://releng.netbsd.org/cgi-bin/builds.cgi <- replace with t_*


    my 2 cents
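
    As a rough sketch of the static-page idea (the results data and
    output path are made up for illustration), a nightly script could
    render the results into one static HTML table that any plain web
    server can publish:

    import html
    import time

    # hypothetical results: (target, suite) -> "PASS" or "FAIL"
    results = {
        ("kvm/x86_64/http", "t_functional"): "PASS",
        ("xen/i386/nfs", "t_functional"): "FAIL",
    }

    rows = "\n".join(
        "<tr><td>%s</td><td>%s</td><td>%s</td></tr>"
        % (html.escape(target), html.escape(suite), status)
        for (target, suite), status in sorted(results.items()))

    page = ("<html><head><title>CentOS QA nightly status</title></head>\n"
            "<body><h1>Nightly results, %s</h1>\n"
            "<table border=\"1\"><tr><th>Target</th><th>Suite</th>"
            "<th>Result</th></tr>\n%s</table></body></html>"
            % (time.strftime("%Y-%m-%d"), rows))

    with open("status.html", "w") as out:  # then rsync to the web host
        out.write(page)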

    At the moment we do deploys over HTTP and NFS, to local and remote
    storage, under Xen, KVM, bare metal, and some VirtualBox and VMware
    hypervisors, for i386 and x86_64.
    We could detail the hypervisor setups (version, CPU on the hosts, ...)
    -> a public wiki page?


    Cheers,


    Tru
    --
    Tru Huynh (mirrors, CentOS i386/x86_64 Package Maintenance)
    http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xBEFA581B
  • Christoph Galuschka at Mar 14, 2013 at 5:18 pm

    hi,

    At the moment we run tests on a nightly basis, only in KVM VMs, and the
    results don't go very far. I'd like to propose that we use the entire QA
    bladecenter and run the full complement of deploys and tests, then post
    the results to a status board that keeps some history (Stashboard?).

    Going one step further, we should run the entire t_functional suite, as
    well as start incorporating some of the, now stale, t_external tests.

    A very good proposal - it gets my vote.


    Christoph
