Building URU under Continuous Integration

Open: Collect Technical Information Regarding Building And Testing Uru


rarified
Member
Posts: 1061
Joined: Tue Dec 16, 2008 10:48 pm
Location: Colorado, US

Building URU under Continuous Integration

Post by rarified »

This topic will get a little technical...

I've been working on and off for a couple of weeks to put together an environment for building URU code in a Continuous Integration process. While we don't have any URU code yet, I have been working to put together a demonstration setup with some other URU related open source software to show how it would work.

Goals: Continuous integration is a paradigm for revealing problems in code as soon as possible after a change has been made. Assuming the code is under some form of Version Control System (VCS) such as Subversion, a CI environment monitors the code base for changes. As soon as changes are committed to the repository, the CI system notices them and initiates a series of tasks to build and test the code. Any errors discovered during either the build or the test tasks are noted in a historical record (and, if desired, notifications can be sent via email, IM, etc.). If the build is successful, the results are then made available for a larger audience to test or use.
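
To make the idea concrete, here is a minimal sketch (in Python) of what a poll-build-record loop boils down to. The repository URL and build command are placeholders, not the actual Hudson machinery:

Code:

    # Minimal CI loop sketch: poll a Subversion repository, rebuild on change,
    # and record the outcome.  The URL and build command are placeholders.
    import subprocess, time

    REPO_URL = "http://example.org/svn/project/trunk"   # hypothetical repository
    BUILD_CMD = ["make", "all"]                          # hypothetical build step

    def head_revision(url):
        """Return the latest revision number reported by 'svn info'."""
        out = subprocess.check_output(["svn", "info", url]).decode()
        for line in out.splitlines():
            if line.startswith("Revision:"):
                return int(line.split(":", 1)[1])
        raise RuntimeError("no Revision line in svn info output")

    last_built = None
    while True:
        rev = head_revision(REPO_URL)
        if rev != last_built:
            subprocess.check_call(["svn", "checkout", "-r", str(rev), REPO_URL, "workspace"])
            result = subprocess.call(BUILD_CMD, cwd="workspace")   # exit status 0 means success
            print("revision %d -> %s" % (rev, "SUCCESS" if result == 0 else "FAILURE"))
            last_built = rev
        time.sleep(3600)   # poll about once an hour, like the hourly polling described later in this post

A real CI engine adds scheduling, history, and notification on top of this skeleton, but the core cycle is exactly this.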

How it is done: Usually a CI process is governed by an engine, a program designed to monitor one or more source repositories, obtain code from them, and initiate the tasks needed to build and test it. In my environment I've chosen Hudson as the CI engine driving the URU build and test processes. Hudson sits in the background periodically checking the status of a source repository (it is compatible with a large number of VCS systems). When it notices changes it checks them out into a local copy of the repository. It then schedules tasks to build that code on the appropriate types of machines and operating systems (using a slave process on a machine running that environment to do the actual work). After the slave(s) complete, Hudson collates the results and prepares a web page documenting the tasks performed and where the results are.
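
For the curious, Hudson also exposes a remote JSON API, so you can peek at what the engine is doing without the web UI. A rough Python sketch, assuming the stock /api/json remote-access endpoints are enabled (the host name is a placeholder):

Code:

    # Query a Hudson master for the status of its jobs via the remote JSON API.
    # The host name is a placeholder; this assumes the standard /api/json support.
    import json
    import urllib.request

    HUDSON_URL = "http://ci.example.org:8080"   # hypothetical Hudson master

    def list_jobs():
        with urllib.request.urlopen(HUDSON_URL + "/api/json") as resp:
            data = json.loads(resp.read().decode())
        return data.get("jobs", [])

    def last_build_result(job_name):
        url = "%s/job/%s/lastBuild/api/json" % (HUDSON_URL, job_name)
        with urllib.request.urlopen(url) as resp:
            build = json.loads(resp.read().decode())
        return build.get("result")   # e.g. "SUCCESS" or "FAILURE", None while still running

    for job in list_jobs():
        print(job["name"], last_build_result(job["name"]))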

The environment I've set up makes liberal use of virtual machines so it can all run on one physical server. I'll put together a drawing later giving a pretty picture of the current setup, but we start with a server-class machine running a flavor of Unix called Solaris. Within this machine an instance of Hudson is running. It has a set of configured projects that it is instructed to monitor, build, and test. Each project has one or more associated repositories where the code lives, and a local copy of the code is kept on this server.

In addition to Hudson, this machine hosts two virtual machines running under VirtualBox. Each of these "virtual" machines has its own copy of an operating system, along with a set of build and test tools. (In my environment these virtual machines run Windows and Ubuntu Linux.) Each virtual machine also runs a Hudson slave process, which listens for instructions from the master Hudson in the enclosing Unix environment.
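
Each slave is just the Hudson agent program running inside the VM and pointed back at the master. A sketch of the sort of small launcher script that could be dropped on each VM, assuming the standard JNLP slave-agent mechanism (the master URL and node name are placeholders):

Code:

    # Launch a Hudson slave agent inside a build VM and keep it running.
    # Master URL and node name are placeholders for whatever the master is configured with.
    import subprocess, time

    MASTER = "http://ci.example.org:8080"   # hypothetical Hudson master
    NODE   = "windows-builder"              # node name as configured on the master

    while True:
        # slave.jar is downloadable from the master; -jnlpUrl is the standard JNLP hookup.
        rc = subprocess.call([
            "java", "-jar", "slave.jar",
            "-jnlpUrl", "%s/computer/%s/slave-agent.jnlp" % (MASTER, NODE),
        ])
        print("slave agent exited with status %d, restarting in 30s" % rc)
        time.sleep(30)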

In the demonstration setup, I've used the new libPlasma library that Zrax has made available over at GoW. libPlasma (and its associated PlasmaShop tool) are C++ programs that Zrax has stored in a Subversion repository. I've configured a project which knows where the libPlasma repository is, and what steps are needed to build a copy. At this point there is no test code for checking libPlasma, so there are no test tasks involved in a build.
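
The "steps needed to build a copy" are really just an ordinary build script that Hudson runs in the checked-out workspace. Something along these lines, assuming a CMake-style out-of-tree build, which may not match libPlasma's actual build files:

Code:

    # Sketch of the per-project build steps Hudson runs in a freshly checked-out workspace.
    # Assumes a CMake-based project; adjust to whatever libPlasma actually requires.
    import os, subprocess, sys

    workspace = os.path.abspath("workspace")   # populated by Hudson's checkout step
    build_dir = os.path.join(workspace, "build")
    os.makedirs(build_dir, exist_ok=True)

    steps = [
        ["cmake", ".."],   # configure, out-of-tree
        ["make", "-j2"],   # compile
    ]
    for cmd in steps:
        if subprocess.call(cmd, cwd=build_dir) != 0:
            sys.exit("build step failed: %s" % " ".join(cmd))
    print("build succeeded; artifacts are under", build_dir)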

The Hudson engine polls Zrax's repository about once an hour. Any time it discovers a change to the master repository, it pulls the changes into the local Hudson repository. It then creates two separate build areas with the code from the local repository and tells the slave Hudson processes on the two virtual machines to start building libPlasma. After the build finishes, the state (success or failure) of the build is recorded in a database and the results are available for viewing on a web page. If so instructed, the binary programs produced by the build can be made accessible through the web interface.
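
The "recorded in a database" part is nothing exotic; conceptually it is just a table of (job, revision, platform, result, timestamp). A toy sketch with SQLite, purely to illustrate what gets kept (Hudson uses its own internal storage, so this is not its actual schema):

Code:

    # Toy build-history store: one row per build, so failures can be traced to revisions.
    # This is only an illustration; Hudson keeps its own records internally.
    import sqlite3, time

    db = sqlite3.connect("build_history.db")
    db.execute("""CREATE TABLE IF NOT EXISTS builds (
                      job TEXT, revision INTEGER, platform TEXT,
                      result TEXT, finished_at REAL)""")

    def record_build(job, revision, platform, result):
        db.execute("INSERT INTO builds VALUES (?, ?, ?, ?, ?)",
                   (job, revision, platform, result, time.time()))
        db.commit()

    record_build("libPlasma", 142, "linux",   "SUCCESS")   # example rows
    record_build("libPlasma", 142, "windows", "FAILURE")

    for row in db.execute("SELECT * FROM builds ORDER BY finished_at DESC LIMIT 5"):
        print(row)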

Current status: The fundamental infrastructure is up and running. You can view the Hudson status webpage here (either browse anonymously or log in with user guest, password guest). Current builds of libPlasma are failing because I am still discovering the tool settings and prerequisites needed to build libPlasma. The same will have to be done once we get URU code. But you should be able to see progress as I fix the remaining issues, so that every time a change is made to the libPlasma code, a new version will automatically be built.

I'll be happy to entertain requests to build other URU-related code in this foundry environment I've put together. Just send me a PM.

I'll post more as I finish getting the demo running reliably. Then we'll just have to wait for word from Cyan on what is needed to build the URU code.
One of the OpenUru toolsmiths... a bookbinder.
DarK
Member
Posts: 49
Joined: Fri Dec 26, 2008 2:04 pm

Re: Building URU under Continuous Integration

Post by DarK »

I'm guessing the system can export builds to external WAN machines as well, allowing you to build a network of compilers that distribute either whole projects or the build load of a single project?

Do you think it will be possible to make sure all URU software in use comes from this platform?
rarified
Member
Posts: 1061
Joined: Tue Dec 16, 2008 10:48 pm
Location: Colorado, US

Re: Building URU under Continuous Integration

Post by rarified »

DarK wrote: I'm guessing the system can export builds to external WAN machines as well, allowing you to build a network of compilers that distribute either whole projects or the build load of a single project?
Yes, the CI engine could make use of non-local slave systems to perform tasks. But if I were to architect such a configuration, I would need to see a net advantage in doing so.

By distributing the tasks, you can make use of additional compute resources to speed up a compute-bound task. But you also need to get any data needed to perform that task to all the remote nodes (such as copies of the source code or tools) and collect the built artifacts after the task is completed. Doing that over a WAN-class interconnect may cost more than keeping the work local.
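
Just to illustrate where the WAN cost comes in, a remote build node forces three data movements per task. A sketch, with a hypothetical remote host, placeholder paths, and a placeholder build command, assuming ssh/rsync access to the remote node:

Code:

    # The three data movements a remote (WAN) build node forces on you:
    # push sources out, run the build there, pull the artifacts back.
    # Host, paths, and build command are placeholders.
    import subprocess

    REMOTE = "builder@remote.example.org"
    SRC, REMOTE_WS, ARTIFACTS = "workspace/", "/tmp/ci-workspace/", "build/output/"

    subprocess.check_call(["rsync", "-az", SRC, "%s:%s" % (REMOTE, REMOTE_WS)])         # 1. ship sources
    subprocess.check_call(["ssh", REMOTE, "cd %s && make all" % REMOTE_WS])             # 2. build remotely
    subprocess.check_call(["rsync", "-az", "%s:%s%s" % (REMOTE, REMOTE_WS, ARTIFACTS),  # 3. fetch artifacts
                           "collected-artifacts/"])

Every one of those transfers happens for every build on every remote node, which is why the bandwidth of the interconnect matters so much.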

You would also have to have administrative trust in place among all the participants. The slave nodes trust the master node to give them tasks to perform, and trust that those tasks will not do something detrimental. So a remote node is giving the master node permission to run an arbitrary program on it. Similar trust is required regarding data access and content between nodes. This trust relationship is harder to manage in a larger cloud than in a local environment.

Right now the template I've set up uses shared disk storage that is accessible by both the master node and the slave nodes. This was done for efficiency: tasks that are common to building on either Linux or Windows are performed by the master node only once, and both the Windows and Linux nodes can then make use of the common build area without repeating the same work. Shared storage would be difficult to do efficiently across a WAN cloud.
DarK wrote: Do you think it will be possible to make sure all URU software in use comes from this platform?
Nope. :mrgreen: But that is because of the diversity of our community. We have many people and groups with different goals and projects in mind. Each of those will produce their own "version" of software.

That doesn't preclude, however, people finding it attractive to obtain their copies of URU programs from a repository that is reliably and consistently built. And that is as it should be; the decision should be made on how well the source fulfills the need.

That being said, anyone is welcome either to use my implementation of this process or to replicate this setup in their own environment to build a copy of URU. And if we get everyone doing the build the same way, we've eliminated a potential source of unintended differences in the behavior of the generated programs. And that's a Good Thing(tm).
One of the OpenUru toolsmiths... a bookbinder.
JWPlatt
Member
Posts: 1137
Joined: Sun Dec 07, 2008 7:32 pm
Location: Everywhere, all at once

Re: Building URU under Continuous Integration

Post by JWPlatt »

There is a Mantis plugin which "integrates Mantis to Hudson. It decorates Hudson 'Changes' HTML to create links to Mantis issues, and update issues with private / public notes."

Plugin
http://hudson.gotdns.com/wiki/display/H ... tis+Plugin

Hudson
http://hudson.gotdns.com/wiki/display/H ... eet+Hudson

What do you think?
Perfect speed is being there.
rarified
Member
Posts: 1061
Joined: Tue Dec 16, 2008 10:48 pm
Location: Colorado, US

Re: Building URU under Continuous Integration

Post by rarified »

It's all part of the master plan (Insert evil laugh :D)
One of the OpenUru toolsmiths... a bookbinder.
rarified
Member
Posts: 1061
Joined: Tue Dec 16, 2008 10:48 pm
Location: Colorado, US

Re: Building URU under Continuous Integration

Post by rarified »

I believe that the prototype is now working for both Windows and Linux builds of libPlasma.

We should see new builds triggered the next time Zrax makes a change to the libPlasma sources, and the results of those builds will be available here.

Also, as promised, here's a broad overview picture of the build system as I implemented it. Someone else who wishes to do something similar needn't make it this complicated if they're only targeting one architecture and have a dedicated machine to do the builds on.

[Image: broad overview diagram of the build system]

The next steps I'll be taking are tuning Doxygen to generate documentation for the C++ and Python code in the repositories, and perhaps tuning the error and warning summary plugins. Then perhaps testing the tie-in to Mantis for automatically filing bugs when the build is broken.
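
For the Doxygen step, the plan is basically to have the build write out a small configuration and run doxygen over the checkout. A sketch of that, with a placeholder project name and paths; a real Doxyfile will carry many more settings:

Code:

    # Generate API documentation for a checked-out workspace with Doxygen.
    # The configuration here is deliberately minimal; a real Doxyfile has far more settings.
    import subprocess

    doxyfile = """
    PROJECT_NAME     = libPlasma
    INPUT            = workspace
    RECURSIVE        = YES
    GENERATE_HTML    = YES
    GENERATE_LATEX   = NO
    OUTPUT_DIRECTORY = docs
    """

    with open("Doxyfile.ci", "w") as f:
        f.write(doxyfile)

    subprocess.check_call(["doxygen", "Doxyfile.ci"])
    print("HTML documentation written under docs/html")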
One of the OpenUru toolsmiths... a bookbinder.
T_S_Kimball
Member
Posts: 27
Joined: Sun Dec 21, 2008 2:05 am
Location: www.mysterium.net

Re: Building URU under Continuous Integration

Post by T_S_Kimball »

I find it interesting that the Windows build takes over an hour, when the Linux build takes 5 minutes. Is this related to the Python thread you've opened?

As a long-time Sun admin (who has done some rather wild things with zones), I had an idea of what you were planning, but the pic helps a lot. A WAN sync of the underlying disk should be possible through ZFS snapshots, but it's not exactly trivial, and the destination host would also need to be Solaris/x64.
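
(For reference, the mechanism boils down to snapshotting the filesystem and streaming the snapshot, or an incremental delta, to the remote pool over ssh. A rough sketch with hypothetical pool, snapshot, and host names:)

Code:

    # Sketch of a ZFS snapshot sync over a WAN link: snapshot locally, then stream
    # the incremental difference to the remote pool over ssh.  Names are hypothetical.
    import subprocess

    FS, PREV, CUR = "tank/ci-workspace", "ci-2009-01", "ci-2009-02"
    REMOTE = "root@offsite.example.org"

    subprocess.check_call(["zfs", "snapshot", "%s@%s" % (FS, CUR)])

    send = subprocess.Popen(["zfs", "send", "-i", PREV, "%s@%s" % (FS, CUR)],
                            stdout=subprocess.PIPE)
    recv = subprocess.Popen(["ssh", REMOTE, "zfs", "receive", FS],
                            stdin=send.stdout)
    send.stdout.close()   # let the receiver see EOF when the send finishes
    recv.wait()
    send.wait()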

I assume adding an OpenSolaris Builder (to create final packages in Sun's new format) would not be too hard, other than trying to get it to build in Solaris. ;)

--TSK
Timothy S. Kimball - www.sungak.net
SL - Alan Kiesler (retired) || Eve Online - Alain Kinsella
rarified
Member
Posts: 1061
Joined: Tue Dec 16, 2008 10:48 pm
Location: Colorado, US

Re: Building URU under Continuous Integration

Post by rarified »

T_S_Kimball wrote: I find it interesting that the Windows build takes over an hour, when the Linux build takes 5 minutes. Is this related to the Python thread you've opened?
No, it's due to a bug in VirtualBox. The Windows workspace is on what VB calls a "shared folder", which is essentially a networkless remote filesystem. Unfortunately VB opens (on the Solaris host) several files for every file Windows opens, in order to translate UTF-16 file pathnames to UTF-8. Right now the Windows build is I/O bound. If I put the workspace on the local virtual disk (e.g. the C: drive), its build time is comparable.
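
The effect is easy to measure from inside the VM: just time how long it takes to create and delete a pile of small files on the shared folder versus the local virtual disk. A quick sketch (the drive letters and paths are placeholders):

Code:

    # Quick-and-dirty comparison of file-open overhead on two directories,
    # e.g. a VirtualBox shared folder versus the VM's local virtual disk.
    # Paths are placeholders.
    import os, time

    def time_opens(directory, count=2000):
        start = time.time()
        for i in range(count):
            path = os.path.join(directory, "probe_%d.tmp" % i)
            with open(path, "w") as f:   # create, write a little, close
                f.write("x")
            os.remove(path)
        return time.time() - start

    for d in [r"E:\shared-workspace", r"C:\local-workspace"]:
        print(d, "%.1f seconds" % time_opens(d))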

At this time I'm content to wait for the VB team to fix the problem. Or I may start building my own fixed copies of VB :D
T_S_Kimball wrote: I assume adding an OpenSolaris Builder (to create final packages in Sun's new format) would not be too hard, other than trying to get it to build in Solaris. ;)
It would take about as much time as it takes to do the OS install to get the framework going. As you say, after that it's up to how portable the code turns out to be.
One of the OpenUru toolsmiths... a bookbinder.
rarified
Member
Posts: 1061
Joined: Tue Dec 16, 2008 10:48 pm
Location: Colorado, US

Re: Building URU under Continuous Integration

Post by rarified »

T_S_Kimball wrote: A WAN sync of the underlying disk should be possible through ZFS snapshots, but it's not exactly trivial, and the destination host would also need to be Solaris/x64.
Hmmmm, I'm not sure where you're headed with this...

Were you thinking of how to distribute the results of a build? My expectation is that I'll use one of the Hudson plugins to "push" the built artifacts to a distribution host that has higher outbound bandwidth than my DSL (assuming it would be acceptable to use the results of this build system as what gets distributed for download as clients, or to install/update the servers).
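
Mechanically that "push" is just copying the finished artifacts somewhere with more bandwidth once a build succeeds. A sketch of the idea (the host and paths are placeholders; the actual plugin would handle this from inside Hudson):

Code:

    # Push the artifacts of a successful build to a better-connected distribution host.
    # Host name and paths are placeholders; a Hudson plugin would normally do this step.
    import subprocess

    ARTIFACT_DIR = "workspace/build/output/"
    DIST_HOST    = "uploads@mirror.example.org"
    DIST_PATH    = "/srv/downloads/libPlasma/latest/"

    subprocess.check_call(["rsync", "-az", "--delete",
                           ARTIFACT_DIR, "%s:%s" % (DIST_HOST, DIST_PATH)])
    print("artifacts published to", DIST_HOST)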
One of the OpenUru toolsmiths... a bookbinder.