# What is Bilder? 

 {{>toc}} 

 Bilder is a cross-platform (Linux, OS X, Windows) meta-build, or package management, system applicable to LCFs (leadership computing facilities) such as the IBM Blue Gene and the Cray series of computers. It automatically downloads packages, then configures, builds, and installs them. It handles updating and building of a collection of packages and projects that have dependency relationships. When one package is updated, Bilder ensures that its dependents are updated as well. It can install in common areas, so that multiple packages can make use of the same dependency builds. 

 As of January 16, 2012, Bilder handles multiple builds of over 150 packages, with the multiple builds being, e.g., serial, parallel, shared, or static, as needed.    The platforms include Linux, OS X, AIX, and the specialized Linuces found on the IBM Blue Gene P and the Cray XT4.    It handles the compiler sets of gcc, XL, PathScale and PGI. 

 Bilder is not a replacement for build systems. Instead it works with the build system that comes with each package. It supports packages with build systems of autotools, CMake, qmake, and Distutils, as well as the one-off build systems of, e.g., lapack, ATLAS, and PETSc. In essence, Bilder acts as a repository of build knowledge. 

 ## Bilder Characteristics 

 * Build workflow automation, handling interpackage dependencies, with builds triggered when a dependency has been built. 
 * Uses soft inter-package dependencies: Suppose component A depends on B, and B is updated but does not build (it fails or is excluded). Bilder attempts to build A anyway if any other dependency is rebuilt or if A is updated, as the newer A may be consistent with an existing installation of B, or A may be able to build without B. 
 * Integration with version control systems. 
 * Integration with testing. 
 * Support for multiple OSs: Linux, OS X, Windows 
 * Support for multiple compiler sets (gcc, XL, PGI, PathScale, Visual Studio) 
   * LCFs have particular preferred compilers, e.g., for which some libraries have been built 
   * Need to compare performance of code generated by different compilers 
   * Have to use pre-built libraries (HDF5, LAPACK) when possible for performance 
 * Ability to use different underlying package configuration/build systems. 
 * Support for different kinds of builds (e.g., parallel, serial, static, shared) for any package. 
 * Collection of build provenance information, including logging of all steps and notification using emails and dashboards. 
 * Allows disabling the builds of particular packages (e.g., so that a system version will be used). 
 * Parallel (multi-threaded or multi-process) builds of independent builds or packages. 
 * Out of place build and installation: with defaults and also user-specified locations. 
 * Defaults for all parameters on all supported platforms that can be overridden by users. 
 * Integration with the Jenkins continuous integration tool. 
 * Searching for packages within the installation area. 
 * Isolation of general logic from specific logic and data 
   * General logic in top-level Bilder files 
   * Package specific logic and data in package files (the files in the package subdirectory) 
   * Machine specific logic and data in machine files (the files in the machines subdirectory) 
 * Ability to force builds of legacy versions of packages 

 ## What does Bilder not handle? 

 * Installing compilers 
 * Probably much more 

 # Preparing your machine for Bilder 

 * [[Preparing a Windows machine for Bilder]] 
 * [[Preparing a Linux machine for Bilder]] 
 * [[Preparing a Mac machine for Bilder]] 

 Then check out a bilder repo and build. Below are some examples. 

 # EXAMPLE1: Python Packages 

 Build ipython, scipy, and tables with one command! This will build these packages and all of their dependencies: ipython, scipy, tables, tornado, pyzmq, pyqt, matplotlib, hdf5, numexpr, setuptools, zeromq, Cython, qt, sip, numpy, Python, atlas, clapack_cmake, chrpath, sqlite, bzip2, lapack, and cmake. 

 ~~~~~~ 
 svn checkout http://ice.txcorp.com/svnrepos/code/bilder/pypkgs/trunk pypkgs 
 cd pypkgs 
 ./mkpypkgs.sh 
 ~~~~~~ 

 # EXAMPLE2: VisIt Visual Analysis Package # 


 Build the [VisIt](https://wci.llnl.gov/codes/visit/) visualization tool with one command! This will build VisIt and all its dependencies: visit, Imaging, visit_vtk, qt, mesa, hdf5, openmpi, zlib, cmake, and bzip2. 

 ~~~~~~ 
 svn checkout http://ice.txcorp.com/svnrepos/code/bilder/visitall/trunk visitall 
 cd visitall 
 ./mkvisitall.sh 
 ~~~~~~ 

 # Getting Bilder 

 Bilder is a set of shell scripts to configure software. All the configure scripts are available from a subversion repository. To access Bilder, enter: 

 ~~~~~~ 
 svn co http://ice.txcorp.com/svnrepos/code/bilder/trunk bilder 
 ~~~~~~ 

 # Configuring Bilder 

 ## Required configuration information  

 Before running Bilder, you need to tell it where its configuration information is. This is a directory, and the environment variable BILDER_CONFDIR must be set to it (e.g., BILDER_CONFDIR=/etc/bilder). 
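
 For example, in a bash shell (the path here is only an illustration):

 ~~~~~~
 export BILDER_CONFDIR=$HOME/bilderconf
 ~~~~~~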

 Inside that directory, there must be at least two files. The first, _bilderrc_, defines a variable, _PACKAGE_REPOS_FILE_, that contains the name of the second: the file listing the repositories to be searched for tarballs for the packages to be built. E.g., 

 ~~~~~~ 
 PACKAGE_REPOS_FILE=${PACKAGE_REPOS_FILE:-"$BILDER_CONFDIR/numpkgssvn.txt"} 
 ~~~~~~ 

 This follows the standard Bilder convention that no variable that already has a value is overwritten. This allows the person executing the build to override any variable value on the command line, e.g., using env. 
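
 For example, one could point a single run at a different package repos file (the file name here is hypothetical):

 ~~~~~~
 env PACKAGE_REPOS_FILE=$HOME/mypkgssvn.txt ./mkvisitall.sh
 ~~~~~~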

 The package repos file then contains the repos to be searched for packages, in the format: 

 ~~~~~~ 
     $ cat numpkgssvn.txt  
     #### 
     # 
     # File:      numpkgssvn.sh 
     # 
     # Purpose: List the package repos in the format, 
     #            subdir,method=URL 
     #            Where subdir is the desired location for the repo, 
     #            method = svn to get by svn, empty to get with wget 
     #            URL is the resource locator 
     # 
     # Version: $Id: numpkgssvn.txt 54 2012-04-08 13:52:09Z cary $ 
     # 
     #### 
     PACKAGE_REPO: numpkgs,svn=http://ice.txcorp.com/svnrepos/code/numpkgs/trunk 
 ~~~~~~ 

 Each line starting with _PACKAGE_REPO:_ defines the subdir (in this case numpkgs) into which the packages are put, the method (in this case svn) for getting the packages, and after the equals sign, the URL for the directory containing all packages. 

 For the method of svn (the part between the comma and the equals sign), the svn repo will be checked out as empty, 

 ~~~~~~ 
 svn co --depth=empty http://ice.txcorp.com/svnrepos/code/numpkgs/trunk numpkgs 
 ~~~~~~ 

 and packages will be obtained by 

 ~~~~~~ 
 svn up pkgname 
 ~~~~~~ 

 in the numpkgs subdirectory. 

 ## Optional logic in bilderrc  

 It can happen that "hostname -f" does not give the fully qualified hostname for your machine.    In this case, you can define __FQHOSTNAME__ to contain that hostname. 

 You can also define the following three methods (a sketch follows this list): 

 * _bilderGetAuxData_ defines how to get any auxiliary data needed by a package 
 * _bilderFinalAction_ defines a final action (like posting to a dashboard) to be undertaken at the end of a build run 
 * _signInstaller_ to sign any installers that you create during your build 
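
 As a sketch, a _bilderrc_ providing these hooks might look like the following. The method bodies and argument conventions here are placeholders, not Bilder's actual internals:

 ~~~~~~
 # Hypothetical $BILDER_CONFDIR/bilderrc
 PACKAGE_REPOS_FILE=${PACKAGE_REPOS_FILE:-"$BILDER_CONFDIR/numpkgssvn.txt"}
 FQHOSTNAME=mymachine.example.com  # only needed if "hostname -f" is not fully qualified

 bilderGetAuxData() {
   # Get any auxiliary data needed by a package (placeholder)
   :
 }

 bilderFinalAction() {
   # Final action at the end of a build run, e.g., posting to a dashboard (placeholder)
   :
 }

 signInstaller() {
   # Sign any installers created during the build (placeholder)
   :
 }
 ~~~~~~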

 ## Optional additional logic  

 You can provide site-specific logic, such as default installation directories, in files named after the domain name. Examples are seen in bilder/domains: 

 ~~~~~~ 
 domains$ ls 
 alcf.anl.gov 	 hpc.mil 		 nersc.gov 	 tacc.utexas.edu 
 ~~~~~~ 

 # Running Bilder 

 ## Running Bilder for the Novice User  

 First you will need to check out a ''meta-project'' svn repo that includes the source that you want to build along with the bilder scripts repo. 

 For example, Tech-X maintains the _visitall_ repo, which can be obtained by: 

 ~~~~~~ 
 svn co http://ice.txcorp.com/svnrepos/code/visitall/trunk visitall 
 ~~~~~~ 

 In the bilder'ized project, if there is a script, usually named "mk<project>all-default.sh", where <project> is the project name, possibly abbreviated (e.g., for visitall the script is mkvisitall-default.sh), then this is the easiest way to run Bilder. The options of a top-level "default" Bilder script can be seen by running the script with the -h flag. 

 ~~~~~~ 
 visitall$ ./mkvisitall-default.sh -h 

 Usage: ./mkvisitall-default.sh [WRAPPER OPTIONS] -- [BILDER OPTIONS] 

 This wrapper script calls the main Bilder script with arguments 
 to set default locations for build and installation directories 
 on a domainname basis.    It also handles launching the build in a 
 queue.    This script is also meant to ease the use of non-gfortran 
 compilers.    It uses these values to form arguments for the main 
 bilder script, mkvisitall.sh, which it additionally sends the arguments 
 after the double dash, --. 

 WRAPPER OPTIONS 
   -b <dir> .......... Set builds directory. If not used, bilder chooses a default. 
   -c ................ All-user installations: for non-LCFS, goes into /contrib, 
                         /volatile or /internal, for LCFSs, goes into group areas 
   -C ................ Install in separate tarball and repo install dirs 
 ... 
   -- ................ End processing of args for mkall-default.sh, all remaining 
                         args are passed to the script. 

 If you want to see the command line arguments for mkvisitall.sh, then you 
 should run the following command: 

         ./mkvisitall.sh -h 
 ~~~~~~ 


 For this script to work, you must have defined the location of your Bilder configuration directory in the environment variable, BILDER_CONFDIR. This is discussed more in the Configuring Bilder section above. 


 ## Running Bilder for the Advanced User  

 In the bilder'ized project, there will be a script named "mk<project>all.sh", where <project> is the project name, possibly abbreviated (e.g., for visitall the script is mkvisitall.sh). The options of a top-level Bilder script can be seen by running the script with the -h flag, e.g., 

 ~~~~~~ 
 visitall$ ./mkvisitall.sh -h 
 Usage: ./mkvisitall.sh [options] 
 BILDER OPTIONS 
   -a ................ Update non-subversion packages 
   -A <addl_sp> ...... Add this to the supra search path. 
   -b <build_dir> .... Build in <build_dir>. 
   ... 
 ~~~~~~ 


 ## Notes on Installation Directories and Path Modifications  

 Bilder builds all software, when possible, in ''the build directory'' or \<builddir\>, which is specified by the _-b_ flag. It also unpacks tarballs into this directory before building them. 

 Bilder defines two installation directories, which may be the same. 

 Tarballs are installed in ''the tarball directory'' or \<tarballdir\>, specified by the _-k_ flag. This is the _/contrib_ directory at Tech-X. 

 Code from repositories is installed in ''the repo directory'' or \<repodir\>, the directory specified by the _-i_ flag.    At Tech-X, this is typically _/volatile_ or _/internal_. 

 If only one of the above directories is specified, then the other directory defaults to the specified directory.    If neither directory is specified, then both directories default to _$HOME/software_. 
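
 For example, an invocation that sets all three directories explicitly might look like the following (the paths are illustrative only):

 ~~~~~~
 ./mkvisitall.sh -b $HOME/builds -k /contrib -i /volatile
 ~~~~~~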

 During the build process, _/contrib/autotools/bin:/contrib/valgrind/bin:/contrib/mpi/bin:/contrib/hdf5/bin:/contrib/bin:_ is added to the front of the path so that the installed packages are used to build the packages. 


 ## Debugging Bilder Errors  

 Bilder is a set of bash scripts. The [trunk version of the scripts](https://ice.txcorp.com/svnrepos/code/bilder/trunk/) will tell you exactly what Bilder is doing, if you know bash programming. 


 # Bilder's Build Types 

 The standard builds of Bilder are 

 * ser: static, serial build 
 * par: static, parallel (MPI) build 
 * sersh: shared, serial build 
 * parsh: shared, parallel (MPI) build 
 * pycsh: shared build compatible with the way Python was built 

 The Bilder standard is to install each build in its own directory. While libtool allows shared and static builds to be done within the same build, cmake generally does not, as discussed at http://www.cmake.org/Wiki/CMake_FAQ#Library_questions. Further, to do this on Windows, library names would have to differ, as otherwise the static library and the shared interface library files would overwrite each other. So in the end, it is simply easier to install shared and static libraries in their own directories. 
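
 As a hypothetical illustration of the resulting layout (actual directory names depend on the package versions and builds chosen):

 ~~~~~~
 $ ls /contrib
 cmake-2.8.7-ser    hdf5-1.8.7-par    hdf5-1.8.7-ser    hdf5-1.8.7-sersh
 ~~~~~~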

 In all cases, the builds are to be "as complete as possible".    E.g., for HDF5 on Darwin, shared libraries are not supported with fortran.    So in this case, sersh has to disable the fortran libraries.    However, completeness may depend on other criteria.    So, e.g., for trilinos, complete builds are provided, but so are builds that are as complete as possible and compatible with licenses that allow free reuse in commercial products. 

 ## Static builds  

 The static builds provide the most portable builds, as they eliminate or minimize the need to be compatible with any system shared libraries. They are also the most widely supported. For Windows, these mean libraries that import the static runtime library (libcmt). Generally this means that, for Windows, one should not use a static dependency for a shared build of a project, as doing so typically leads to the dreaded runtime conflict, e.g., http://stackoverflow.com/questions/2360084/runtime-library-mis-matches-and-vc-oh-the-misery. 

 ## Shared builds  

 Shared builds allow one to reuse libraries among executables, but then one has the difficulty of finding those libraries at runtime. This can be particularly difficult when moving an installation from one machine to another or when installing a package. To minimize these headaches, Bilder, as much as possible, uses rpath on Linux. However, one still needs to figure out how to modify any executables or libraries post-build to make an installer. 

 ## pycsh builds  

 This is a special build that is just a shared build using the compiler that Python was compiled with.    This is generally gcc for Unices and Visual Studio for Windows.    One adds a pycsh build only when the serial compiler is not the compiler used to build Python. 

 # Bilder Hierarchy 

 It is possible to specialize Bilder per machine, per project, and per person, by sourcing files at each level of the hierarchy: 

 ## Bilder default settings  

 When no specialization files are used, Bilder uses the default settings for the project. 

 ## By Machine  

 A set of machine files under the bilder/machines directory specifies machine-specific variables and settings. For example, to build a project on the Windows platform with Cygwin using Visual Studio 9, there is a cygwin.vs9 machine file that sets up the environment as needed by Visual Studio 9. A machine file can be specified with the "-m" option. 

 ## By Project  

 Please see the Configuring Bilder section above on how to set up per-project configurations. There one can specify information needed for the project, such as where to obtain third-party dependency libraries, default installation directories, the various variables defining where builds should take place, where installations should go, etc. 

 ## By Person  

 ### Default settings using .bilddefrc  

 Every person building a project using Bilder can specify their own default settings by creating a .bilddefrc file in their home directory. This will be sourced in the mkXYZall-default.sh file to override any other default project settings.  

 ### Settings using .bilderrc  

 Every person building a project using Bilder can specify their own settings by creating a .bilderrc file in their home directory. This will be sourced in the mkXYZall.sh file to override any other project settings.  

 ## Per package per person  

 In cases where it is necessary to specify settings per package per person, an XYZ.conf file can be placed in the BILDER_CONFDIR/packages directory. If found, this file will be sourced in the mkXYZ.sh script to override all the other settings. If this file is modified, then Bilder will reconfigure and build the package. 
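
 As a sketch, such a file might add configure arguments for a single package, using the per-build _OTHER_ARGS_ convention described under Package files below (the file contents here are hypothetical):

 ~~~~~~
 $ cat $BILDER_CONFDIR/packages/hdf5.conf
 # Extra serial-build configure arguments for HDF5 (hypothetical)
 HDF5_SER_CONFIG_OTHER_ARGS="--disable-fortran"
 ~~~~~~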

 # Running Bilder Through The Defaults Scripts 

 The full set of options for Bilder is large, and this gives rise to the potential for mistakes. To reduce them, we have created defaultsfcns.sh and mkall-default.sh; the associated defaults scripts include the latter and execute runBilderCmd: 

 ~~~~~~ 
     $ cat mkfcall-default.sh  
     #!/bin/bash 
     # 
     # Determine (and possibly execute) the default Bilder command 
     # for Facetsall. 
     # 
     # $Id: mkfcall-default.sh 593 2012-03-09 15:26:46Z cary $ 
     # 
     ######################################################### 
     #  
     # Set the default variables 
     mydir=`dirname $0` 
     mydir=${mydir:-"."} 
     mydir=`(cd $mydir; pwd -P)` 
     # Where to find configuration info 
     BILDER_CONFDIR=$mydir/bilderconf 
     # Subdir under INSTALL_ROOTDIR where this package is installed 
     PROJECT_INSTSUBDIR=facets 
     source $mydir/bilder/mkall-default.sh 

     # Build the package 
     runBilderCmd 
     res=$? 
     exit $res 
 ~~~~~~ 

 The options, 

 ~~~~~~ 
     $ ./mkfcall-default.sh -h 
     source /Users/cary/projects/facetsall/bilder/runnr/runnrfcns.sh 
     WARNING: runnrGetHostVars unable to determine the domain name. 
     Usage: ./mkfcall-default.sh [options] 
     This script is meant to handle some of the vagaries that occur 
     at LCFs and clusters in large systems (which have complicated file 
     systems) such as those that have high performance scratch systems 
     and NFS mounted home systems.    This script is also meant to ease 
     the use of non-gfortran compilers. 
     OPTIONS 
       -c                common installations: for non-LCFS, goes into /contrib, 
                       /volatile or /internal, for LCFSs, goes into group areas 
       -C                Install in separate tarball and repo install dirs  
                       (internal/volatile) rather than software 
       -E "<options>"    quoted list of extra options to pass to the mk script 
       -f <file>         File that contains extra arguments to pass 
                       Default: .extra_args 
       -F <compiler>     Specify fortran compiler on non-LCF systems 
       -g                Label the gnu builds the same way other builds occur. 
       -H <host name>    use rules for this hostname (carver, surveyor, intrepid) 
       -h                print this message 
       -i                Software directory is labeled with "internal" if '$USER' 
                       is member of internal install list 
       -I                Install in $HOME instead of default location 
                       (projects directory at LCFs, BUILD_ROOTDIR on non-LCFs) 
       -j                Maximum allowed value of the arg of make -j 
       -k                On non-LCFs: Try to find a tarball directory (/contrib) 
                       On LCFs:       Install tarballs (instead of using facetspkgs) 
       -m                force this machine file 
       -n                invoke with a nohup and a redirect output 
       -p                just print the command 
       -q <timelimit>    run in queue if possible, with limit of timelimit time 
       -t                Pass the -t flag to the    mk script (turn on testing) 
       -v <file>         A file containing a list (without commas) of declared 
                       environment variables to be passed to mk*.sh script 
       -w <file>         Specify the name of a file which has a comma-delimited 
                       list of packages not to build (e.g., 
                       plasma_state,nubeam,uedge) Default: .nobuild 
       --                End processing of args for mkall-default.sh, all remaining 
                       args are passed to the script. 
 ~~~~~~ 


 mostly deal with which directory is to be used for installation, what the time limit for the build is, and what extra options are to be passed to the build, whether on the command line or in a file. 

 An example invocation looks like 

 ~~~~~~ 
 mkfcall-default.sh -cin -- -oXZ -E BUILD_ATLAS=true 
 ~~~~~~ 

 which will (c) install in areas common to all users, (i) use the internal rather than the volatile directory for repo installations, and (n) run in the background via nohup. What follows the double dash comprises more args for the base script: (o) build openmpi if on OS X or Linux, (X) build the newer, experimental packages, (Z) do not invoke the user-defined bilderFinalAction method, and (E) set this comma-delimited list of environment variables, in this case to build Atlas if on Linux or Windows. 

 # Using Jenkins with Bilder 

 ## Setting up Jenkins for use with Bilder ## 

 This set of pages is intended to describe how to set up the Jenkins continuous integration tools for launching Bilder jobs (which then handle the builds and testing). It is not intended to describe the most general way to set up Jenkins, but instead it describes a way that relies on having a Linux master node. 

 ## Starting up a Linux Jenkins master node  

 Install Jenkins using the installation mechanism for your platform.    E.g., see 
 https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+on+RedHat+distributions. 

 *IMPORTANT:* Before starting Jenkins for the first time: 
 * create the directory where Jenkins will do its builds (known as _JENKINS_HOME_, not to be confused with the Jenkins home directory in /etc/passwd, which is initially set to /var/lib/jenkins, which we will assume here) 
 * set the permissions of the Jenkins build directory (e.g., /home/bilder/jenkins) 
 * Add jenkins to any groups as needed (e.g., contrib, research, xxusers) 
 * modify `/etc/sysconfig/jenkins` as needed. Our settings are 


 ~~~~~~ 
 JENKINS_HOME="/home/bilder/jenkins" 
 JENKINS_PORT="8300" 
 JENKINS_AJP_PORT="8309" 
 JENKINS_ARGS="--argumentsRealm.passwd.jenkins=somepassword --argumentsRealm.roles.jenkins=admin" 
 ~~~~~~ 

 (_somepassword_ is not literal.) 

 Create an ssh key for jenkins: 

 ~~~~~~ 
 sudo -u jenkins ssh-keygen 
 ~~~~~~ 

 It cannot have a passphrase. 
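
 One way to generate it non-interactively with an empty passphrase (using standard ssh-keygen flags; the key path assumes the /var/lib/jenkins home directory mentioned above):

 ~~~~~~
 sudo -u jenkins ssh-keygen -t rsa -N "" -f /var/lib/jenkins/.ssh/id_rsa
 ~~~~~~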


 Start the jenkins service: 

 ~~~~~~ 
 sudo service jenkins start 
 ~~~~~~ 


 Set Jenkins to start on boot: 

 ~~~~~~ 
 sudo chkconfig --level 35 jenkins on 
 ~~~~~~ 

 ## Preparing a Unix Jenkins slave node  

 We will have one node prepared to act as a Jenkins slave for now.    For ease, we will create a Unix slave.    Later we will add more slaves. 

 * On the service node, create the user who will run Jenkins. 
 * As that user create the directory where Jenkins will work 
 * Add that user to any groups needed to give it appropriate permissions (e.g., contrib, research, xxusers) 
 * For public-key authentication 
   * Add the public key created above for jenkins to that user's ~/.ssh/authorized_keys 
   * On the master, check that you can do passwordless login by trying: "_sudo -u jenkins ssh jenkins@yourhost_" 
 * For password authentication 
   * Configure /etc/sshd_config to allow password authentication (PasswordAuthentication yes) and restart sshd 

 ## Configuring the Linux Jenkins master node  

 * Open a browser and go to _master.yourdomain:8300_ and log in as admin with the password that you set in the JENKINS_ARGS variable, above. 
 * Go to Manage Jenkins -> Manage Plugins -> Available and install the plugins, 
   * Jenkins cross-platform shell (XShell) 
   * Conditional Build-Step 
   * Matrix Tie Parent 
   * Jenkins build timeout (Build-timeout) 
 * Go to Manage Jenkins -> Manage Users and then use _Create User_ to create the users for your Jenkins installation. Make sure to create an administrative user (perhaps yourself). 
 * Go to Manage Jenkins -> Configure system and select/set 
   * Enable security 
   * Jenkins's own user database 
   * _If you wish_, allow users to sign up 
   * Project-based Matrix Authorization Strategy 
       * Add an administrator name with all privileges 
       * Give anonymous user Overall Read (only) 
   * Default user e-mail suffix: e.g., @yourdomain 
   * Sender: jenkins@yourdomain 
 * Go to Manage Jenkins -> Manage Nodes -> New Node 
   * Fill in name 
   * Dumb Slave 
   * You are taken to the configure form: 
       * \# of executors = 1 
       * Remote FS root: what you decided upon when creating the slave 
        * Usage: Leave this machine for tied jobs only 
        * Launch method: Launch slave agents on Unix machines via SSH 
       * Advanced: 
           * Host  
           * Username (jenkins) 

 ## Creating your first Bilder-Jenkins project  

 We will create the first project to build on the master node. Later we will add more nodes. 

 * Go to Jenkins -> New Job 
     * Build multi-configuration project 
 * Set name (here we will do visitall as our example) 
 * Enable project-based security 
     * For open source builds, give Anonymous Job Read and Job Workspace permission 
     * Add user/group as needed 
 * Source Code Management 
     * Subversion 
     * Put in your URL, e.g., https://ice.txcorp.com/svnrepos/code/visitall/trunk 
     * Put in your svn credentials as requested 
 * Build Triggers (examples) 
     * Build Periodically 
         * Enter cron parameters, e.g., 0 20 * * * 
     * Or have this build launched as a post-build step of another build 
 * Configuration Matrix 
     * Add axis -> slaves (is this available before we add nodes?) 
         * Add master and the above unix slave 
 * Build Environment 
     * Abort the build if stuck (if desired) 
         * Enter your timeout 
     * Tie parent build to a node 
         * Select master node 
 * Build 
     * Add build step -> Invoke XShell command 
         * Command line: bilder/bildtrol/unibild -d mkvisitall 
         * Executable is in workspace dir 
 * Post-build Actions 
     * Aggregate downstream test results 
         * Select both 
     * Archive the artifacts (select, see below for settings) 
     * E-mail Notification 
         * Set as desired 


 ## Creating a Windows Slave  

 * Get all tools in place on the slave machine by following the instructions at https://ice.txcorp.com/trac/bilder/wiki/BilderOnWindows 
 * Create the jenkins user account (if not already defined) as an Administrative account and log into the windows machine as the jenkins user 
 * Make sure the slave's Windows name and its domain name are consistent. 
 * Install Java (see http://www.java.com) and update the path to include `C:\Windows\SYSWOW64` if on 64 bit Windows and then `C:\Program Files (x86)\Java\jre6\bin` 
 * Create the build directory (e.g., C:\winsame\jenkins) 
 * Set the owner of that directory to the jenkins user via properties->security->advanced->owner. 
 * Install the Visual C++ redistributables from http://www.microsoft.com/download/en/details.aspx?id=5582 
 * Follow the networking, registry, and security related instructions at https://wiki.jenkins-ci.org/display/JENKINS/Windows+slaves+fail+to+start+via+DCOM 
 * (Cribbing from https://issues.jenkins-ci.org/browse/JENKINS-12820) 
 * Start a web browser on the windows slave and connect to the master jenkins web page. 
 * Manage Jenkins -> Manage Nodes -> New Node 
 * Create a new node (Slave node) 
   * Fill in name, choose it to be the same as the Windows name of the slave 
   * Dumb Slave 
   * In the configure form, set 
     * \# of executors: 1 
     * Remote FS root: the directory created above (e.g., C:\winsame\jenkins) 
     * Usage: Leave this machine for tied jobs only 
     * Launch method: Launch slave agents via java web start 
 * Launch the slave 
 * Press the newly appeared button: Launch by webstart   
 * A pop-up window will appear with the message "Connected" 
 * In that pop up window click File-> Install as Windows Service 
 * Find jenkins service in the control panel, ensure that the owner is the jenkins user 
 * Set startup to Automatic 
 * Return to browser, take slave node off line in jenkins 
 * Set launch method to: Windows slave as a Windows Service 
   * Advanced: 
     * Administrative username (jenkins) ''You may need to type it as computername\jenkins if you get an invalid service account error'' 
     * Password (set as selected in slave setup) 
     * Use Administrator account given above 
 * Relaunch slave node 

 ### Use the Slave on the Master  

 You should now be able to select this slave as a build node. 


 ## Launching Bilder through Jenkins  

 Jenkins runs Bilder through the scripts in the bildtrol subdir using the XShell command. The XShell command, when configured to launch _somescript_, actually invokes _somescript.bat_ on Windows and _somescript_ on unix.    The Bilder _.bat_ scripts simply translate the arguments and use them in a call to _somescript_, which is run through Cygwin. 

 ### Building and testing: jenkinsbild  

 The script, _jenkinsbild_, launches a build from Jenkins using the default scripts. For this example, we consider building the visualization package _VisIt_, for which the repo is _https://ice.txcorp.com/svnrepos/code/visitall/trunk_. This repo uses svn externals to bring in the VisIt source code. In this case, the simplest XShell command is 

 ~~~~~~ 
 bilder/jenkins/jenkinsbild mkvisitall 
 ~~~~~~ 

 which leads to execution of 

 ~~~~~~ 
 ./mkvisitall-default.sh -t -Ci -b builds-internal    -- -Z -w 7 
 ~~~~~~ 

 whose action is described in the rest of the Bilder documentation; in particular, testing is invoked (-t), packages and repos are installed in separate areas (-C), the ''internal'' directory is used for repo installation (-i), no post-build action is done (-Z), and if a build less than 7 days old is found, the build is not executed (-w 7). The arguments after *--* are passed directly to the underlying Bilder script, mkvisitall.sh. 

 The _jenkinsbild_ script has very few options: 


 ~~~~~~ 
 Usage: $0 [options] 
 GENERAL OPTIONS 
   -b ................ Use internal/volatile build directory naming 
   -m ................ Use mingw on cygwin 
   -n ................ Do not add tests 
   -p ................ Print the command only. 
   -s step ..........    Sets which build step: 1 = internal, 2 = volatile. 
   -2 ................ Sets build step to 2. 
 ~~~~~~ 

 At present, internal/volatile build directory naming is in fact always in effect. In this case, the first step (the default) builds in the subdir *builds-internal*, and the second step (selected with -2 or -s 2) builds in the subdir *builds-volatile*. Correspondingly, the repo installation directory is the *internal* directory on step 1 and the *volatile* directory on step 2. 

 Using mingw on cygwin (-m) is useful for codes that cannot build with Visual Studio. 

 Not adding the tests is useful in many instances where one is counting on only a few hosts to do testing. 

 Build step 2 (-s 2 or -2) builds in *builds-volatile* and installs in the volatile directory, but it also determines several options by looking at the email subject of any step-1 build. 

 This is geared towards a general philosophy of having two builds, the stable (or internal) build that is done more rarely, and a volatile build that is done every night. So what is done in step 2 depends on the step 1 result, which can be determined from the email subject file, left behind. There are four cases: 

 * Step 1 did nothing as there was a sufficiently recent build.    Then step 2 does a full build with tests. 
 * Step 1 was fully successful, both builds and tests.    Then step 2 is not executed. 
 * Step 1 builds succeeded, but some tests failed (and so some packages were not installed).    Then step 2 is executed without testing, as that was done in step 1, and this permits installation of the built, but untested packages. 
 * Step 1 builds failed (and so corresponding tests were not attempted). Then step 2 is not executed, as it will fail as well. 

 The error code returned by jenkinsbild is success (0) even if only the builds succeeded but not the tests. This way the dashboard indicates jenkinsbild build success only. A subsequent job, jenkinstest, determines whether tests passed by examining the email subjects left behind. 

 For either build step, one wants to archive the artifacts listed in bilder/jenkins/jenkinsclean.sh, in order to collect all results of builds and tests and any created installers. 

 ### Posting test results: jenkinstest  


 # Bilder Architecture 

 Bilder has a largely object-oriented structure, even though it is written in Bash. But like all (even OO) programs, it has a procedural aspect. Further, it is task oriented (as opposed to event driven), with a clear start and conclusion. We will break this architecture down into three aspects: the task flow, the primary objects, and the procedures. 

 ## Task flow  

 Bilder scripts, like mkvisitall.sh, begin by setting some identifying variables, BILDER_NAME, BILDER_PACKAGE, ORBITER_NAME, and then continue by sourcing bildall.sh, which brings in the Bilder infrastructure: initializations of variables and methods used for building, testing, and installing packages. 

 ### Global method definition  

 The file, bildall.sh, brings in all of the global methods by first sourcing runnr/runnrfcns.sh, which contains the minimal methods for executing builds in job queues and reporting the results. It then obtains all of the more Bilder-specific methods by sourcing bildfcns.sh. These include generic methods for determining the build system, preconfiguring, configuring, building, testing (including running tests and collecting results), and installing. These files are the heart of Bilder, as they do all the heavy lifting. 

 A trivial, but important method is _techo_, which prints output to both stdout and to a log file.    Another is _decho_, which does the same, but only if DEBUG=true, which is set by the option _-d_. 
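
 In outline, these behave like the following sketch (not Bilder's exact code; LOGFILE stands in for whatever log file Bilder has opened):

 ~~~~~~
 techo() {
   # Print the arguments to stdout and append them to the log file
   echo "$@" | tee -a "$LOGFILE"
 }

 decho() {
   # Print only when debugging was requested with -d (DEBUG=true)
   if $DEBUG; then techo "$@"; fi
 }
 ~~~~~~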

 ### Option parsing  

 Options are parsed through the sourcing of bildopts.sh, which is sourced by bildall.sh.    It then sets some basic command-line-argument derived variables, such as the installation directories, which it checks for writability. This file, bildopts.sh, has been written in such a way that Bilder-derived scripts (like mkvisitall.sh) can add their own arguments. 

 ### Initialization  

 Initialization is carried out by sourcing of two files, bildinit.sh and bildvars.sh (which are both sourced by bildall.sh). The purpose of bildinit.sh is to handle timing, to clear out indicating variables (like PIDLIST and configFailures), get the Bilder version, and define any path-like environment variables that might get changed in the course of the run. 

 The purpose of bildvars.sh is to determine useful variables for the build. Values come first from a possible machine file, then from OS-specific settings (AIX, CYGWIN, Darwin, or Linux; MinGW is a work in progress). Then any still-unset variables are set to default values. These variables contain the compilers for serial (front-end nodes), back-end nodes, parallel, and gcc (as some packages build only with gcc, and the names of the gcc compilers can vary from one system to another). As well, the flags for all of these compilers are set.  

 There are some packages that are so basic that Bilder defines variables for them. These include HDF5, the linear algebra libraries (lapack, blas, atlas), and Boost. These definitions allow the locations of these libraries to be defined on a per-machine basis. This is needed particularly for LCFs, which have special builds of HDF5, BLAS, and LAPACK, and for CYGWIN, which must have Boost to make up for deficiencies in the Visual Studio compilers. 

 Finally, bildvars.sh prints out all of the determined values. 

 ### Package building  

 A Bilder-derived script, like _mkvisitall.sh_, after sourcing _bildall.sh_, specifies targets. Bilder then uses the information in the package files to determine the dependencies, after which Bilder builds the packages in order from least to most dependent, with all builds for a package occurring in parallel. 

 One can write special package files for building multiple packages simultaneously.    That feature is rarely used so we will not discuss it further. 

 This ability to have multiple builds of possibly multiple, non-interdependent packages simultaneously is a key feature of Bilder. It leads to great savings in time, especially with packages that must be built in serial due to a lack of dependency determination. 

 ### Concluding  

 The last part of the task flow is to install the configuration files, to summarize the build, and to email and post log files, build files, and the summary. The configuration files, which are created by _createConfigFiles_ and installed by _installConfigFiles_ into the installation directory, contain the necessary additions to environment variables to pick up the installed software. 

 The method, _finish_, then does the remaining tasks. It creates the summary file and emails it to the contact specified by the option parsing.    It then posts all log and build files to Orbiter. 

 ## Package files  

 Package files define how a package is acquired, how it is configured for building on the particular platform for all builds, how all builds are done, and how they are all installed.    Here we introduce an important distinction: the **tarball packages** are those obtained in the tar.gz format; the **repo packages** are obtained from a Subversion source code repo. Generic tarball packages are found in the Tech-X maintained Subversion repo at https://ice.txcorp.com/svnrepos/code/numpkgs and are available by anonymous svn. The repo packages are typically svn externals to a Bilder project, e.g., for visitall 

 ~~~~~~ 
 visitall$ svn pg svn:externals . 
 bilder http://ice.txcorp.com/svnrepos/code/bilder/trunk 
 bilderconf http://ice.txcorp.com/svnrepos/code/bilder/bilderconf/trunk 
 visit http://portal.nersc.gov/svn/visit/trunk/src 
 visitwindows/distribution http://portal.nersc.gov/svn/visit/trunk/windowsbuild/distribution 
 visittest/data http://portal.nersc.gov/svn/visit/trunk/data 
 visittest/test http://portal.nersc.gov/svn/visit/trunk/test 
 ~~~~~~ 

 Though written in _Bash_, Bilder uses object concepts. Each package file under packages acts as an object, with instantiation, exposed (public) data members, private data members, and methods. As in OO, these **package-build** objects have the same data members and a common interface. 

 Instantiation is carried out by sourcing the package file. At this point, the data associated with that package is initialized as necessary. 

 ### Package-build data  

 The public data members for a package _PKG_ are 

 ~~~~~~ 
 PKG_BLDRVERSION # Either the version to install or the
                 # version from the code repository
 PKG_DEPS        # The dependencies of this package
 PKG_BUILDS      # The names of the builds for this package
 PKG_UMASK       # The "umask" that determines the permissions
                 # for installation of this package

 In the syntax of C++, the first underscore would be represented by '.', i.e., pkg.DEPS. Even dynamic binding can be implemented in _Bash_. I.e., if one has _pkgname_ holding the name of a package, one can extract, e.g., BLDRVERSION via 

 ~~~~~~ 
     vervar=`echo $pkgname | tr 'a-z./-' 'A-Z___'`_BLDRVERSION 
 ~~~~~~ 
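
 The value itself can then be read back with Bash indirect expansion, e.g. (a sketch; Bilder's own code may differ):

 ~~~~~~
     pkgname=hdf5
     vervar=`echo $pkgname | tr 'a-z./-' 'A-Z___'`_BLDRVERSION  # HDF5_BLDRVERSION
     echo ${!vervar}  # prints the value of HDF5_BLDRVERSION
 ~~~~~~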

 Admittedly, many of these constructs would more easily be accomplished in a language like Python that naturally supports object orientation. The trade-off is that then one does not have the nearly trivial expression of executable invocation or threading that one has in _Bash_. 

 In addition, there are the per-build, public variables _PKG_BUILD_OTHER_ARGS_ (e.g., _FACETS_PAR_OTHER_ARGS_ or _BABEL_STATIC_OTHER_ARGS_). These are added to the command line when configuring a package. In some cases, a package has more than one build system, like HDF5, in which case one has two sets of variables, e.g., _HDF5_PAR_CMAKE_OTHER_ARGS_ and _HDF5_PAR_CONFIG_OTHER_ARGS_. 

 ### Exposed package-build methods  

 All package files are supposed to provide three methods, e.g., _buildPkg_, _testPkg_, and _installPkg_, where "Pkg" is the name of the package being built.    E.g., FACETS has buildFacets, testFacets, installFacets. For untested packages, the second method can simply be empty. 
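
 Putting the data members and methods together, a package file has roughly the following shape. This is a schematic sketch only; the names and bodies are placeholders, not an actual Bilder package file:

 ~~~~~~
 # Hypothetical packages/mypkg.sh
 MYPKG_BLDRVERSION=${MYPKG_BLDRVERSION:-"1.0"}
 MYPKG_DEPS=${MYPKG_DEPS:-"hdf5,cmake"}
 MYPKG_BUILDS=${MYPKG_BUILDS:-"ser,par"}
 MYPKG_UMASK=002

 buildMypkg() {
   # Decide whether a build is needed; if so, acquire the tarball or
   # preconfigure the repo package, configure, and launch the builds (placeholder)
   :
 }

 testMypkg() {
   # Empty for untested packages
   :
 }

 installMypkg() {
   # Wait for tests (or builds), then install successful builds (placeholder)
   :
 }
 ~~~~~~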

 The method, _buildPkg_, is supposed to determine whether a package needs to be built.    If so, it should either acquire a tarball package or preconfigure (prepare the build system for) a repo package, then configure the package, and finally launch the builds of the package.    Preconfiguring in the example of an _autotools_ package involves invoking the _autoreconf_ and other executables for creating the various configuration scripts.    In many other cases there is no associated action.    If the Bilder infrastructure is used, then all builds are executed in a separate thread, and at the end of the _buildPkg_ method all the process IDs for these builds have been stored in both the variable PIDLIST, and the particular process ID for build "ser" of package "pkg" is stored in the variable, PKG_SER_PID. 

 The method, _testPkg_, is supposed to determine whether a package is being tested.    If not, it simply returns.    But if the package is being tested, then _testPkg_ executes _wait_ for each build.    Upon successful completion of 
 all builds, the tests are launched.    These are treated just like builds, so the process IDs are stored as in the case of builds. 

 The last method, _installPkg_, in the case of a tested package, waits for the tests to complete, then installs the package if the tests completed successfully, after which it marks the tests as installed, so that tests will not be run again unless the version or dependencies of the package change. In the other case, where the package is not being tested, it waits for the builds to complete and installs any successful builds. 

 All three methods for any package are supposed to compensate for any errors or omissions in the build systems. Such compensations include fixing up library dependencies on Darwin, setting permissions of the installed software, and so forth.  

 The object-oriented analogy is that each package-build object has an interface with three methods.    The syntax translation is _buildPkg_ -> _pkg.build_. 

 ### Private package-build data  

 In the course of its build, any package will generate other variables with values. These are organized on a per-build basis, and so one can think of each package-build object as containing an object for each build of that package.   

 ### Internal objects  

 Builds 

 Tests 

 ### Combined package objects 

 ### Future directions 


 # Linear Algebra Libraries in Bilder 

 There are a wide variety of ways to get LAPACK and BLAS: Netlib's libraries (reference LAPACK and BLAS), CLapack (for when one does not have a Fortran compiler), ATLAS (for cpu-tuned libraries), GOTOBLAS (from TACC), and system libraries (MKL, ACML). 

 For numpy and all things that depend on it, Bilder uses ATLAS (if it has been built), and otherwise it uses LAPACK. 

 For non-Python packages, the current logic is: 

 ## Darwin  

 Always use -framework Accelerate 

 ## Linux and Windows  

 * SYSTEM_LAPACK_SER_LIB and SYSTEM_BLAS_SER_LIB are used if set. 
 * Otherwise, if USE_ATLAS is true, then ATLAS is used. 
 * Otherwise, use Netlib LAPACK if that is found. 
 * Otherwise 
     * If on Windows, use CLAPACK 
     * If on Linux, use any system blas/lapack 

 The results of the search are put into the variables CMAKE_LINLIB_SER_ARGS, CONFIG_LINLIB_SER_ARGS, and LINLIB_SER_LIBS. 
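
 For example, one could force particular system libraries for a run like this (paths hypothetical; the exact value format Bilder expects may differ):

 ~~~~~~
 env SYSTEM_LAPACK_SER_LIB=/usr/lib64/liblapack.so \
     SYSTEM_BLAS_SER_LIB=/usr/lib64/libblas.so \
     ./mkvisitall.sh
 ~~~~~~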


 # Extending Bilder 

 Bilder builds packages using the general logic in bildfcns.sh, the operating-system logic in bildvars.sh, 
 logic for a particular package in the _Bilder package file_ (kept in the packages subdir), logic for a 
 particular machine in the _Bilder machine file_ (kept in the machines subdir), and additional settings for a particular package on a particular machine in the Bilder machine file.    To extend Bilder, one adds the files that introduce the particular logic for a package or a machine. 

 * [[Adding support for a new package]] 
 * [[Adding support for a new machine or operating system/compiler combination]] 

 # Debugging your builds 

 This section describes some of the things that can go wrong and explains how to fix them. 

 # Bilder With IDEs 

 This is a page to collect notes on using IDEs to develop code while at the same time using bilder to build the project. 

 * Reference for using Eclipse with CMake, http://www.vtk.org/Wiki/CMake:Eclipse_UNIX_Tutorial