Wiki » History » Version 5

Redmine Admin, 11/19/2015 01:18 PM

# What is Bilder?

{{>toc}}

Bilder is a cross-platform (Linux, OS X, Windows) meta-build, or package management, system applicable to LCFs such as the IBM Blue Gene and the Cray series of computers.  It automatically downloads packages, then configures, builds, and installs them. It handles updating and building of a collection of packages and projects that have dependency relationships: when one package is updated, Bilder ensures that its dependents are updated as well. It can install in common areas, so that multiple packages can make use of the same dependency builds.

As of January 16, 2012, Bilder handles multiple builds of over 150 packages, with the multiple builds being, e.g., serial, parallel, shared, or static, as needed.  The platforms include Linux, OS X, AIX, and the specialized Linuxes found on the IBM Blue Gene P and the Cray XT4.  It handles the compiler sets of gcc, XL, PathScale, and PGI.

Bilder does not replace build systems.  Instead, it works with the build system that comes with each package.  It supports packages whose build systems are autotools, CMake, qmake, or Distutils, as well as the one-off build systems of, e.g., lapack, ATLAS, and PETSc.  In essence, Bilder acts as a repository of build knowledge.

## Bilder Characteristics

* Build workflow automation, handling inter-package dependencies, with builds triggered when a dependency has been built.
* Soft inter-package dependencies: suppose component A depends on B, and B is updated but does not build (it fails or is excluded).  Bilder attempts to build A anyway if any other dependency is rebuilt or if A itself is updated, as the newer A may be consistent with an existing installation of B, or A may be able to build without B.
* Integration with version control systems.
* Integration with testing.
* Support for multiple OSs: Linux, OS X, Windows.
* Support for multiple compiler sets (gcc, XL, PGI, PathScale, Visual Studio):
  * LCFs have particular preferred compilers, e.g., for which some libraries have been built
  * One needs to compare the performance of code generated by different compilers
  * Pre-built libraries (HDF5, LAPACK) have to be used when possible for performance
* Ability to use different underlying package configuration/build systems.
* Support for different kinds of builds (e.g., parallel, serial, static, shared) for any package.
* Collection of build provenance information, including logging of all steps and notification via email and dashboards.
* Allows disabling the builds of particular packages (e.g., so that a system version will be used).
* Parallel (multi-threaded or multi-process) builds of independent builds or packages.
* Out-of-place build and installation, with defaults and also user-specified locations.
* Defaults for all parameters on all supported platforms that can be overridden by users.
* Integration with the Jenkins continuous integration tool.
* Searching for packages within the installation area.
* Isolation of general logic from specific logic and data:
  * General logic in the top-level Bilder files
  * Package-specific logic and data in package files (the files in the package subdirectory)
  * Machine-specific logic and data in machine files (the files in the machines subdirectory)

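The soft-dependency rule above can be sketched as a small predicate. This is a simplification for illustration only, not Bilder's actual code:

```shell
# Simplified sketch of the soft-dependency rule: attempt to build A when
# A itself was updated or when any other dependency of A was rebuilt,
# even though B failed to build or was excluded.
should_build_A() {
  a_updated=$1          # "yes" if A's source was updated
  other_dep_rebuilt=$2  # "yes" if any other dependency of A was rebuilt
  [ "$a_updated" = yes ] || [ "$other_dep_rebuilt" = yes ]
}

should_build_A yes no && echo "attempt to build A"
```

If the attempt succeeds, the newer A may be consistent with the existing installation of B, or A may build without B at all.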
## What does Bilder not handle?

* Installing compilers
* Probably much more

# Preparing your machine for Bilder

* [[Preparing a Windows machine for Bilder]]
* [[Preparing a Linux machine for Bilder]]
* [[Preparing a Mac machine for Bilder]]

Then check out a Bilder repo and build. Below are some examples.

# EXAMPLE1: Python Packages

Build ipython, scipy, and tables with one command! This will build these packages and all of their dependencies: ipython, scipy, tables, tornado, pyzmq, pyqt, matplotlib, hdf5, numexpr, setuptools, zeromq, Cython, qt, sip, numpy, Python, atlas, clapack_cmake, chrpath, sqlite, bzip2, lapack, and cmake.

~~~~~~
svn checkout https://svn.code.sf.net/p/bilder/code/pypkgs/trunk pypkgs
cd pypkgs
./mkpypkgs.sh
~~~~~~

# EXAMPLE2: VisIt Visual Analysis Package

Build the [VisIt](https://wci.llnl.gov/codes/visit/) visualization tool with one command! This will build VisIt and all its dependencies: visit, Imaging, visit_vtk, qt, mesa, hdf5, openmpi, zlib, cmake, and bzip2.

~~~~~~
svn checkout https://svn.code.sf.net/p/bilder/code/visitall/trunk visitall
cd visitall
./mkvisitall.sh
~~~~~~


# Getting Bilder

Bilder is a set of shell scripts for configuring and building software. All of the scripts are available from a Subversion repository. To check out Bilder, enter:

~~~~~~
svn co http://svn.code.sf.net/p/bilder/code/trunk bilder
~~~~~~

# Configuring Bilder

## Required configuration information

Before running Bilder, you need to tell it where its configuration information is.  This is a directory; set the environment variable BILDER_CONFDIR to its location (e.g., BILDER_CONFDIR=/etc/bilder).

Inside that directory there must be at least two files.  The first, _bilderrc_, defines a variable, _PACKAGE_REPOS_FILE_, that contains the name of the file listing the repositories to be searched for package tarballs.  E.g.,

~~~~~~
PACKAGE_REPOS_FILE=${PACKAGE_REPOS_FILE:-"$BILDER_CONFDIR/numpkgssvn.txt"}
~~~~~~

This follows the standard Bilder style: no variable that already has a value is overwritten.  This allows the person executing the build to override any variable value on the command line, e.g., using env.

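The `${VAR:-default}` idiom behind this style can be tried in isolation: a value supplied by the caller survives, while an unset variable picks up the default. The paths here are illustrative:

```shell
# Bilder-style default: assign only if the caller has not already set it.
unset PACKAGE_REPOS_FILE
PACKAGE_REPOS_FILE=${PACKAGE_REPOS_FILE:-"/etc/bilder/numpkgssvn.txt"}
echo "$PACKAGE_REPOS_FILE"   # the default, since nothing was set

# A caller-supplied value is left untouched:
PACKAGE_REPOS_FILE=/tmp/myrepos.txt
PACKAGE_REPOS_FILE=${PACKAGE_REPOS_FILE:-"/etc/bilder/numpkgssvn.txt"}
echo "$PACKAGE_REPOS_FILE"   # still /tmp/myrepos.txt
```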
The package repos file then contains the repos to be searched for packages, in the format:

~~~~~~
    $ cat numpkgssvn.txt 
    ####
    #
    # File:    numpkgssvn.txt
    #
    # Purpose: List the package repos in the format,
    #          subdir,method=URL
    #          where subdir is the desired location for the repo,
    #          method = svn to get by svn, empty to get with wget,
    #          URL is the resource locator
    #
    # Version: $Id: numpkgssvn.txt 54 2012-04-08 13:52:09Z cary $
    #
    ####
    PACKAGE_REPO: numpkgs,svn=https://ice.txcorp.com/svnrepos/code/numpkgs/trunk
~~~~~~

Each line starting with _PACKAGE_REPO:_ defines the subdir (in this case numpkgs) into which the packages are put, the method (in this case svn) for getting the packages, and, after the equals sign, the URL of the directory containing all of the packages.

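A minimal sketch (not Bilder's actual parser) of splitting such a line with POSIX parameter expansion, using the `subdir,method=URL` format described above:

```shell
# Hypothetical parser for one PACKAGE_REPO: line in subdir,method=URL form.
line="PACKAGE_REPO: numpkgs,svn=https://ice.txcorp.com/svnrepos/code/numpkgs/trunk"
spec=${line#"PACKAGE_REPO: "}   # drop the leading tag
subdir=${spec%%,*}              # part before the comma
rest=${spec#*,}
method=${rest%%=*}              # part between the comma and the equals sign
url=${rest#*=}                  # part after the equals sign
echo "$subdir $method $url"
```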

For a method of svn (the part between the comma and the equals sign), the repo is first checked out empty,

~~~~~~
svn co --depth=empty https://ice.txcorp.com/svnrepos/code/numpkgs/trunk numpkgs
~~~~~~

and packages are then obtained by

~~~~~~
svn up pkgname
~~~~~~

in the numpkgs subdirectory.

## Optional logic in bilderrc

It can happen that "hostname -f" does not give the fully qualified hostname for your machine.  In this case, you can define __FQHOSTNAME__ to contain that hostname.

You can also define the following three methods:

* _bilderGetAuxData_ defines how to get any auxiliary data needed by a package
* _bilderFinalAction_ defines a final action (like posting to a dashboard) to be undertaken at the end of a build run
* _signInstaller_ signs any installers created during your build

## Optional additional logic

You can provide domain-specific logic, such as default installation directories, in files named after the domain name.  Examples are found in bilder/runnr.  E.g.,

~~~~~~
    $ cat nersc.gov 
    ##############################################################
    ##
    ## File:    nersc.gov
    ##
    ## Purpose: Helper functions for setting variables and queues by domain
    ##
    ## Version: $Id: nersc.gov 5644 2012-04-02 13:35:02Z cary $
    ##
    ## /* vim: set filetype=sh : */
    ##
    ##############################################################
    #
    # Adjust the auxiliary names:
    #   MAILSRVR, INSTALLER_HOST, INSTALLER_ROOTDIR, FQMAILHOST, BLDRHOSTID
    #
    runnrSetNamesByDomain() {
    # Hosts for which FQMAILHOST is not obvious.  Also ensure that an
    # install host name is set for all cases.
      case $UQHOSTNAME in
        cvrsvc[0-9]*)
          FQMAILHOST=carver.nersc.gov
          ;;
        dirac[0-9]*)
          FQMAILHOST=dirac.nersc.gov
          ;;
        freedom[0-9]*)
          FQMAILHOST=freedom.nersc.gov
          RUNNRSYSTEM=XT4
          ;;
        hopper[01][0-9]*)
          FQMAILHOST=hopper.nersc.gov
          RUNNRSYSTEM=XE6
          ;;
        nid[0-9]*)
          FQMAILHOST=franklin.nersc.gov
          RUNNRSYSTEM=XT4
          ;;
      esac
    }
    runnrSetNamesByDomain
    cat >/dev/null <<EOF  ## (Block comment)
    MODULES AT NERSC
~~~~~~


This is an incomplete list of the modules that have to be loaded on the machines that use modules.

~~~~~~
FRANKLIN:
Currently Loaded Modulefiles:
    1) modules/3.1.6.5
    2) moab/5.2.5
    3) torque/2.4.1b1-snap.200905131530
    4) xtpe-barcelona
    5) xtpe-target-cnl
    6) MySQL/5.0.45
    7) xt-service/2.1.50HDB_PS13A
    8) xt-libc/2.1.50HDB_PS13A
    9) xt-os/2.1.50HDB_PS13A
   10) xt-boot/2.1.50HDB_PS13A
   11) xt-lustre-ss/2.1.50HDB_PS13A_1.6.5
   12) Base-opts/2.1.50HDB_PS13A
   13) PrgEnv-gnu/2.1.50HDB_PS13A
   14) xt-asyncpe/3.3
   15) xt-pe/2.1.50HDB_PS13A
   16) xt-mpt/3.5.0
   17) xt-libsci/10.4.0
   18) gcc/4.4.1
   19) java/jdk1.6.0_07
   20) python/2.6.2
   21) subversion/1.6.4
   22) szip/2.1
~~~~~~


~~~~~~
HOPPER:
Currently Loaded Modulefiles:
    1) modules/3.1.6             9) xt-asyncpe/3.4
    2) torque/2.4.1b1           10) PrgEnv-pgi/2.2.41
    3) moab/5.3.4               11) xtpe-target-cnl
    4) pgi/9.0.4                12) eswrap/1.0.5
    5) xt-libsci/10.4.0         13) xtpe-shanghai
    6) xt-mpt/3.5.0             14) gcc/4.3.3
    7) xt-pe/2.2.41             15) java/jdk1.6.0_15
    8) xt-sysroot/2.2.20090720  16) szip/2.1
~~~~~~


~~~~~~
CARVER:
bilder needs to find either a pgi or a gcc module in your modules list.
EOF
~~~~~~


~~~~~~
    #
    # Determine RUNNR_QTARGET, RUNNR_QUEUE, RUNNR_ACCOUNT, RUNNR_PPN
    #
    runnrSetQInfoByDomain() {
      RUNNR_QTARGET=${RUNNR_QTARGET:-"headnode"}
      local fqdn
      if ! fqdn=`hostname -f 2>/dev/null`; then
        fqdn=`hostname`
      fi
      case $SCRIPT_NAME in
        mkfcall | mkfcpkgs)
          RUNNR_ACCOUNT=${RUNNR_ACCOUNT:-"m681"}    # FACETS
          ;;
        mkvpall)
          RUNNR_ACCOUNT=${RUNNR_ACCOUNT:-"m778"}    # ComPASS
          ;;
        *)
          RUNNR_ACCOUNT=${RUNNR_ACCOUNT:-"m778"}    # ComPASS
          ;;
      esac
      RUNNR_QUEUE=${RUNNR_QUEUE:-"regular"}
      RUNNR_NCPUSVAR=mppwidth
    }
    runnrSetQInfoByDomain
~~~~~~


~~~~~~
    #
    # Set default options.  This has to be called after option parsing.
    # Should set
    #  CONTRIB_ROOTDIR    The root directory for common installations of tarballs
    #  INSTALL_ROOTDIR    The root directory for common installations of repos
    #  USERINST_ROOTDIR   The root directory for user installations (same for
    #                     tarballs and repos)
    #  INSTALL_SUBDIR_SFX Added to subdir (software, contrib, volatile, internal)
    #                     to complete the installation dir
    #  BUILD_ROOTDIR      Where builds are to take place
    #  BILDER_ADDL_ARGS   Any additional args to pass to Bilder
    #  MACHINEFILE        The machine file to use
    #
    setBilderHostVars() {
      #
      # Preliminary variables
      #   Determine the compiler and version for machinefile and namespacing
      #
      local compkey=`modulecmd bash list -t 2>&1 | grep PrgEnv | sed -e 's/^PrgEnv-//' -e 's?/.*??'`
      # echo compkey = $compkey
      if test -z "$compkey"; then
        local comp=
        for comp in pgi gcc gnu; do
          compkey=`module list -t 2>&1 | grep ^$comp | sed -e 's?/.*$??'`
          if test -n "$compkey"; then
            break
          fi
        done
      fi
      if test -z "$compkey"; then
        echo "Cannot determine the compkey.  Quitting."
        exit 1
      fi
      # echo "compkey = $compkey."
      case $compkey in
        gnu)   compkey=gcc;;
        path*) compkey=path;;
      esac
      echo compkey = $compkey
      local compver=`modulecmd bash list -t 2>&1 | grep ^$compkey | sed -e 's?^.*/??'`
      local majorminor=`echo $compver | sed -e "s/\(^[^\.]*\.[^\.]*\).*/\1/"`
      compver=$majorminor
      echo compver = $compver
      # echo "Quitting in nersc.gov."; exit
~~~~~~


~~~~~~
      # Set the installation and project subdirs
      CONTRIB_ROOTDIR=/project/projectdirs/facets
      if test -z "$PROJECT_INSTSUBDIR"; then
        echo "PROJECT_INSTSUBDIR not set.  Quitting."
        exit 1
      fi
      INSTALL_ROOTDIR=/project/projectdirs/$PROJECT_INSTSUBDIR
      local machinedir=$UQMAILHOST
      if test $UQMAILHOST = freedom; then
        machinedir=franklin
      fi
      CONTRIB_ROOTDIR=$CONTRIB_ROOTDIR/$machinedir
      USERINST_ROOTDIR=$INSTALL_ROOTDIR/$USER/$machinedir
      INSTALL_ROOTDIR=$INSTALL_ROOTDIR/$machinedir
      INSTALL_SUBDIR_SFX="-$compkey-$compver"
~~~~~~


~~~~~~
      # Set the build directory
      if test -n "$GSCRATCH"; then
        BUILD_ROOTDIR=${BUILD_ROOTDIR:-"$GSCRATCH/builds-${UQHOSTNAME}-$compkey"}
      elif test -n "$SCRATCH"; then
        BUILD_ROOTDIR=${BUILD_ROOTDIR:-"$SCRATCH/builds-${UQHOSTNAME}-$compkey"}
      fi
~~~~~~


~~~~~~
      # Add to BILDER_ARGS
      BILDER_ADDL_ARGS=-P
~~~~~~


~~~~~~
      # Set machine file
      case $machinedir in
        hopper | franklin) MACHINEFILE=${MACHINEFILE:-"cray.$compkey"};;
        *) MACHINEFILE=${MACHINEFILE:-"nersclinux.$compkey"};;
      esac
    }
~~~~~~


This file may also, as seen above, define the method setBilderHostVars, which sets the various variables defining where builds should take place, where installations should go, etc.

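The version-trimming step in the listing above (reducing compver to major.minor) can be exercised on its own; the version string here is illustrative:

```shell
# Reduce a full version string to major.minor, as setBilderHostVars does.
compver="4.4.1"
majorminor=`echo $compver | sed -e "s/\(^[^\.]*\.[^\.]*\).*/\1/"`
echo $majorminor   # 4.4
```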

# Running Bilder

## Running Bilder for the Novice User

First you will need to check out a ''meta-project'' svn repo that includes the source that you want to build along with the Bilder scripts repo.

For example, Tech-X maintains the _visitall_ repo, which can be obtained by:

~~~~~~
svn co https://ice.txcorp.com/svnrepos/code/visitall/trunk visitall
~~~~~~

If the Bilder'ized project has a script named "mk<project>all-default.sh", where <project> is the project name, possibly abbreviated (e.g., for visitall the script is mkvisitall-default.sh), then this is the easiest way to run Bilder. The options of a top-level "default" Bilder script can be seen by running the script with the -h flag:

~~~~~~
    $ ./mkvisitall-default.sh -h
    source /Users/cary/projects/visitall/bilder/runnr/runnrfcns.sh
    Usage: ./mkvisitall-default.sh [options]
    This script is meant to handle some of the vagaries that occur at LCFs and
    clusters in large systems (which have complicated file systems) such as those
    that have high performance scratch systems and NFS mounted home systems. This
    script is also meant to ease the use of non-gfortran compilers.
    OPTIONS
    -c              common installations: for non-LCFS, goes into /contrib,
                    /volatile or /internal, for LCFSs, goes into group areas
    -C              Install in separate tarball and repo install dirs
                    (internal/volatile) rather than in one area (software).
    -E <env pairs>  Comma-delimited list of environment var=value pair
    -f <file>       File that contains extra arguments to pass
                    Default: .extra_args
    -F <compiler>   Specify fortran compiler on non-LCF systems
    -g              Label the gnu builds the same way other builds occur.
    -H <host name>  use rules for this hostname (carver, surveyor, intrepid)
    -h              print this message
    -i              Software directory is labeled with "internal" if '$USER'
                    is member of internal install list
    -I              Install in $HOME instead of default location
                    (projects directory at LCFs, BUILD_ROOTDIR on non-LCFs)
    -j              Maximum allowed value of the arg of make -j
    -k              On non-LCFs: Try to find a tarball directory (/contrib)
                    On LCFs:     Install tarballs (instead of using facetspkgs)
    -m              force this machine file
    -n              invoke with a nohup and a redirect output
    -p              just print the command
    -q <timelimit>  run in queue if possible, with limit of timelimit time
    -t              Pass the -t flag to the  mk script (turn on testing)
    -v <file>       A file containing a list (without commas) of declared
                    environment variables to be passed to mk*.sh script
    -w <file>       Specify the name of a file which has a comma-delimited
                    list of packages not to build (e.g.,
                    plasma_state,nubeam,uedge) Default: .nobuild
    --              End processing of args for mkall-default.sh, all remaining
                    args are passed to the script.
~~~~~~


For this script to work, you must have defined the location of your Bilder configuration directory in the environment variable BILDER_CONFDIR.  This is discussed more in the Configuring Bilder section above.


## Running Bilder for the Advanced User

In the Bilder'ized project there will be a script named "mk<project>all.sh", where <project> is the project name, possibly abbreviated (e.g., for visitall the script is mkvisitall.sh). The options of a top-level Bilder script can be seen by running the script with the -h flag:

~~~~~~
    $ ./mkvisitall.sh -h
    /Users/cary/projects/visitall/bilder/runnr/runnrfcns.sh sourced.
    Usage: ./mkvisitall.sh [options]
    GENERAL OPTIONS
      -A <addl_sp>        Add this to the supra search path
      -b <build_dir>      Build in <build_dir>
      -B <build_type>     CMake build type
      -c ............... Configure packages but don't build
      -C ............... Create installers
      -d ............... Create debug builds (limited package support)
      -D ............... Build/install docs
      -e <addr>          Email log to specified recipients
      -E <env pairs>.... Comma-delimited list of environment var=value pair
      -F ............... Force installation of packages that have local
                         modifications
      -g ............... Allow use of gfortran with version <4.3
      -h ............... Print this message
      -i <install_dir>   Set comma delimited list of installation directories
                         for code in subdirs, expected to be svn repos; install
                         in first directory unless command line contains -2,
                         in which case install in the last directory.
                         <install_dir> defaults to $HOME/software if not set.
      -I ............... Install even if tests fail (ignore test results)
      -j <n>             Pass arg to make with -j
      -k <tarball_dir>   Set installation directory for code in tarballs,
                         expected to be found in one of the pkg repo subdirs;
                         <tarball_dir> defaults to <install_dir> if not set.
      -l <mpi launcher>  The executable that launches an MPI job
      -L ............... Directory for logs (if different from build)
      -m <hostfile>      File to source for machine specific defs
      -M ............... Maximally thread
      -o ............... Install openmpi if not on cygwin.
      -O ............... Install optional packages = ATLAS, parallel visit, ...
      -p <path>          Specify a supra-search-path
      -P ............... Force build of python(does not apply to OS X or Windows)
      -r ............... Remove other installations of a package upon successful
                         installation of that package
      -R ............... Build RELEASE (i.e., licensed) version of executable,
                         if applicable.
      -S ............... Build static
      -t ............... Run tests
      -u ............... Do "svn up" at start
      -U ............... Do not get (direct access or svn up) tarballs
      -v ............... Verbose: print debug information from bilder
      -w <wait days>      Wait this many days before doing a new installation
      -W <disable builds> Build without these packages (comma delimited list)
                          e.g., -W nubeam,plasma_state
      -X ............... Build experimental (new) versions of packages
      -Z ............... Do not execute the final action
      -2 ............... Use the second installation directory of the comma
                         delimited list.  Causes -FI options.
~~~~~~


## Notes on Installation Directories and Path Modifications

Bilder builds all software, when possible, in ''the build directory'', or <builddir>, which is specified by the _-b_ flag.  It also unpacks tarballs into this directory before building them.

Bilder defines two installation directories, which may be the same.

Tarballs are installed in ''the tarball directory'', or \<tarballdir\>, specified by the _-k_ flag. This is the _/contrib_ directory at Tech-X.

Code from repositories is installed in ''the repo directory'', or \<repodir\>, the directory specified by the _-i_ flag.  At Tech-X, this is typically _/volatile_ or _/internal_.

If only one of the above directories is specified, then the other defaults to the specified directory.  If neither is specified, then both default to _$HOME/software_.

During the build process, _/contrib/autotools/bin:/contrib/valgrind/bin:/contrib/mpi/bin:/contrib/hdf5/bin:/contrib/bin_ is added to the front of the path so that the installed packages are used to build the packages.

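That path modification amounts to a simple prepend; a sketch using the directory list quoted above:

```shell
# Prepend the /contrib tool directories so that freshly installed
# packages are found ahead of any system versions.
PATH=/contrib/autotools/bin:/contrib/valgrind/bin:/contrib/mpi/bin:/contrib/hdf5/bin:/contrib/bin:$PATH
export PATH
first=`echo "$PATH" | cut -d: -f1`
echo "$first"   # /contrib/autotools/bin
```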
## Debugging Bilder Errors

Bilder is a set of bash scripts. The [trunk version of the scripts](https://ice.txcorp.com/svnrepos/code/bilder/trunk/) will tell you exactly what Bilder is doing, if you know bash programming.

# Bilder's Build Types

The standard builds of Bilder are:

* ser: static, serial build
* par: static, parallel (MPI) build
* sersh: shared, serial build
* parsh: shared, parallel (MPI) build
* cc4py: shared build compatible with the way Python was built

The Bilder standard is to install each build in its own directory.  While libtool allows shared and static builds to be done within the same build, CMake generally does not, as discussed in the [CMake FAQ](http://www.cmake.org/Wiki/CMake_FAQ#Library_questions).  Further, to do this on Windows, library names have to differ, as otherwise the static library and the shared interface library files would overwrite each other.  So in the end, it is simply easier to install shared and static libraries in their own directories.

In all cases, the builds are to be "as complete as possible".  E.g., for HDF5 on Darwin, shared libraries are not supported with Fortran, so in this case sersh has to disable the Fortran libraries.  However, completeness may depend on other criteria.  For example, for Trilinos, complete builds are provided, but so are builds that are as complete as possible while remaining compatible with licenses that allow free reuse in commercial products.

## Static builds

The static builds are the most portable, as they eliminate or minimize the need to be compatible with any system shared libraries.  They are also the most widely supported.  For Windows, these are libraries that import the static runtime library (libcmt).  Generally this means that, on Windows, one should not use a static dependency for a shared build of a project, as doing so typically leads to the dreaded runtime conflict, e.g., http://stackoverflow.com/questions/2360084/runtime-library-mis-matches-and-vc-oh-the-misery.


## Shared builds

Shared builds allow one to reuse libraries among executables, but then one has the difficulty of finding those libraries at runtime.  This can be particularly difficult when moving an installation from one machine to another or when installing a package.  To minimize these headaches, Bilder uses rpath on Linux as much as possible.  However, packages need to figure out how to modify any executables or libraries post-build to make an installer.


## Cc4py builds

This is a special build: a shared build using the compiler that Python itself was compiled with.  This is generally gcc on Unices and Visual Studio on Windows.  One adds a cc4py build only when the serial compiler is not the compiler used to build Python.


# Bilder Hierarchy

It is possible to specialize Bilder per machine, per project, and per person by sourcing files at each level of the hierarchy:

## Bilder default settings

When no specialization files are used, Bilder uses the default settings for the project.

## By Machine

A set of machine files under the bilder/machines directory specifies machine-specific variables and settings. For example, to build a project on the Windows platform with Cygwin using Visual Studio 9, there is a cygwin.vs9 machine file which sets up the environment as needed by Visual Studio 9. A machine file can be specified with the "-m" option.


## By Project

Please see [[Configuring Bilder]] for how to set up per-project configurations. Here you can specify information needed for the project, such as where to obtain third-party dependency libraries, default installation directories, where builds should take place, where installations should go, etc.

## By Person

### Default settings using .bilddefrc

Anyone building a project with Bilder can specify their own default settings by creating a .bilddefrc file in their home directory. This file is sourced in the mkXYZall-default.sh script to override any other default project settings.

### Settings using .bilderrc

Anyone building a project with Bilder can specify their own settings by creating a .bilderrc file in their home directory. This file is sourced in the mkXYZall.sh script to override any other project settings.

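For illustration, a hypothetical ~/.bilderrc might look like the following; FQHOSTNAME and bilderFinalAction are described earlier on this page, while the body of the action is made up:

```shell
# Hypothetical ~/.bilderrc: personal overrides sourced by mkXYZall.sh.
# Bilder style: ${VAR:-...} lets values already set by the caller win.
FQHOSTNAME=${FQHOSTNAME:-"mymachine.example.com"}

# A made-up final action; a real one might post results to a dashboard.
bilderFinalAction() {
  echo "build finished on $FQHOSTNAME"
}
```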
## Per package per person

Where it is necessary to specify settings per package per person, an XYZ.conf file can be placed in the BILDER_CONFDIR/packages directory. If found, this file is sourced in the mkXYZ.sh script to override all other settings. If this file is modified, Bilder will reconfigure and rebuild the package.

# Running Bilder Through The Defaults Scripts

Bilder has many options, and this gives rise to the potential for mistakes. To reduce mistakes, we have created defaultsfcns.sh and mkall-default.sh; the associated defaults scripts include the latter and execute runBilderCmd:

~~~~~~
    $ cat mkfcall-default.sh 
    #!/bin/bash
    #
    # Determine (and possibly execute) the default Bilder command
    # for Facetsall.
    #
    # $Id: mkfcall-default.sh 593 2012-03-09 15:26:46Z cary $
    #
    #########################################################
    # 
    # Set the default variables
    mydir=`dirname $0`
    mydir=${mydir:-"."}
    mydir=`(cd $mydir; pwd -P)`
    # Where to find configuration info
    BILDER_CONFDIR=$mydir/bilderconf
    # Subdir under INSTALL_ROOTDIR where this package is installed
    PROJECT_INSTSUBDIR=facets
    source $mydir/bilder/mkall-default.sh

    # Build the package
    runBilderCmd
    res=$?
    exit $res
~~~~~~


The options,

~~~~~~
    $ ./mkfcall-default.sh -h
    source /Users/cary/projects/facetsall/bilder/runnr/runnrfcns.sh
    WARNING: runnrGetHostVars unable to determine the domain name.
    Usage: ./mkfcall-default.sh [options]
    This script is meant to handle some of the vagaries that occur
    at LCFs and clusters in large systems (which have complicated file
    systems) such as those that have high performance scratch systems
    and NFS mounted home systems.  This script is also meant to ease
    the use of non-gfortran compilers.
    OPTIONS
      -c              common installations: for non-LCFS, goes into /contrib,
                      /volatile or /internal, for LCFSs, goes into group areas
      -C              Install in separate tarball and repo install dirs
                      (internal/volatile) rather than software
      -E "<options>"  quoted list of extra options to pass to the mk script
      -f <file>       File that contains extra arguments to pass
                      Default: .extra_args
      -F <compiler>   Specify fortran compiler on non-LCF systems
      -g              Label the gnu builds the same way other builds occur.
      -H <host name>  use rules for this hostname (carver, surveyor, intrepid)
      -h              print this message
      -i              Software directory is labeled with "internal" if '$USER'
                      is member of internal install list
      -I              Install in $HOME instead of default location
                      (projects directory at LCFs, BUILD_ROOTDIR on non-LCFs)
      -j              Maximum allowed value of the arg of make -j
      -k              On non-LCFs: Try to find a tarball directory (/contrib)
                      On LCFs:     Install tarballs (instead of using facetspkgs)
      -m              force this machine file
      -n              invoke with a nohup and a redirect output
      -p              just print the command
      -q <timelimit>  run in queue if possible, with limit of timelimit time
      -t              Pass the -t flag to the  mk script (turn on testing)
      -v <file>       A file containing a list (without commas) of declared
                      environment variables to be passed to mk*.sh script
      -w <file>       Specify the name of a file which has a comma-delimited
                      list of packages not to build (e.g.,
                      plasma_state,nubeam,uedge) Default: .nobuild
      --              End processing of args for mkall-default.sh, all remaining
                      args are passed to the script.
~~~~~~


The options mostly deal with which directory to install into, the time limit for the build, and any extra options to pass to the build, whether on the command line or in a file.

An example invocation looks like

~~~~~~
mkfcall-default.sh -cin -- -oXZ -E BUILD_ATLAS=true
~~~~~~

which will (c) install in areas common to all users, (i) using the internal rather than the volatile directory for repo installations, (n) in the background via nohup. The arguments after -- are passed to the base script: (o) build OpenMPI if on OS X or Linux, (X) build the newer, experimental packages, (Z) do not invoke the user-defined bilderFinalAction method, and (E) set this comma-delimited list of environment variables, in this case to build ATLAS if on Linux or Windows.
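The splitting at `--` can be sketched in Bash. Here `splitArgs` is a hypothetical helper for illustration, not part of Bilder: flags before `--` belong to the wrapper script, and everything after is passed through unchanged.

```shell
# Hypothetical sketch of the "--" convention used by mkall-default.sh.
splitArgs() {
  local myargs= passargs=
  while [ $# -gt 0 ]; do
    if [ "$1" = "--" ]; then
      shift
      passargs="$*"    # everything after -- goes to the underlying mk script
      break
    fi
    myargs="$myargs $1"
    shift
  done
  echo "wrapper:[${myargs# }] passed:[$passargs]"
}
```

For example, `splitArgs -cin -- -oXZ -E BUILD_ATLAS=true` reports `-cin` as the wrapper's options and the rest as pass-through arguments.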
# Using Jenkins with Bilder

### Setting up Jenkins for use with Bilder ###

This set of pages describes how to set up the Jenkins continuous integration tool for launching Bilder jobs (which then handle the builds and testing). It is not intended to describe the most general way to set up Jenkins; instead it describes a way that relies on having a Linux master node.

## Starting up a Linux Jenkins master node

Install Jenkins using the installation mechanism for your platform.  E.g., see
https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+on+RedHat+distributions.

*IMPORTANT:* Before starting Jenkins for the first time:
* create the directory where Jenkins will do its builds (known as _JENKINS_HOME_, not to be confused with the Jenkins home directory in /etc/passwd, which is initially set to /var/lib/jenkins and which we will assume here)
* set the permissions of the Jenkins build directory (e.g., /home/bilder/jenkins)
* add jenkins to any groups as needed (e.g., contrib, research, xxusers)
* modify `/etc/sysconfig/jenkins` as needed.  Our settings are

~~~~~~
    JENKINS_HOME="/home/bilder/jenkins"
    JENKINS_PORT="8300"
    JENKINS_AJP_PORT="8309"
    JENKINS_ARGS="--argumentsRealm.passwd.jenkins=somepassword --argumentsRealm.roles.jenkins=admin"
~~~~~~

(_somepassword_ is not literal.)

Create an ssh key for jenkins:

~~~~~~
    sudo -u jenkins ssh-keygen
~~~~~~

It cannot have a passphrase.

Start the jenkins service:

~~~~~~
    sudo service jenkins start
~~~~~~

Set Jenkins to start on boot:

~~~~~~
    sudo chkconfig --level 35 jenkins on
~~~~~~

## Preparing a Unix Jenkins slave node

We will have one node prepared to act as a Jenkins slave for now.  For ease, we will create a Unix slave.  Later we will add more slaves.

* On the service node, create the user who will run Jenkins.
* As that user, create the directory where Jenkins will work.
* Add that user to any groups needed to give it appropriate permissions (e.g., contrib, research, xxusers).
* For public-key authentication:
  * Add the public key created above for jenkins to that user's ~/.ssh/authorized_keys
  * On the master, check that you can do passwordless login by trying: "_sudo -u jenkins ssh jenkins@yourhost_"
* For password authentication:
  * Configure /etc/sshd_config to allow password authentication (PasswordAuthentication yes) and restart sshd

## Configuring the Linux Jenkins master node

* Open a browser, go to _master.yourdomain:8300_, and log in as admin with the password that you set in the JENKINS_ARGS variable above.
* Go to Manage Jenkins -> Manage Plugins -> Available and install the plugins:
  * Jenkins cross-platform shell (XShell)
  * Conditional Build-Step
  * Matrix Tie Parent
  * Jenkins build timeout (Build-timeout)
* Go to Manage Jenkins -> Manage Users and then use _Create User_ to create the users for your Jenkins installation. Make sure to create an administrative user (perhaps yourself).
* Go to Manage Jenkins -> Configure System and select/set:
  * Enable security
  * Jenkins's own user database
  * _If you wish_, allow users to sign up
  * Project-based Matrix Authorization Strategy
      * Add an administrator name with all privileges
      * Give the anonymous user Overall Read (only)
  * Default user e-mail suffix: e.g., @yourdomain
  * Sender: jenkins@yourdomain
* Go to Manage Jenkins -> Manage Nodes -> New Node
  * Fill in the name
  * Dumb Slave
  * You are taken to the configure form:
      * \# of executors = 1
      * Remote FS root: what you decided upon when creating the slave
      * Usage: Leave this machine for tied jobs only
      * Launch method: Launch slave agents on Unix machines via SSH
      * Advanced:
          * Host
          * Username (jenkins)
## Creating your first Bilder-Jenkins project

We will create the first project to build on the master node. Later we will add more nodes.

* Go to Jenkins -> New Job
    * Build multi-configuration project
* Set the name (here we will use visitall as our example)
* Enable project-based security
    * For open source builds, give Anonymous the Job Read and Job Workspace permissions
    * Add users/groups as needed
* Source Code Management
    * Subversion
    * Put in your URL, e.g., https://ice.txcorp.com/svnrepos/code/visitall/trunk
    * Put in your svn credentials as requested
* Build Triggers (examples)
    * Build Periodically
        * Enter cron parameters, e.g., 0 20 * * *
    * Or have this build launched as a post-build step of another build
* Configuration Matrix
    * Add axis -> slaves (is this available before we add nodes?)
        * Add the master and the above Unix slave
* Build Environment
    * Abort the build if stuck (if desired)
        * Enter your timeout
    * Tie parent build to a node
        * Select the master node
* Build
    * Add build step -> Invoke XShell command
        * Command line: bilder/bildtrol/unibild -d mkvisitall
        * Executable is in workspace dir
* Post-build Actions
    * Aggregate downstream test results
        * Select both
    * Archive the artifacts (select; see below for settings)
    * E-mail Notification
        * Set as desired
## Creating a Windows Slave

* Get all tools in place on the slave machine by following the instructions at https://ice.txcorp.com/trac/bilder/wiki/BilderOnWindows
* Create the jenkins user account (if not already defined) as an Administrative account and log into the Windows machine as the jenkins user
* Make sure the slave's Windows name and its domain name are consistent.
* Install Java (see http://www.java.com) and update the path to include `C:\Windows\SYSWOW64` if on 64-bit Windows and then `C:\Program Files (x86)\Java\jre6\bin`
* Create the build directory (e.g., C:\winsame\jenkins)
* Set the owner of that directory to the jenkins user via properties->security->advanced->owner.
* Install the Visual C++ redistributables from http://www.microsoft.com/download/en/details.aspx?id=5582
* Follow the networking, registry, and security related instructions at https://wiki.jenkins-ci.org/display/JENKINS/Windows+slaves+fail+to+start+via+DCOM
* (Cribbing from https://issues.jenkins-ci.org/browse/JENKINS-12820)
* Start a web browser on the Windows slave and connect to the master Jenkins web page.
* Manage Jenkins -> Manage Nodes -> New Node
* Create a new node (slave node)
  * Fill in the name, choosing it to be the same as the Windows name of the slave
  * Dumb Slave
  * In the configure form, set
    * \# of executors: 1
    * Remote FS root: the directory created above (e.g., C:\winsame\jenkins)
    * Usage: Leave this machine for tied jobs only
    * Launch method: Launch slave agents via Java Web Start
* Launch the slave
* Press the newly appeared button: Launch by webstart
* A pop-up window appears with the message "Connected"
* In that pop-up window click File -> Install as Windows Service
* Find the jenkins service in the control panel and ensure that the owner is the jenkins user
* Set startup to Automatic
* Return to the browser and take the slave node offline in Jenkins
* Set the launch method to: Windows slave as a Windows Service
  * Advanced:
    * Administrative username (jenkins) _You may need to type it as computername\jenkins if you get an invalid service account error_
    * Password (set as selected in slave setup)
    * Use Administrator account given above
* Relaunch the slave node

### Use the Slave on the Master

You should now be able to select this slave as a build node.
## Launching Bilder through Jenkins

Jenkins runs Bilder through the scripts in the bildtrol subdir using the XShell command. The XShell command, when configured to launch _somescript_, actually invokes _somescript.bat_ on Windows and _somescript_ on Unix.  The Bilder _.bat_ scripts simply translate the arguments and use them in a call to _somescript_, which is run through Cygwin.

### Building and testing: jenkinsbild

The script, _jenkinsbild_, launches a build from Jenkins using the default scripts.  For this example, we consider building the visualization package _VisIt_, for which the repo is _https://ice.txcorp.com/svnrepos/code/visitall/trunk_.  This repo uses externals to bring in the VisIt source code.  In this case, the simplest XShell command is

~~~~~~
    bilder/jenkins/jenkinsbild mkvisitall
~~~~~~

which leads to execution of

~~~~~~
    ./mkvisitall-default.sh -t -Ci -b builds-internal -- -Z -w 7
~~~~~~

whose action is described in the rest of the Bilder documentation; in particular, testing is invoked (-t), packages and repos are installed in separate areas (-C), the _internal_ directory is used for repo installation (-i), no post-build action is done (-Z), and if a build less than 7 days old is found, the build is not executed (-w 7). The arguments after *--* are passed directly to the underlying Bilder script, mkvisitall.sh.

The _jenkinsbild_ script has very few options:

~~~~~~
    Usage: $0 [options]
    GENERAL OPTIONS
      -b ................ Use internal/volatile build directory naming
      -m ................ Use mingw on cygwin
      -n ................ Do not add tests
      -p ................ Print the command only.
      -s step ........... Sets which build step: 1 = internal, 2 = volatile.
      -2 ................ Sets build step to 2.
~~~~~~

At present, the internal/volatile build directory naming is in fact always in effect.  In this case, the first step (the default) builds in the subdir *builds-internal*, and the second step (selected with -2 or -s 2) builds in the subdir *builds-volatile*. Correspondingly, the repo installation directory is the *internal* directory on step 1 and the *volatile* directory on step 2.

Using mingw on cygwin (-m) is useful for codes that cannot build with Visual Studio.

Not adding the tests (-n) is useful in the many instances where one is counting on only a few hosts to do the testing.

The build step (-s 2 or -2) will build in *builds-volatile* and install in the volatile directory, but it also determines several options by looking at the email subject of any step-1 build.

This is geared towards a general philosophy of having two builds: a stable (or internal) build that is done more rarely, and a volatile build that is done every night. What is done in step 2 depends on the step-1 result, which can be determined from the email subject file left behind. There are four cases:

* Step 1 did nothing, as there was a sufficiently recent build.  Then step 2 does a full build with tests.
* Step 1 was fully successful, both builds and tests.  Then step 2 is not executed.
* Step 1 builds succeeded, but some tests failed (and so some packages were not installed).  Then step 2 is executed without testing, as that was done in step 1; this permits installation of the built but untested packages.
* Step 1 builds failed (and so the corresponding tests were not attempted).  Then step 2 is not executed, as it would fail as well.
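The four cases can be sketched as a Bash case statement. The keywords below are illustrative stand-ins for what Bilder actually extracts from the step-1 email subject, not its real parsing:

```shell
# Illustrative sketch of the step-2 decision logic.
step2Action() {  # $1 = step-1 outcome keyword (hypothetical)
  case $1 in
    recent)      echo "build with tests" ;;    # step 1 skipped a recent build
    success)     echo "skip" ;;                # builds and tests all passed
    testsfailed) echo "build without tests" ;; # install the untested packages
    buildfailed) echo "skip" ;;                # step 2 would fail too
    *)           echo "unknown" ;;
  esac
}
```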

The error code returned by jenkinsbild is success (0) even if only the builds succeeded but not the tests. This way the dashboard indicates jenkinsbild build success only. A subsequent job, jenkinstest, determines whether the tests passed by examining the email subjects left behind.

For either build step, one wants to archive the artifacts,

~~~~~~
mk*all.sh,jenkinsbild.log,builds-*/bilderenv.txt,builds-*/*-summary.txt,\
builds-*/*.log,builds-*/*-chain.txt,*/*-preconfig.sh,*/preconfig.txt,\
builds-*/*/*/*-config.sh,builds-*/*/*/*-config.txt,\
builds-*/*/*/*-build.sh,builds-*/*/*/*-build.txt,\
builds-*/*/*/*-test.sh,builds-*/*/*/*-test.txt,\
builds-*/*/*/*-submit.sh,builds-*/*/*/*-submit.txt,\
builds-*/*/*/*-install.sh,builds-*/*/*/*-install.txt,\
*tests/*-config.sh,*tests/*-config.txt,*tests/*-build.sh,\
*tests/*-build.txt,*tests/*-install.sh,*tests/*-install.txt,\
*tests/runtxtest-*.txt,*tests/*-txtest.log,\
builds-*/*/*/*-Darwin-*.dmg,builds-*/*/*/*-win_x??-*.exe,\
builds-*/*/*/*-Linux-x86*-*.tar.gz
~~~~~~

in order to collect all results of builds and tests and any created installers.

### Posting test results: jenkinstest
# Bilder Architecture

Bilder has a largely object-oriented structure, even though it is written in Bash. But like all (even OO) programs, it has a procedural aspect.  Further, it is task oriented (as opposed to event driven), with a clear start and conclusion. We will break this architecture down into three aspects: the task flow, the primary objects, and the procedures.

## Task flow

Bilder scripts, like mkvisitall.sh, begin by setting some identifying variables, BILDER_NAME, BILDER_PACKAGE, ORBITER_NAME, and then continue by sourcing bildall.sh, which brings in the Bilder infrastructure: initializations of variables and methods used for building, testing, and installing packages.

### Global method definition

The file, bildall.sh, brings in all of the global methods by first sourcing runr/runrfcns.sh, which contains the minimal methods for executing builds in job queues and reporting the results. It then obtains all of the more Bilder-specific methods by sourcing bildfcns.sh. These include generic methods for determining the build system, preconfiguring, configuring, building, testing (including running tests and collecting results), and installing. These files are the heart of Bilder, as they do all the heavy lifting.

A trivial but important method is _techo_, which prints output both to stdout and to a log file.  Another is _decho_, which does the same, but only if DEBUG=true, which is set by the _-d_ option.
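A minimal sketch of what such logging helpers can look like. This is not Bilder's actual code, and the LOGFILE variable name is an assumption:

```shell
# Sketch only: Bilder's real techo/decho have more machinery.
LOGFILE=${LOGFILE:-bilder.log}
DEBUG=${DEBUG:-false}

techo() {
  echo "$@" | tee -a "$LOGFILE"   # to stdout and appended to the log
}

decho() {
  if $DEBUG; then techo "$@"; fi  # prints only when -d set DEBUG=true
}
```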

### Option parsing

Options are parsed by sourcing bildopts.sh, which is itself sourced by bildall.sh.  It sets some basic variables derived from the command-line arguments, such as the installation directories, which it checks for writability. This file, bildopts.sh, is written in such a way that Bilder-derived scripts (like mkvisitall.sh) can add their own arguments.
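One way a common parser can let derived scripts add their own arguments is to route flags it does not own to a caller-supplied hook. This sketch is illustrative only; `parseExtraOption` and `EXTRA_OPTFLAGS` are hypothetical names, not Bilder's actual API:

```shell
# Hypothetical sketch: flags known to the common parser are handled here;
# flags a derived script declares via EXTRA_OPTFLAGS fall through to its
# parseExtraOption hook.
parseOptions() {
  local OPTIND=1 arg
  while getopts "dt${EXTRA_OPTFLAGS}" arg "$@"; do
    case $arg in
      d) DEBUG=true ;;
      t) TESTING=true ;;
      *) parseExtraOption "$arg" "$OPTARG" ;;  # defined by the derived script
    esac
  done
}
```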

### Initialization

Initialization is carried out by sourcing two files, bildinit.sh and bildvars.sh (both sourced by bildall.sh). The purpose of bildinit.sh is to handle timing, to clear out indicating variables (like PIDLIST and configFailures), to get the Bilder version, and to define any path-like environment variables that might get changed in the course of the run.

The purpose of bildvars.sh is to determine useful variables for the build.  Values come first from a possible machine file, then from OS-specific settings (AIX, CYGWIN, Darwin, or Linux; MinGW is a work in progress). Any still-unset variables are then set to default values. These variables contain the compilers for serial (front-end nodes), back-end nodes, parallel, and gcc (as some packages build only with gcc, and the names of the gcc compilers can vary from one system to another).  The flags for all of these compilers are set as well.

Some packages are so basic that Bilder defines variables for them.  These include HDF5, the linear algebra libraries (lapack, blas, atlas), and Boost. These definitions allow the locations of these libraries to be defined on a per-machine basis. This is needed particularly for LCFs, which have special builds of HDF5, BLAS, and LAPACK, and for CYGWIN, which must have Boost to make up for deficiencies in the Visual Studio compilers.

Finally, bildvars.sh prints out all of the determined values.

### Package building

A Bilder-derived script, like _mkvisitall.sh_, after sourcing _bildall.sh_, then builds packages in groups. In the simplest case, a package is built in a straight-through sequence, like

~~~~~~
    source $BILDER_TOPDIR/bilder/packages/facets.sh
    buildFacets
    testFacets
    installFacets
~~~~~~

(The call to _testFacets_ can be omitted if the package is not tested.)  The methods for building, testing, and installing a package are defined in the appropriate file under the packages subdirectory.

Bilder, however, has the capability of doing threaded builds, such as in the sequence,

~~~~~~
    source $BILDER_TOPDIR/bilder/packages/trilinos.sh
    buildTrilinos
    source $BILDER_TOPDIR/bilder/packages/txphysics.sh
    buildTxphysics
    source $BILDER_TOPDIR/bilder/packages/txbase.sh
    buildTxbase
    installTxbase
    installTxphysics
    installTrilinos
~~~~~~

In this case, the builds for _Trilinos_, _TxPhysics_, and _TxBase_ are all launched and so occur simultaneously. Then _installTxbase_ waits for the _TxBase_ build to complete, then installs it. Then it waits on and installs _TxPhysics_ and _Trilinos_.

This ability to build multiple, non-interdependent packages simultaneously is a key feature of Bilder. It leads to great savings in time, especially with packages that must be built serially due to a lack of dependency determination.
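The mechanism can be sketched with Bash background jobs and _wait_. Here `buildPackage` and `installPackage` are illustrative stand-ins for Bilder's per-package functions, and the "build" is just a placeholder command:

```shell
# Illustrative sketch of launch-in-background, install-after-wait.
buildPackage() {  # $1 = package name
  ( echo "built $1" > "/tmp/$1.out" ) &   # launch the build in a subshell
  eval "${1}_PID=$!"                      # record its PID, e.g. txbase_PID
}

installPackage() {  # $1 = package name
  local pidvar=${1}_PID
  wait "${!pidvar}"                       # block until that package's build finishes
  echo "installing $1"
}
```

Calling `buildPackage txbase; buildPackage trilinos` starts both builds at once; the subsequent `installPackage` calls then serialize the installs behind their respective builds.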
### Concluding

The last part of the task flow is to install the configuration files, to summarize the build, and to email and post the log files, build files, and the summary. The configuration files, which are created by _createConfigFiles_ and installed by _installConfigFiles_ into the installation directory, contain the necessary additions to environment variables to pick up the installed software.

The method, _finish_, then does the remaining tasks. It creates the summary file and emails it to the contact specified by the option parsing.  It then posts all log and build files to Orbiter.

## Package files

Package files define how a package is acquired, how it is configured for building on the particular platform for all builds, how all builds are done, and how they are all installed.  Here we introduce an important distinction: **tarball packages** are those obtained in tar.gz format; **repo packages** are obtained from a Subversion source code repo. Generic tarball packages are found in the Tech-X maintained Subversion repo at https://ice.txcorp.com/svnrepos/code/numpkgs and are available by anonymous svn. The repo packages are typically svn externals of a Bilder project, e.g.,

~~~~~~
    $ svn pg svn:externals .
    bilder svn://svn.code.sf.net/p/bilder/code/trunk
    visit http://portal.nersc.gov/svn/visit/trunk/src
    visitresources http://portal.nersc.gov/svn/visit/trunk/windowsbuild/resources
~~~~~~

Though written in _Bash_, Bilder uses object concepts. Each package file under packages acts as an object, with instantiation, exposed (public) data members, private data members, and methods.  As in OO, these **package-build** objects have the same data members and a common interface.

Instantiation is carried out by sourcing the package file. At this point, the data associated with that package is initialized as necessary.

### Package-build data

The public data members for a package _PKG_ are
~~~~~~
    PKG_BLDRVERSION # Either the version to install or the
                    # version from the code repository
    PKG_DEPS        # The dependencies of this package
    PKG_BUILDS      # The names of the builds for this package
    PKG_UMASK       # The "umask" that determines the permissions
                    # for installation of this package
~~~~~~

In the syntax of C++, the first underscore would be represented by '.', i.e., pkg.DEPS. Even dynamic binding can be implemented in _Bash_: if one has _pkgname_ holding the name of a package, one can extract, e.g., BLDRVERSION via

~~~~~~
    vervar=`echo $pkgname | tr 'a-z./-' 'A-Z___'`_BLDRVERSION
~~~~~~
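Combining this name mangling with Bash indirect expansion gives a runnable sketch. The package names and version values below are made-up examples following the PKG_BLDRVERSION convention:

```shell
# Example data following the PKG_BLDRVERSION convention (values invented)
TXBASE_BLDRVERSION=1.0.2
HDF5_PARALLEL_BLDRVERSION=1.8.7

getVersion() {  # $1 = package name, possibly containing . / -
  local vervar=$(echo "$1" | tr 'a-z./-' 'A-Z___')_BLDRVERSION
  echo "${!vervar}"   # bash indirect expansion reads the constructed name
}
```

Here `getVersion txbase` yields 1.0.2 and `getVersion hdf5-parallel` yields 1.8.7, since `-` is mangled to `_`.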
Admittedly, many of these constructs would more easily be accomplished in a language like Python that naturally supports object orientation. The trade-off is that one would then lose the nearly trivial expression of executable invocation and threading that one has in _Bash_.

In addition, there are the per-build, public variables _PKG_BUILD_OTHER_ARGS_ (e.g., _FACETS_PAR_OTHER_ARGS_ or _BABEL_STATIC_OTHER_ARGS_). These are added to the command line when configuring a package.  In some cases a package has more than one build system, like HDF5, in which case there are two sets of variables, e.g., _HDF5_PAR_CMAKE_OTHER_ARGS_ and _HDF5_PAR_CONFIG_OTHER_ARGS_.

### Exposed package-build methods

All package files are supposed to provide three methods, _buildPkg_, _testPkg_, and _installPkg_, where "Pkg" is the name of the package being built.  E.g., FACETS has buildFacets, testFacets, installFacets. For untested packages, the second method can simply be empty.
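Schematically, a package file for a hypothetical package "Mypkg" provides just this three-method interface. The bodies below are placeholders, not real build logic:

```shell
# Hypothetical skeleton of a package file; real package files
# contain acquisition, configuration, and threading logic.
buildMypkg() {
  echo "acquire, configure, and launch the builds of mypkg"
}

testMypkg() {
  :  # empty: mypkg is not tested
}

installMypkg() {
  echo "wait for the builds, then install mypkg"
}
```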
The method, _buildPkg_, is supposed to determine whether a package needs to be built.  If so, it should either acquire a tarball package or preconfigure (prepare the build system of) a repo package, then configure the package, and finally launch the builds of the package.  Preconfiguring, in the example of an _autotools_ package, involves invoking _autoreconf_ and other executables for creating the various configuration scripts.  In many other cases there is no associated action.  If the Bilder infrastructure is used, then all builds are executed in separate threads, and at the end of the _buildPkg_ method all the process IDs for these builds have been stored in the variable PIDLIST, with the particular process ID for build "ser" of package "pkg" stored in the variable PKG_SER_PID.

The method, _testPkg_, is supposed to determine whether a package is being tested.  If not, it simply returns.  But if the package is being tested, then _testPkg_ executes _wait_ for each build.  Upon successful completion of all builds, the tests are launched.  These are treated just like builds, so the process IDs are stored as in the case of builds.

The last method, _installPkg_, in the case of a tested package, waits for the tests to complete, then installs the package if the tests completed successfully, after which it marks the tests as installed, so that the tests will not be run again unless the version or dependencies of the package change.  In the other case, where the package is not being tested, it waits for the builds to complete and installs any successful builds.

All three methods for any package are supposed to compensate for any errors or omissions in the package's build system, such as fixing up library dependencies on Darwin, setting permissions of the installed software, and so forth.

The object-oriented analogy is that each package-build object has an interface with three methods.  The syntax translation is _buildPkg_ -> _pkg.build_.

### Private package-build data

In the course of its build, any package will generate other variables with values. These are organized on a per-build basis, and so one can think of each package-build object as containing an object for each build of that package.

### Internal objects

Builds

Tests

### Combined package objects

### Future directions

Dependency determination.
# Linear Algebra Libraries in Bilder

There are a wide variety of ways to get LAPACK and BLAS: Netlib's libraries (reference LAPACK and BLAS), CLapack (for when one does not have a Fortran compiler), ATLAS (for cpu-tuned libraries), GOTOBLAS (from TACC), and system libraries (MKL, ACML).

For numpy and all things that depend on it, Bilder uses ATLAS (if it has been built), and otherwise it uses LAPACK.

For non-Python packages, the current logic is:

## Darwin

Always use -framework Accelerate.

## Linux and Windows

* SYSTEM_LAPACK_SER_LIB and SYSTEM_BLAS_SER_LIB are used if set.
* Otherwise, if USE_ATLAS is true, then ATLAS is used.
* Otherwise, use Netlib LAPACK if that is found.
* Otherwise:
    * If on Windows, use CLAPACK
    * If on Linux, use any system blas/lapack

The results of the search are put into the variables CMAKE_LINLIB_SER_ARGS, CONFIG_LINLIB_SER_ARGS, and LINLIB_SER_LIBS.
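The cascade above can be sketched as a Bash function. The variable names SYSTEM_LAPACK_SER_LIB and USE_ATLAS mirror the text; HAVE_NETLIB_LAPACK and OS are assumed stand-ins, and the function itself is illustrative, not Bilder's code:

```shell
# Illustrative sketch of the Linux/Windows selection cascade.
chooseLapack() {
  if test -n "${SYSTEM_LAPACK_SER_LIB:-}"; then
    echo "system lapack/blas"
  elif ${USE_ATLAS:-false}; then
    echo "atlas"
  elif ${HAVE_NETLIB_LAPACK:-false}; then   # assumed stand-in for "Netlib LAPACK found"
    echo "netlib lapack"
  elif [ "${OS:-}" = CYGWIN ]; then
    echo "clapack"
  else
    echo "system-provided blas/lapack"
  fi
}
```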

# Extending Bilder

Bilder builds packages using the general logic in bildfcns.sh, the operating-system logic in bildvars.sh, the logic for a particular package in the _Bilder package file_ (kept in the packages subdir), the logic for a particular machine in the _Bilder machine file_ (kept in the machines subdir), and additional settings for a particular package on a particular machine, also in the Bilder machine file.  To extend Bilder, one adds the files that introduce the particular logic for a package or a machine.

* [[Adding support for a new package]]
* [[Adding support for a new machine or operating system/compiler combination]]

# Debugging your builds

This section describes some of the things that can go wrong and explains how to fix them.

# Bilder With IDEs

This is a page to collect notes on using IDEs to develop code while at the same time using Bilder to build the project.

* Reference for using Eclipse with CMake: http://www.vtk.org/Wiki/CMake:Eclipse_UNIX_Tutorial