Before following these steps, please coordinate with portmgr.
portmgr is still working on characterizing what a node needs to be generally useful.
CPU capacity: TBA. However, we have several dual-CPU P-III i386™ 1.0GHz machines available, so anything with less horsepower than that is less likely to be useful. (However, many of our Sparc64®s are single-CPU, 500MHz machines, so the requirements for those are lower.)
Note: We are able to adjust the number of jobs dispatched to each machine, and we generally tune the number to use 100% of CPU.
RAM: TBA. Again, we have been tuning to one job per 512M of RAM. (Anything less than 512M is very unlikely to be useful.)
disk: at least 20G is needed for the filesystem, and 32G for swap. Best performance will be obtained if multiple disks are used and configured as geom stripes. Performance numbers are also TBA.
Note: Package building will test disk drives to destruction. Be aware of what you are signing up for!
network bandwidth: TBA. However, an 8-job machine has been shown to saturate a cable modem line.
Pick a unique hostname. It does not have to be a publicly resolvable hostname (it can be a name on your internal network).
By default, package building requires the following TCP ports to be accessible: 22 (ssh), 414 (infoseek), and 8649 (ganglia). If these are not accessible, pick others and ensure that an ssh tunnel is set up (see below).
(Note: if you have more than one machine at your site, you will need an individual TCP port for each service on each machine, and thus ssh tunnels will be necessary. As such, you will probably need to configure port forwarding on your firewall.)
Decide if you will be booting natively or via pxeboot. You will find that it is easier to keep up with changes to -current with the latter, especially if you have multiple machines at your site.
Pick a directory to hold ports configuration and chroot subdirectories. It may be best to put this on its own partition. (Example: /usr2/.)
Create a directory to contain the latest -current source tree and check it out. (Since your machine will likely be asked to build packages for -current, the kernel it runs should be reasonably up-to-date with the bindist that will be exported by our scripts.)
If you are using pxeboot: create a directory to contain the install bits. You will probably want to use a subdirectory of /pxeroot, e.g., /pxeroot/${arch}-${branch}. Export that as DESTDIR.
If you are cross-building, export TARGET_ARCH=${arch}.
Note: The procedure for cross-building ports is not yet defined.
Generate a kernel config file. Include GENERIC (or, if you are using more than 3.5G of RAM on i386, PAE).
Required options:
options         NULLFS
options         TMPFS
Suggested options:
options         GEOM_CONCAT
options         GEOM_STRIPE
options         SHMMAXPGS=65536
options         SEMMNI=40
options         SEMMNS=240
options         SEMUME=40
options         SEMMNU=120
options         ALT_BREAK_TO_DEBUGGER
options         PRINTF_BUFR_SIZE=128
For PAE, it is not currently possible to load modules. Therefore, if you are running an architecture that supports Linux emulation, you will need to add:
options         COMPAT_LINUX
options         LINPROCFS
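Putting the required and suggested options together, a minimal kernel config file might look like the following (the ident string is illustrative):

```
include GENERIC
ident   PORTBUILD

options         NULLFS
options         TMPFS

options         GEOM_CONCAT
options         GEOM_STRIPE
options         SHMMAXPGS=65536
options         SEMMNI=40
options         SEMMNS=240
options         SEMUME=40
options         SEMMNU=120
options         ALT_BREAK_TO_DEBUGGER
options         PRINTF_BUFR_SIZE=128
```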
As root, do the usual build steps, e.g.:
make -j4 buildworld
make buildkernel KERNCONF=${kernconf}
make installkernel KERNCONF=${kernconf}
make installworld

The install steps use DESTDIR.
Customize files in etc/. Whether you do this on the client itself, or another machine, will depend on whether you are using pxeboot.
If you are using pxeboot: create a subdirectory of ${DESTDIR} called conf/. Create one subdirectory default/etc/, and (if your site will host multiple nodes), subdirectories ${ip-address}/etc/ to contain override files for individual hosts. (You may find it handy to symlink each of those directories to a hostname.) Copy the entire contents of ${DESTDIR}/etc/ to default/etc/; that is where you will edit your files. The by-ip-address etc/ directories will probably only need customized rc.conf files.
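For instance, with two client machines the conf/ tree might look like this (IP addresses are illustrative):

```
${DESTDIR}/conf/
    default/etc/          full copy of ${DESTDIR}/etc/; edit files here
    192.0.2.10/etc/       overrides for one host (typically just rc.conf)
    192.0.2.11/etc/       overrides for another host
    builder1 -> 192.0.2.10/   optional convenience symlink
```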
In either case, apply the following steps:
Create a ports-${arch} user and group. Add it to the wheel group. It can have the '*' password.
Create /home/ports-${arch}/.ssh/ and populate authorized_keys.
Also add the following users:
squid:*:100:100::0:0:User &:/usr/local/squid:/bin/sh
ganglia:*:102:102::0:0:User &:/usr/local/ganglia:/bin/sh
Add them to etc/group as well.
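The corresponding etc/group entries, with GIDs matching the passwd lines above, would be:

```
squid:*:100:
ganglia:*:102:
```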
Create the appropriate files in etc/.ssh/.
In etc/crontab: add
* * * * * root /var/portbuild/scripts/client-metrics
Create the appropriate etc/fstab. (If you have multiple, different, machines, you will need to put those in the override directories.)
In etc/inetd.conf: add
infoseek stream tcp nowait nobody /var/portbuild/scripts/reportload reportload ${arch}
We run the cluster on UTC:
cp /usr/share/zoneinfo/Etc/UTC etc/localtime
Create the appropriate etc/rc.conf. (If you are using pxeboot, and have multiple, different, machines, you will need to put those in the override directories.)
Recommended entries:
hostname="${hostname}"
inetd_enable="YES"
linux_enable="YES"
nfs_client_enable="YES"
ntpd_enable="YES"
ntpdate_enable="YES"
ntpdate_flags="north-america.pool.ntp.org"
sendmail_enable="NONE"
sshd_enable="YES"
sshd_program="/usr/local/sbin/sshd"
gmond_enable="YES"
squid_enable="YES"
squid_chdir="/usr2/squid/logs"
squid_pidfile="/usr2/squid/logs/squid.pid"
Create etc/resolv.conf, if necessary.
Modify etc/sysctl.conf:
9a10,30
> kern.corefile=/usr2/%N.core
> kern.sugid_coredump=1
> #debug.witness_ddb=0
> #debug.witness_watch=0
>
> # squid needs a lot of fds (leak?)
> kern.maxfiles=40000
> kern.maxfilesperproc=30000
>
> # Since the NFS root is static we don't need to check frequently for file changes
> # This saves >75% of NFS traffic
> vfs.nfs.access_cache_timeout=300
> debug.debugger_on_panic=1
>
> # For jailing
> security.jail.sysvipc_allowed=1
> security.jail.allow_raw_sockets=1
> security.jail.chflags_allowed=1
> security.jail.enforce_statfs=1
>
> vfs.lookup_shared=1
If desired, modify etc/syslog.conf to change the logging destinations to @pointyhat.freebsd.org.
Install the following ports:
net/rsync
security/openssh-portable (with HPN on)
security/sudo
sysutils/ganglia-monitor-core (with GMETAD off)
www/squid (with SQUID_AUFS on)
There is work in progress to create a meta-port, but it is not yet complete.
Customize files in usr/local/etc/. Whether you do this on the client itself, or another machine, will depend on whether you are using pxeboot.
Note: The trick of using conf override subdirectories is less effective here, because you would need to copy over all subdirectories of usr/. This is an implementation detail of how the pxeboot works.
Apply the following steps:
Modify usr/local/etc/gmond.conf:
21,22c21,22
< name = "unspecified"
< owner = "unspecified"
---
> name = "${arch} package build cluster"
> owner = "portmgr@FreeBSD.org"
24c24
< url = "unspecified"
---
> url = "http://pointyhat.freebsd.org"
If there are machines from more than one cluster in the same multicast domain (essentially, the same LAN), then change the multicast groups to different values (.71, .72, and so on).
Create usr/local/etc/rc.d/portbuild.sh, using the appropriate value for scratchdir:
#!/bin/sh
#
# Configure a package build system post-boot

scratchdir=/usr2

ln -sf ${scratchdir}/portbuild /var/

# Identify builds ready for use
cd /var/portbuild/${arch}
for i in */builds/*; do
  if [ -f ${i}/.ready ]; then
    mkdir /tmp/.setup-${i##*/}
  fi
done

# Flag that we are ready to accept jobs
touch /tmp/.boot_finished
Modify usr/local/etc/squid/squid.conf:
288,290c288,290
< #auth_param basic children 5
< #auth_param basic realm Squid proxy-caching web server
< #auth_param basic credentialsttl 2 hours
---
> auth_param basic children 5
> auth_param basic realm Squid proxy-caching web server
> auth_param basic credentialsttl 2 hours
611a612
> acl localnet src 127.0.0.0/255.0.0.0
655a657
> http_access allow localnet
2007a2011
> maximum_object_size 400 MB
2828a2838
> negative_ttl 0 minutes
Also, change usr/local to usr2 in cache_dir, access_log, cache_log, cache_store_log, pid_filename, netdb_filename, coredump_dir.
Finally, change the cache_dir storage scheme from ufs to aufs, which offers better performance.
Configure ssh: copy /etc/ssh to /usr/local/etc/ssh and add NoneEnabled yes to sshd_config.
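As a concrete sketch of that step (scratch directories stand in for /etc/ssh and usr/local/etc so the example is self-contained; on a real client you would operate on the actual paths):

```shell
#!/bin/sh
# Stand-ins for /etc/ssh and usr/local/etc; on a client, use the real paths.
etc_ssh=$(mktemp -d)
local_etc=$(mktemp -d)
printf 'Subsystem sftp /usr/libexec/sftp-server\n' > ${etc_ssh}/sshd_config

# Copy the system ssh configuration and enable the HPN "none" cipher.
cp -R ${etc_ssh} ${local_etc}/ssh
echo "NoneEnabled yes" >> ${local_etc}/ssh/sshd_config
```

NoneEnabled is an option provided by the HPN patches, which is why security/openssh-portable must be built with HPN on.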
Modify usr/local/etc/sudoers:
38a39,42
>
> # local changes for package building
> %wheel ALL=(ALL) ALL
> ports-${arch} ALL=(ALL) NOPASSWD: ALL
Change into the port/package directory you picked above, e.g., cd /usr2.
As root:
mkdir portbuild
chown ports-${arch}:ports-${arch} portbuild
mkdir pkgbuild
chown ports-${arch}:ports-${arch} pkgbuild
mkdir squid
mkdir squid/cache
mkdir squid/logs
chown -R squid:squid squid
If clients preserve /var/portbuild between boots then they must either preserve their /tmp, or revalidate their available builds at boot time (see the script on the amd64 machines). They must also clean up stale chroots from previous builds before creating /tmp/.boot_finished.
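A boot-time revalidation along those lines might look like the following sketch, which extends the portbuild.sh script above. A scratch directory stands in for /var/portbuild and /tmp, and the fabricated build directories are illustrative, so the sketch is self-contained:

```shell
#!/bin/sh
# Sketch: revalidate builds and clean stale state at boot when
# /var/portbuild is preserved across reboots.
base=$(mktemp -d)
portbuild=${base}/portbuild   # stand-in for /var/portbuild
tmp=${base}/tmp               # stand-in for /tmp
mkdir -p ${tmp}

# Fabricate one ready build and one stale one for the sketch.
mkdir -p ${portbuild}/i386/builds/8 ${portbuild}/i386/builds/old
touch ${portbuild}/i386/builds/8/.ready

# Clean up markers (and, on a real client, chroots) left over from
# the previous boot.
rm -rf ${tmp}/.setup-*

# Re-create markers only for builds that are still ready.
for i in ${portbuild}/*/builds/*; do
    if [ -f ${i}/.ready ]; then
        mkdir -p ${tmp}/.setup-${i##*/}
    fi
done

# Only now advertise that the client can accept jobs.
touch ${tmp}/.boot_finished
```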
Boot the client.
As root, initialize the squid directories:
squid -z
These steps need to be taken by a portmgr acting as root on pointyhat.
If any of the default TCP ports is not available (see above), you will need to create an ssh tunnel for it and include it in the appropriate crontab.
Add an entry to /home/ports-${arch}/.ssh/config to specify the public IP address, TCP port for ssh, username, and any other necessary information.
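For example, an entry along these lines (the host alias, address, and port numbers are all illustrative) also lets the tunnels for the infoseek and ganglia ports be expressed as LocalForward directives:

```
Host client1
    Hostname 192.0.2.10
    Port 2222
    User ports-i386
    # Tunnels for the infoseek and ganglia ports
    LocalForward 5414 localhost:414
    LocalForward 58649 localhost:8649
```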
Add the public IP address to /etc/hosts.allow. (Remember, multiple machines can be on the same IP address.)
Create /var/portbuild/${arch}/clients/bindist-${hostname}.tar.
Copy one of the existing ones as a template and unpack it in a temporary directory.
Customize etc/resolv.conf and etc/make.conf for the local site.
Tar it up and move it to the right location.
Hint: you will need one of these for each machine; however, if you have multiple machines at one site, you may be able to create a site-specific one and symlink to it.
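These steps might be sketched as follows; the template name, the hostname, and a scratch stand-in for /var/portbuild/${arch}/clients are all illustrative:

```shell
#!/bin/sh
# Scratch stand-in for /var/portbuild/${arch}/clients, plus a fabricated
# template tarball; on pointyhat an existing bindist-somehost.tar is used.
clients=$(mktemp -d)
tmpl=$(mktemp -d)
mkdir -p ${tmpl}/etc
printf 'nameserver 192.0.2.1\n' > ${tmpl}/etc/resolv.conf
: > ${tmpl}/etc/make.conf
tar -C ${tmpl} -cf ${clients}/bindist-template.tar .

# Unpack the template, customize for the local site, and repack
# under the new hostname.
hostname=newclient
work=$(mktemp -d)
tar -C ${work} -xf ${clients}/bindist-template.tar
printf 'nameserver 198.51.100.1\n' > ${work}/etc/resolv.conf
# (etc/make.conf would be edited here as well)
tar -C ${work} -cf ${clients}/bindist-${hostname}.tar .
```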
Create /var/portbuild/${arch}/portbuild-${hostname} using one of the existing ones as a guide. This file contains overrides to /var/portbuild/${arch}/portbuild.conf.
Suggested values:
disconnected=1
http_proxy="http://localhost:3128/"
squid_dir=/usr2/squid
scratchdir=/usr2/pkgbuild
client_user=ports-${arch}
sudo_cmd="sudo -H"
rsync_gzip=-z
infoseek_host=localhost
infoseek_port=${tunnelled-tcp-port}
Possible other values:
use_md_swap=1
md_size=9g
use_zfs=1
scp_cmd="/usr/local/bin/scp"
ssh_cmd="/usr/local/bin/ssh"
Add an appropriate data_source entry to /usr/local/etc/gmetad.conf:
data_source "arch/location Package Build Cluster" 30 hostname
You will need to restart gmetad.
These steps need to be taken by a portmgr acting as ports-arch on pointyhat.
Ensure that ssh is working by executing ssh hostname.
Populate /var/portbuild/scripts/ by running something like /var/portbuild/dosetupnode arch major latest hostname. Verify that you now have files in that directory.
Test the other TCP ports by executing telnet hostname portnumber. 414 (or its tunnel) should give you a few lines of status information including arch and osversion; 8649 should give you an XML response from ganglia.
This step needs to be taken by a portmgr acting as root on pointyhat.
Tell qmanager about the node. Example:
python /var/portbuild/evil/qmanager/qclient add name=uniquename arch=arch \
  osversion=osversion numcpus=number haszfs=0 online=1 domain=domain \
  primarypool=package pools="package all" maxjobs=1 acl="ports-arch,deny_all"
This, and other documents, can be downloaded from ftp://ftp.FreeBSD.org/pub/FreeBSD/doc/.
For questions about FreeBSD, read the documentation before contacting <questions@FreeBSD.org>.
For questions about this documentation, e-mail <doc@FreeBSD.org>.