Jim Mock Restructured, reorganized, and parts updated by Jordan Hubbard Original work by Poul-Henning Kamp John Polstra Nik Clayton Updating and Upgrading &os; Synopsis &os; is under constant development between releases. Some people prefer to use the officially released versions, while others prefer to keep in sync with the latest developments. However, even official releases are often updated with security and other critical fixes. Regardless of the version used, &os; provides all necessary tools to keep your system updated, and also allows for easy upgrades between versions. This chapter will help you decide if you want to track the development system, or stick with one of the released versions. The basic tools for keeping your system up to date are also presented. After reading this chapter, you will know: What utilities may be used to update the system and the Ports Collection. How to keep your system up to date with freebsd-update, CVSup, CVS, or CTM. How to compare the state of an installed system against a known pristine copy. How to keep your documentation up to date with CVSup or documentation ports. The difference between the two development branches: &os.stable; and &os.current;. How to rebuild and reinstall the entire base system with make buildworld (etc). Before reading this chapter, you should: Properly set up your network connection (). Know how to install additional third-party software (). Throughout this chapter, the cvsup command is used to obtain and update &os; sources. To use it, you will need to install the port or the package for net/cvsup (if you do not want to install the graphical cvsup client, you can just install the port net/cvsup-without-gui). You may wish to substitute this with &man.csup.1;, which is part of the base system. 
Tom Rhodes Written by Colin Percival Based on notes provided by FreeBSD Update Updating and Upgrading freebsd-update updating-upgrading Applying security patches is an important part of maintaining computer software, especially the operating system. For the longest time on &os; this process was not an easy one. Patches had to be applied to the source code, the code rebuilt into binaries, and then the binaries had to be re-installed. This is no longer the case as &os; now includes a utility simply called freebsd-update. This utility provides two separate functions. First, it allows for binary security and errata updates to be applied to the &os; base system without the build and install requirements. Second, the utility supports minor and major release upgrades. Binary updates are available for all architectures and releases currently supported by the security team. Before updating to a new release, the current release announcements should be reviewed as they may contain important information pertinent to the desired release. These announcements may be viewed at the following link: . If a crontab utilizing the features of freebsd-update exists, it must be disabled before the following operation is started. The Configuration File Some users may wish to tweak the default configuration file in /etc/freebsd-update.conf, allowing better control of the process. The options are very well documented, but the following few may require a bit more explanation: # Components of the base system which should be kept updated. Components src world kernel This parameter controls what parts of &os; will be kept up to date. The default is to update the source code, the entire base system, and the kernel. Components are the same as those available during the install, for instance, adding world/games here would allow game patches to be applied. Using src/bin would allow the source code in src/bin to be updated. 
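As a sketch, a narrowed-down Components line (the selection shown is illustrative, and the default is usually preferable, as explained next) would look like this in /etc/freebsd-update.conf:

```
# Example only: update the kernel and base system, but not the sources.
Components world kernel
```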
The best option is to leave this at the default, as changing it to include specific items will require the user to list every item they prefer to be updated. This could have disastrous consequences as source code and binaries may become out of sync. # Paths which start with anything matching an entry in an IgnorePaths # statement will be ignored. IgnorePaths Add paths, such as /bin or /sbin, to leave these specific directories untouched during the update process. This option may be used to prevent freebsd-update from overwriting local modifications. # Paths which start with anything matching an entry in an UpdateIfUnmodified # statement will only be updated if the contents of the file have not been # modified by the user (unless changes are merged; see below). UpdateIfUnmodified /etc/ /var/ /root/ /.cshrc /.profile Update configuration files in the specified directories only if they have not been modified. Any changes made by the user will invalidate the automatic updating of these files. There is another option, KeepModifiedMetadata, which will instruct freebsd-update to save the changes during the merge. # When upgrading to a new &os; release, files which match MergeChanges # will have any local changes merged into the version from the new release. MergeChanges /etc/ /var/named/etc/ List of directories with configuration files that freebsd-update should attempt merges in. The file merge process is a series of &man.diff.1; patches similar to &man.mergemaster.8; with fewer options; each merge is either accepted, opened in an editor, or causes freebsd-update to abort. When in doubt, back up /etc and just accept the merges. See for more information about the mergemaster command. # Directory in which to store downloaded updates and temporary # files used by &os; Update. # WorkDir /var/db/freebsd-update This directory is where all patches and temporary files will be placed. 
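Should the default location be too small, the work directory can be pointed at a larger file system with a single line; the path below is only an example:

```
# Example only: keep the update work files on a roomier partition.
WorkDir /usr/local/freebsd-update
```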
In cases where the user is doing a version upgrade, this location should have at least a gigabyte of disk space available. # When upgrading between releases, should the list of Components be # read strictly (StrictComponents yes) or merely as a list of components # which *might* be installed of which &os; Update should figure out # which actually are installed and upgrade those (StrictComponents no)? # StrictComponents no When set to yes, freebsd-update will assume that the Components list is complete and will not attempt to make changes outside of the list. Effectively, freebsd-update will attempt to update every file which belongs to the Components list. Security Patches Security patches are stored on a remote machine and may be downloaded and installed using the following command: &prompt.root; freebsd-update fetch &prompt.root; freebsd-update install If any kernel patches have been applied, the system will need a reboot. If all went well, the system should be patched and freebsd-update may be run as a nightly &man.cron.8; job. An entry in /etc/crontab would be sufficient to accomplish this task: @daily root freebsd-update cron This entry states that once every day, the freebsd-update utility will be run. In this way, using the cron sub-command, freebsd-update will only check whether updates exist. If patches exist, they will automatically be downloaded to the local disk but not applied. The root user will be sent an email so they may install them manually. If anything went wrong, freebsd-update has the ability to roll back the last set of changes with the following command: &prompt.root; freebsd-update rollback Once complete, the system should be restarted if the kernel or any kernel modules were modified. This will allow &os; to load the new binaries into memory. The freebsd-update utility can automatically update the GENERIC kernel only. If a custom kernel is in use, it will have to be rebuilt and reinstalled after freebsd-update finishes installing the rest of the updates. 
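The fetch and install steps can also be combined in a small wrapper script. The following is only a sketch: the FETCH_CMD and LOG variables exist purely so the commands can be substituted, and the "No updates needed" phrase is matched against typical freebsd-update output.

```shell
#!/bin/sh
# Sketch of a nightly wrapper around freebsd-update(8).  FETCH_CMD and
# LOG are overridable purely for illustration; on a real system the
# defaults apply.
FETCH_CMD=${FETCH_CMD:-freebsd-update}
LOG=${LOG:-/tmp/freebsd-update.log}

run_updates() {
    # Download pending patches without applying them.
    $FETCH_CMD fetch > "$LOG" 2>&1

    if grep -q 'No updates needed' "$LOG"; then
        echo "system is up to date"
    else
        # Apply the downloaded patches; a reboot is needed afterwards
        # if the kernel was among them.
        $FETCH_CMD install
        echo "updates installed; reboot if the kernel was patched"
    fi
}
```

On a real system such a wrapper would be called from &man.cron.8; in place of the plain freebsd-update cron entry. Keep in mind that, as noted above, only a GENERIC kernel is patched automatically; a custom kernel must still be rebuilt by hand.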
However, freebsd-update will detect and update the GENERIC kernel in /boot/GENERIC (if it exists), even if it is not the current (running) kernel of the system. It is a good idea to always keep a copy of the GENERIC kernel in /boot/GENERIC. It will be helpful in diagnosing a variety of problems, and in performing version upgrades using freebsd-update as described in . Unless the default configuration in /etc/freebsd-update.conf has been changed, freebsd-update will install the updated kernel sources along with the rest of the updates. Rebuilding and reinstalling your new custom kernel can then be performed in the usual way. The updates distributed via freebsd-update do not always involve the kernel. It will not be necessary to rebuild your custom kernel if the kernel sources have not been modified by the execution of freebsd-update install. However, freebsd-update will always update the /usr/src/sys/conf/newvers.sh file. The current patch level (as indicated by the -p number reported by uname -r) is obtained from this file. Rebuilding your custom kernel, even if nothing else changed, will allow &man.uname.1; to accurately report the current patch level of the system. This is particularly helpful when maintaining multiple systems, as it allows for a quick assessment of the updates installed in each one. Major and Minor Upgrades This process will remove old object files and libraries which will break most third party applications. It is recommended that all installed ports either be removed and re-installed or upgraded later using the ports-mgmt/portupgrade utility. Most users will want to run a test build using the following command: &prompt.root; portupgrade -af This will ensure everything will be re-installed correctly. Note that setting the BATCH environment variable to yes will answer yes to any prompts during this process, removing the need for manual intervention during the build process. 
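The BATCH behavior can be demonstrated with a tiny helper; this is only a sketch, and the CMD override exists solely so the long portupgrade run can be substituted:

```shell
# Sketch: run a command with BATCH=yes in its environment, so the
# ports framework answers "yes" to every prompt.
batch_rebuild() {
    # CMD defaults to the full rebuild; it is left unquoted on purpose
    # so the default splits into a command and its arguments.
    env BATCH=yes ${CMD:-portupgrade -af}
}
```

Invoked as plain batch_rebuild, this runs env BATCH=yes portupgrade -af, the unattended equivalent of the test build shown above.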
If a custom kernel is in use, the upgrade process is slightly more involved. A copy of the GENERIC kernel is needed, and it should be placed in /boot/GENERIC. If the GENERIC kernel is not already present in the system, it may be obtained using one of the following methods: If a custom kernel has only been built once, the kernel in /boot/kernel.old is actually the GENERIC one. Simply rename this directory to /boot/GENERIC. Assuming physical access to the machine is possible, a copy of the GENERIC kernel can be installed from the CD-ROM media. Insert your installation disc and use the following commands: &prompt.root; mount /cdrom &prompt.root; cd /cdrom/X.Y-RELEASE/kernels &prompt.root; ./install.sh GENERIC Replace X.Y-RELEASE with the actual version of the release you are using. The GENERIC kernel will be installed in /boot/GENERIC by default. Failing all the above, the GENERIC kernel may be rebuilt and installed from the sources: &prompt.root; cd /usr/src &prompt.root; env DESTDIR=/boot/GENERIC make kernel &prompt.root; mv /boot/GENERIC/boot/kernel/* /boot/GENERIC &prompt.root; rm -rf /boot/GENERIC/boot For this kernel to be picked up as GENERIC by freebsd-update, the GENERIC configuration file must not have been modified in any way. It is also suggested that it is built without any other special options (preferably with an empty /etc/make.conf). Rebooting to the GENERIC kernel is not required at this stage. Major and minor version updates may be performed by providing freebsd-update with a release version target, for example, the following command will update to &os; 8.1: &prompt.root; freebsd-update -r 8.1-RELEASE upgrade After the command has been received, freebsd-update will evaluate the configuration file and current system in an attempt to gather the information necessary to update the system. A screen listing will display what components have been detected and what components have not been detected. For example: Looking up update.FreeBSD.org mirrors... 
1 mirrors found. Fetching metadata signature for 8.0-RELEASE from update1.FreeBSD.org... done. Fetching metadata index... done. Inspecting system... done. The following components of FreeBSD seem to be installed: kernel/smp src/base src/bin src/contrib src/crypto src/etc src/games src/gnu src/include src/krb5 src/lib src/libexec src/release src/rescue src/sbin src/secure src/share src/sys src/tools src/ubin src/usbin world/base world/info world/lib32 world/manpages The following components of FreeBSD do not seem to be installed: kernel/generic world/catpages world/dict world/doc world/games world/proflibs Does this look reasonable (y/n)? y At this point, freebsd-update will attempt to download all files required for the upgrade. In some cases, the user may be prompted with questions regarding what to install or how to proceed. When using a custom kernel, the above step will produce a warning similar to the following: WARNING: This system is running a "MYKERNEL" kernel, which is not a kernel configuration distributed as part of FreeBSD 8.0-RELEASE. This kernel will not be updated: you MUST update the kernel manually before running "/usr/sbin/freebsd-update install" This warning may be safely ignored at this point. The updated GENERIC kernel will be used as an intermediate step in the upgrade process. After all patches have been downloaded to the local system, they will then be applied. This process may take a while depending on the speed and workload of the machine. Configuration files will then be merged — this part of the process requires some user intervention as a file may be merged or an editor may appear on screen for a manual merge. The results of every successful merge will be shown to the user as the process continues. A failed or ignored merge will cause the process to abort. Users may wish to make a backup of /etc and manually merge important files, such as master.passwd or group at a later time. 
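A quick safety copy of such files can be scripted ahead of time; the helper below is a sketch, and the paths in the usage comment are arbitrary:

```shell
# Sketch: archive a directory tree before upgrade merges are committed.
backup_tree() {
    # $1 = directory to save, $2 = gzipped tar archive to create
    tar -czf "$2" -C "$(dirname "$1")" "$(basename "$1")"
}

# Typical use before running "freebsd-update install":
# backup_tree /etc /var/backups/etc-pre-upgrade.tgz
```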
The system is not being altered yet; all patching and merging happens in a separate directory. When all patches have been applied successfully and all configuration files have been merged, the changes will need to be committed by the user. Once this process is complete, the upgrade may be committed to disk using the following command. &prompt.root; freebsd-update install The kernel and kernel modules will be patched first. At this point the machine must be rebooted. If the system was running with a custom kernel, use the &man.nextboot.8; command to set the kernel for the next boot to /boot/GENERIC (which was updated): &prompt.root; nextboot -k GENERIC Before rebooting with the GENERIC kernel, make sure it contains all drivers required for your system to boot properly (and connect to the network, if the machine that is being updated is accessed remotely). In particular, if the previously running custom kernel contained built-in functionality usually provided by kernel modules, make sure to temporarily load these modules into the GENERIC kernel using the /boot/loader.conf facility. You may also wish to disable non-essential services, disk and network mounts, etc. until the upgrade process is complete. The machine should now be restarted with the updated kernel: &prompt.root; shutdown -r now Once the system has come back online, freebsd-update will need to be started again. The state of the process has been saved; thus, freebsd-update will not start from the beginning, but will remove all old shared libraries and object files. To continue to this stage, issue the following command: &prompt.root; freebsd-update install Depending on whether any library version numbers were bumped, there may only be two install phases instead of three. All third party software will now need to be rebuilt and re-installed. This is required as installed software may depend on libraries which have been removed during the upgrade process. 
The ports-mgmt/portupgrade command may be used to automate this process. The following commands may be used to begin this process: &prompt.root; portupgrade -f ruby &prompt.root; rm /var/db/pkg/pkgdb.db &prompt.root; portupgrade -f ruby18-bdb &prompt.root; rm /var/db/pkg/pkgdb.db /usr/ports/INDEX-*.db &prompt.root; portupgrade -af Once this has completed, finish the upgrade process with a final call to freebsd-update. Issue the following command to tie up all loose ends in the upgrade process: &prompt.root; freebsd-update install If the GENERIC kernel was temporarily used, this is the time to build and install a new custom kernel in the usual way. Reboot the machine into the new &os; version. The process is complete. System State Comparison The freebsd-update utility may be used to test the state of the installed &os; version against a known good copy. This option evaluates the current version of system utilities, libraries, and configuration files. To begin the comparison, issue the following command: &prompt.root; freebsd-update IDS >> outfile.ids While the command name is IDS, it is in no way a replacement for an intrusion detection system such as security/snort. As freebsd-update stores data on disk, the possibility of tampering is evident. While this possibility may be reduced by using the kern.securelevel setting and storing the freebsd-update data on a read-only file system when not in use, a better solution would be to compare the system against a secure disk, such as a DVD or a securely stored external USB disk device. The system will now be inspected, and a list of files along with their &man.sha256.1; hash values, both the known value in the release and the current installed value, will be printed. This is why the output has been sent to the outfile.ids file. It scrolls by too quickly for comparison by eye, and soon it fills up the console buffer. These lines are also extremely long, but the output format may be parsed quite easily. 
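For example, a small helper can pull out the first column (the file path) while skipping entries that are expected to differ on most systems; this is only a sketch, and the skip patterns are illustrative (IDSIgnorePaths, described below, is the proper mechanism for permanent exclusions):

```shell
# Sketch: print the changed paths from a saved IDS run, skipping files
# that are expected to differ locally.  The patterns are examples only.
filter_ids() {
    awk '{ print $1 }' "$1" | grep -v -E '^/etc/(passwd|master\.passwd|motd)$'
}

# Typical use:
# filter_ids outfile.ids
```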
For instance, to obtain a list of all files different from those in the release, issue the following command: &prompt.root; cat outfile.ids | awk '{ print $1 }' | more /etc/master.passwd /etc/motd /etc/passwd /etc/pf.conf This output has been truncated; many more files exist. Some of these files have natural modifications; /etc/passwd, for example, has been modified because users have been added to the system. In some cases, there may be other files, such as kernel modules, which differ as freebsd-update may have updated them. To exclude specific files or directories, add them to the IDSIgnorePaths option in /etc/freebsd-update.conf. This system may be used as part of an elaborate upgrade method, aside from the previously discussed version. Tom Rhodes Written by Colin Percival Based on notes provided by Portsnap: A Ports Collection Update Tool Updating and Upgrading Portsnap Updating and Upgrading The base system of &os; includes a utility for updating the Ports Collection as well: &man.portsnap.8;. Upon execution, it will connect to a remote site, verify the secure key, and download a new copy of the Ports Collection. The key is used to verify the integrity of all downloaded files, ensuring they have not been modified in-flight. To download the latest Ports Collection files, issue the following command: &prompt.root; portsnap fetch Looking up portsnap.FreeBSD.org mirrors... 9 mirrors found. Fetching snapshot tag from geodns-1.portsnap.freebsd.org... done. Fetching snapshot metadata... done. Updating from Tue May 22 02:12:15 CEST 2012 to Wed May 23 16:28:31 CEST 2012. Fetching 3 metadata patches.. done. Applying metadata patches... done. Fetching 3 metadata files... done. Fetching 90 patches.....10....20....30....40....50....60....70....80....90. done. Applying patches... done. Fetching 133 new ports or files... done. What this example shows is that &man.portsnap.8; has found and verified several patches to the current ports data. 
This also indicates that the utility has been run previously; had this been a first-time run, the collection would simply have been downloaded. When &man.portsnap.8; successfully completes a fetch operation, a verified copy of the Ports Collection and any subsequent patches exist on the local system. The first time portsnap is executed, you have to use extract to install the downloaded files: &prompt.root; portsnap extract /usr/ports/.cvsignore /usr/ports/CHANGES /usr/ports/COPYRIGHT /usr/ports/GIDs /usr/ports/KNOBS /usr/ports/LEGAL /usr/ports/MOVED /usr/ports/Makefile /usr/ports/Mk/bsd.apache.mk /usr/ports/Mk/bsd.autotools.mk /usr/ports/Mk/bsd.cmake.mk ... To update an already installed Ports Collection, use the command portsnap update: &prompt.root; portsnap update The process is now complete, and applications may be installed or upgraded using the updated Ports Collection. The fetch and extract or update operations may be run consecutively, as shown in the following example: &prompt.root; portsnap fetch update This command will download the latest version of the Ports Collection and update your local version under /usr/ports. Updating the Documentation Set Updating and Upgrading Documentation Updating and Upgrading Besides the base system and the Ports Collection, documentation is an integral part of the &os; operating system. While an up-to-date version of the &os; Documentation Set is always available on the &os; web site, some users might have slow or no permanent network connectivity at all. Fortunately, there are several ways to update the documentation shipped with each release by maintaining a local copy of the latest &os; Documentation Set. Using CVSup to Update the Documentation The sources and the installed copy of the &os; documentation can be updated with CVSup, using a mechanism similar to the one employed for the base system sources (cf. ). 
This section describes: How to install the documentation toolchain, the tools that are required to rebuild the &os; documentation from its source. How to download a copy of the documentation source into /usr/doc, using CVSup. How to rebuild the &os; documentation from its source, and install it under /usr/share/doc. Some of the build options that are supported by the build system of the documentation, i.e., the options that build only some of the different language translations of the documentation or the options that select a specific output format. Installing CVSup and the Documentation Toolchain Rebuilding the &os; documentation from source requires a fairly large collection of tools. These tools are not part of the &os; base system, because they need a large amount of disk space and they are not useful to all &os; users; they are only useful to those users that are actively writing new documentation for &os; or are frequently updating their documentation from source. All the required tools are available as part of the Ports Collection. The textproc/docproj port is a master port that has been developed by the &os; Documentation Project, to ease the initial installation and future updates of these tools. When no &postscript; or PDF documentation is required, one might consider installing the textproc/docproj-nojadetex port instead. This version of the documentation toolchain includes everything except the teTeX typesetting engine. teTeX is a very large collection of tools, so it may be quite sensible to omit its installation if PDF output is not really necessary. For more information on installing and using CVSup, see Using CVSup. Updating the Documentation Sources The CVSup utility can fetch a clean copy of the documentation sources, using the /usr/share/examples/cvsup/doc-supfile file as a configuration template. 
The default update host is set to a placeholder value in doc-supfile, but &man.cvsup.1; accepts a host name through the command line, so the documentation sources can be fetched from one of the CVSup servers by typing: &prompt.root; cvsup -h cvsup.FreeBSD.org -g -L 2 /usr/share/examples/cvsup/doc-supfile Change cvsup.FreeBSD.org to the nearest CVSup server. See for a complete listing of mirror sites. The initial download of the documentation sources may take a while. Let it run until it completes. Future updates of the documentation sources may be fetched by running the same command. The CVSup utility downloads and copies only the updates since the last time it ran, so every run of CVSup after the first complete run should be pretty fast. After checking out the sources, an alternative way of updating the documentation is supported by the Makefile of the /usr/doc directory. By setting SUP_UPDATE, SUPHOST and DOCSUPFILE in the /etc/make.conf file, it is possible to run: &prompt.root; cd /usr/doc &prompt.root; make update A typical set of these &man.make.1; options for /etc/make.conf is: SUP_UPDATE= yes SUPHOST?= cvsup.freebsd.org DOCSUPFILE?= /usr/share/examples/cvsup/doc-supfile Setting the SUPHOST and DOCSUPFILE values with ?= permits overriding them on the command line of make. This is the recommended way of adding options to make.conf, to avoid having to edit the file every time a different option value has to be tested. Tunable Options of the Documentation Sources The updating and build system of the &os; documentation supports a few options that ease the process of updating only parts of the documentation, or the build of specific translations. These options can be set either as system-wide options in the /etc/make.conf file, or as command-line options passed to the &man.make.1; utility. The following options are some of these: DOC_LANG The list of languages and encodings to build and install, e.g., en_US.ISO8859-1 for the English documentation only. 
FORMATS A single format or a list of output formats to be built. Currently, html, html-split, txt, ps, pdf, and rtf are supported. SUPHOST The hostname of the CVSup server to use when updating. DOCDIR Where to install the documentation. It defaults to /usr/share/doc. For more make variables supported as system-wide options in &os;, see &man.make.conf.5;. For more make variables supported by the build system of the &os; documentation, please refer to the &os; Documentation Project Primer for New Contributors. Installing the &os; Documentation from Source When an up-to-date snapshot of the documentation sources has been fetched in /usr/doc, everything is ready for an update of the installed documentation. A full update of all the languages defined in the DOC_LANG makefile option may be done by typing: &prompt.root; cd /usr/doc &prompt.root; make install clean If make.conf has been set up with the correct DOCSUPFILE, SUPHOST and SUP_UPDATE options, the install step may be combined with an update of the documentation sources by typing: &prompt.root; cd /usr/doc &prompt.root; make update install clean If an update of only a specific language is desired, &man.make.1; can be invoked in a language specific subdirectory of /usr/doc, i.e.: &prompt.root; cd /usr/doc/en_US.ISO8859-1 &prompt.root; make update install clean The output formats that will be installed may be specified by setting the FORMATS make variable, i.e.: &prompt.root; cd /usr/doc &prompt.root; make FORMATS='html html-split' install clean Marc Fonvieille Based on the work of Using Documentation Ports Updating and Upgrading documentation package Updating and Upgrading In the previous section, we have presented a method for updating the &os; documentation from sources. Source based updates may not be feasible or practical for all &os; systems though. 
Building the documentation sources requires a fairly large collection of tools and utilities, the documentation toolchain, a certain level of familiarity with CVS and source checkouts from a repository, and a few manual steps to build the checked out sources. In this section, we describe an alternative way of updating the installed copies of the &os; documentation; one that uses the Ports Collection and makes it possible to: Download and install pre-built snapshots of the documentation, without having to locally build anything (thus eliminating the need for an installation of the entire documentation toolchain). Download the documentation sources and build them through the ports framework (making the checkout and build steps a bit easier). These two methods of updating the &os; documentation are supported by a set of documentation ports, updated by the &a.doceng; on a monthly basis. These are listed in the &os; Ports Collection, under the virtual category named docs. Building and Installing Documentation Ports The documentation ports use the ports building framework to make documentation builds easier. They automate the process of checking out the documentation source, running &man.make.1; with the appropriate environment settings and command-line options, and they make the installation or deinstallation of documentation as easy as the installation of any other &os; port or package. As an extra feature, when the documentation ports are built locally, they record a dependency on the documentation toolchain ports, so the latter is automatically installed too. Organization of the documentation ports is as follows: There is a master port, misc/freebsd-doc-en, where the documentation port files can be found. It is the base of all documentation ports. By default, it builds the English documentation only. There is an all-in-one port, misc/freebsd-doc-all, which builds and installs all documentation in all available languages. 
Finally, there is a slave port for each translation, e.g.: misc/freebsd-doc-hu for the Hungarian-language documents. All of them depend on the master port and install the translated documentation of the respective language. To install a documentation port from source, issue the following commands (as root): &prompt.root; cd /usr/ports/misc/freebsd-doc-en &prompt.root; make install clean This will build and install the English documentation in split HTML format (the same as used on ) in the /usr/local/share/doc/freebsd directory. Common Knobs and Options There are many options for modifying the default behavior of the documentation ports. The following is just a short list: WITH_HTML Allows the build of the HTML format: a single HTML file per document. The formatted documentation is saved to a file called article.html, or book.html, as appropriate, plus images. WITH_PDF Allows the build of the &adobe; Portable Document Format, for use with &adobe; &acrobat.reader;, Ghostscript or other PDF readers. The formatted documentation is saved to a file called article.pdf or book.pdf, as appropriate. DOCBASE Where to install the documentation. It defaults to /usr/local/share/doc/freebsd. Notice that the default target directory differs from the directory used by the CVSup method. This is because we are installing a port, and ports are usually installed under the /usr/local directory. This can be overridden by adding the PREFIX variable. Here is a brief example on how to use the variables mentioned above to install the Hungarian documentation in Portable Document Format: &prompt.root; cd /usr/ports/misc/freebsd-doc-hu &prompt.root; make -DWITH_PDF DOCBASE=share/doc/freebsd/hu install clean Using Documentation Packages Building the documentation ports from source, as described in the previous section, requires a local installation of the documentation toolchain and a bit of disk space for the build of the ports. 
When resources are not available to install the documentation toolchain, or when the build from sources would take too much disk space, it is still possible to install pre-built snapshots of the documentation ports. The &a.doceng; prepares monthly snapshots of the &os; documentation packages. These binary packages can be used with any of the bundled package tools, like &man.pkg.add.1;, &man.pkg.delete.1;, and so on. When binary packages are used, the &os; documentation will be installed in all available formats for the given language. For example, the following command will install the latest pre-built package of the Hungarian documentation: &prompt.root; pkg_add -r hu-freebsd-doc Packages use a name format that differs from the corresponding port's name: lang-freebsd-doc. Here lang is the short format of the language code, i.e., hu for Hungarian, or zh_cn for Simplified Chinese. Updating Documentation Ports To update a previously installed documentation port, any tool suitable for updating ports is sufficient. For example, the following command updates the installed Hungarian documentation via the ports-mgmt/portupgrade tool by using packages only: &prompt.root; portupgrade -PP hu-freebsd-doc Pav Lucistnik Based on information provided by Using Docsnap Updating and Upgrading Docsnap Updating and Upgrading Docsnap is an &man.rsync.1; repository for updating installed &os; Documentation in a relatively easy and fast way. A Docsnap server tracks the documentation sources, and builds them in HTML format every hour. The textproc/docproj port is not needed with Docsnap, as only patches to the already built documentation are distributed. The only requirement for using this technique is the net/rsync port or package. To add it, use the following command: &prompt.root; pkg_add -r rsync Docsnap was originally developed for updating documentation installed to /usr/share/doc, but the following examples could be adapted for other directories as well. 
For user directories, it does not require root privileges. To update the documentation set, issue the following command: &prompt.root; rsync -rltvz docsnap.sk.FreeBSD.org::docsnap /usr/share/doc There is only one Docsnap server at the moment: docsnap.sk.FreeBSD.org, shown above. Do not use the --delete flag here as there are some items installed into /usr/share/doc during make installworld, which would accidentally be removed. To clean up, use this command instead: &prompt.root; rsync -rltvz --delete docsnap.sk.FreeBSD.org::docsnap/??_??\.\* /usr/share/doc If a subset of documentation needs to be updated, for example, the English documentation only, the following command should be used: &prompt.root; rsync -rltvz docsnap.sk.FreeBSD.org::docsnap/en_US.ISO8859-1 /usr/share/doc Tracking a Development Branch -CURRENT -STABLE There are two development branches to FreeBSD: &os.current; and &os.stable;. This section will explain a bit about each and describe how to keep your system up-to-date with each respective tree. &os.current; will be discussed first, then &os.stable;. Staying Current with &os; As you read this, keep in mind that &os.current; is the bleeding edge of &os; development. &os.current; users are expected to have a high degree of technical skill, and should be capable of solving difficult system problems on their own. If you are new to &os;, think twice before installing it. What Is &os.current;? snapshot &os.current; is the latest working sources for &os;. This includes work in progress, experimental changes, and transitional mechanisms that might or might not be present in the next official release of the software. While many &os; developers compile the &os.current; source code daily, there are periods of time when the sources are not buildable. These problems are resolved as expeditiously as possible, but whether &os.current; brings disaster or greatly desired functionality can be a matter of exactly which moment you grabbed the source code!
Who Needs &os.current;? &os.current; is made available for 3 primary interest groups: Members of the &os; community who are actively working on some part of the source tree and for whom keeping current is an absolute requirement. Members of the &os; community who are active testers, willing to spend time solving problems in order to ensure that &os.current; remains as sane as possible. These are also people who wish to make topical suggestions on changes and the general direction of &os;, and submit patches to implement them. Those who merely wish to keep an eye on things, or to use the current sources for reference purposes (e.g., for reading, not running). These people also make the occasional comment or contribute code. What Is &os.current; <emphasis>Not</emphasis>? A fast-track to getting pre-release bits because you heard there is some cool new feature in there and you want to be the first on your block to have it. Being the first on the block to get the new feature means that you are the first on the block to get the new bugs. A quick way of getting bug fixes. Any given version of &os.current; is just as likely to introduce new bugs as to fix existing ones. In any way officially supported. We do our best to help people genuinely in one of the 3 legitimate &os.current; groups, but we simply do not have the time to provide tech support. This is not because we are mean and nasty people who do not like helping people out (we would not even be doing &os; if we were). We simply cannot answer hundreds of messages a day and work on FreeBSD! Given the choice between improving &os; and answering lots of questions on experimental code, the developers opt for the former. Using &os.current; -CURRENT using Join the &a.current.name; and the &a.svn-src-head.name; lists. This is not just a good idea, it is essential.
If you are not on the &a.current.name; list, you will not see the comments that people are making about the current state of the system and thus will probably end up stumbling over a lot of problems that others have already found and solved. Even more importantly, you will miss out on important bulletins which may be critical to your system's continued health. The &a.svn-src-head.name; list will allow you to see the commit log entry for each change as it is made, along with any pertinent information on possible side-effects. To join these lists, or one of the others available, go to &a.mailman.lists.link; and click on the list that you wish to subscribe to. Instructions on the rest of the procedure are available there. If you are interested in tracking changes for the whole source tree, we would recommend subscribing to the &a.svn-src-all.name; list. Grab the sources from a &os; mirror site. You can do this in one of two ways: cvsup cron -CURRENT Syncing with CVSup Use the cvsup program with the supfile named standard-supfile available from /usr/share/examples/cvsup. This is the most recommended method, since it allows you to grab the entire collection once and then only what has changed from then on. Many people run cvsup from cron and keep their sources up-to-date automatically. You have to customize the sample supfile above, and configure cvsup for your environment. The sample standard-supfile is intended for tracking a specific security branch of &os;, and not &os.current;. You will need to edit this file and replace the following line: *default release=cvs tag=RELENG_X_Y With this one: *default release=cvs tag=. For a detailed explanation of usable tags, please refer to the Handbook's CVS Tags section. -CURRENT Syncing with CTM Use the CTM facility. If you have very bad connectivity (high-price connections or only email access), CTM is an option. However, it is a lot of hassle and can give you broken files.
This leads to it being rarely used, which again increases the chance of it not working for fairly long periods of time. We recommend using CVSup for anybody with a 9600 bps modem or faster connection. If you are grabbing the sources to run, and not just look at, then grab all of &os.current;, not just selected portions. The reason for this is that various parts of the source depend on updates elsewhere, and trying to compile just a subset is almost guaranteed to get you into trouble. -CURRENT compiling Before compiling &os.current;, read the Makefile in /usr/src carefully. You should at least install a new kernel and rebuild the world the first time through as part of the upgrading process. Reading the &a.current; and /usr/src/UPDATING will keep you up-to-date on other bootstrapping procedures that sometimes become necessary as we move toward the next release. Be active! If you are running &os.current;, we want to know what you have to say about it, especially if you have suggestions for enhancements or bug fixes. Suggestions with accompanying code are received most enthusiastically! Staying Stable with &os; What Is &os.stable;? -STABLE &os.stable; is our development branch from which major releases are made. Changes go into this branch at a different pace, and with the general assumption that they have first gone into &os.current; for testing. This is still a development branch, however, and this means that at any given time, the sources for &os.stable; may or may not be suitable for any particular purpose. It is simply another engineering development track, not a resource for end-users. Who Needs &os.stable;? If you are interested in tracking or contributing to the FreeBSD development process, especially as it relates to the next point release of FreeBSD, then you should consider following &os.stable;. While it is true that security fixes also go into the &os.stable; branch, you do not need to track &os.stable; to do this. 
Every security advisory for FreeBSD explains how to fix the problem for the releases it affects (strictly speaking, for the releases still supported: we cannot continue to support old releases of FreeBSD forever, although we do support them for many years; for a complete description of the current security policy for old releases of FreeBSD, please see http://www.FreeBSD.org/security/), and tracking an entire development branch just for security reasons is likely to bring in a lot of unwanted changes as well. Although we endeavor to ensure that the &os.stable; branch compiles and runs at all times, this cannot be guaranteed. In addition, while code is developed in &os.current; before including it in &os.stable;, more people run &os.stable; than &os.current;, so it is inevitable that bugs and corner cases will sometimes be found in &os.stable; that were not apparent in &os.current;. For these reasons, we do not recommend that you blindly track &os.stable;, and it is particularly important that you do not update any production servers to &os.stable; without first thoroughly testing the code in your development environment. If you do not have the resources to do this, then we recommend that you run the most recent release of FreeBSD, and use the binary update mechanism to move from release to release. Using &os.stable; -STABLE using Join the &a.stable.name; list. This will keep you informed of build-dependencies that may appear in &os.stable; or any other issues requiring special attention. Developers will also make announcements in this mailing list when they are contemplating some controversial fix or update, giving the users a chance to respond if they have any issues to raise concerning the proposed change. Join the relevant SVN list for the branch you are tracking. For example, if you are tracking the 7-STABLE branch, join the &a.svn-src-stable-7.name; list.
This will allow you to view the commit log entry for each change as it is made, along with any pertinent information on possible side-effects. To join these lists, or one of the others available, go to &a.mailman.lists.link; and click on the list that you wish to subscribe to. Instructions on the rest of the procedure are available there. If you are interested in tracking changes for the whole source tree, we would recommend subscribing to the &a.svn-src-all.name; list. If you are going to install a new system and want it to run a monthly snapshot built from &os.stable;, please check the Snapshots web page for more information. Alternatively, it is possible to install the most recent &os.stable; release from the mirror sites and follow the instructions below to upgrade your system to the most up-to-date &os.stable; source code. If you are already running a previous release of &os; and wish to upgrade via sources, then you can easily do so from a &os; mirror site. This can be done in one of two ways: cvsup cron -STABLE syncing with CVSup Use the cvsup program with the supfile named stable-supfile from the directory /usr/share/examples/cvsup. This is the most recommended method, since it allows you to grab the entire collection once and then only what has changed from then on. Many people run cvsup from cron to keep their sources up-to-date automatically. You have to customize the sample supfile above, and configure cvsup for your environment. -STABLE syncing with CTM Use the CTM facility. If you do not have a fast and inexpensive connection to the Internet, this is the method you should consider using. Essentially, if you need rapid on-demand access to the source and communications bandwidth is not a consideration, use cvsup or ftp. Otherwise, use CTM. -STABLE compiling Before compiling &os.stable;, read the Makefile in /usr/src carefully. You should at least install a new kernel and rebuild the world the first time through as part of the upgrading process.
Reading the &a.stable; and /usr/src/UPDATING will keep you up-to-date on other bootstrapping procedures that sometimes become necessary as we move toward the next release. Synchronizing Your Source There are various ways of using an Internet (or email) connection to stay up-to-date with any given area of the &os; project sources, or all areas, depending on what interests you. The primary services we offer are Anonymous CVS, CVSup, and CTM. While it is possible to update only parts of your source tree, the only supported update procedure is to update the entire tree and recompile both userland (i.e., all the programs that run in user space, such as those in /bin and /sbin) and kernel sources. Updating only part of your source tree, only the kernel, or only userland will often result in problems. These problems may range from compile errors to kernel panics or data corruption. CVS anonymous Anonymous CVS and CVSup use the pull model of updating sources. In the case of CVSup the user (or a cron script) invokes the cvsup program, and it interacts with a cvsupd server somewhere to bring your files up-to-date. The updates you receive are up-to-the-minute and you get them when, and only when, you want them. You can easily restrict your updates to the specific files or directories that are of interest to you. Updates are generated on the fly by the server, according to what you have and what you want to have. Anonymous CVS is quite a bit simpler than CVSup in that it is just an extension to CVS which allows it to pull changes directly from a remote CVS repository. CVSup can do this far more efficiently, but Anonymous CVS is easier to use. CTM CTM, on the other hand, does not interactively compare the sources you have with those on the master archive or otherwise pull them across.
Instead, a script which identifies changes in files since its previous run is executed several times a day on the master CTM machine, any detected changes being compressed, stamped with a sequence-number and encoded for transmission over email (in printable ASCII only). Once received, these CTM deltas can then be handed to the &man.ctm.rmail.1; utility which will automatically decode, verify and apply the changes to the user's copy of the sources. This process is far more efficient than CVSup, and places less strain on our server resources since it is a push rather than a pull model. There are other trade-offs, of course. If you inadvertently wipe out portions of your archive, CVSup will detect and rebuild the damaged portions for you. CTM will not do this, and if you wipe some portion of your source tree out (and do not have it backed up) then you will have to start from scratch (from the most recent CVS base delta) and rebuild it all with CTM or, with Anonymous CVS, simply delete the bad bits and resync. Rebuilding <quote>world</quote> Rebuilding world Once you have synchronized your local source tree against a particular version of &os; (&os.stable;, &os.current;, and so on) you can then use the source tree to rebuild the system. Make a Backup It cannot be stressed enough how important it is to make a backup of your system before you do this. While rebuilding the world is (as long as you follow these instructions) an easy task to do, there will inevitably be times when you make mistakes, or when mistakes made by others in the source tree render your system unbootable. Make sure you have taken a backup. And have a fixit floppy or bootable CD at hand. You will probably never have to use it, but it is better to be safe than sorry! Subscribe to the Right Mailing List mailing list The &os.stable; and &os.current; branches are, by their nature, in development. People that contribute to &os; are human, and mistakes occasionally happen. 
Sometimes these mistakes can be quite harmless, just causing your system to print a new diagnostic warning. Or the change may be catastrophic, and render your system unbootable or destroy your file systems (or worse). If problems like these occur, a heads up is posted to the appropriate mailing list, explaining the nature of the problem and which systems it affects. And an all clear announcement is posted when the problem has been solved. If you try to track &os.stable; or &os.current; and do not read the &a.stable; or the &a.current; respectively, then you are asking for trouble. Do not use <command>make world</command> A lot of older documentation recommends using make world for this. Doing that skips some important steps and should only be used if you are sure of what you are doing. For almost all circumstances make world is the wrong thing to do, and the procedure described here should be used instead. The Canonical Way to Update Your System To update your system, you should check /usr/src/UPDATING for any pre-buildworld steps necessary for your version of the sources and then use the procedure outlined here. These upgrade steps assume that you are currently using an old &os; version, consisting of an old compiler, old kernel, old world and old configuration files. By world here we mean the core system binaries, libraries and programming files. The compiler is part of world, but has a few special concerns. We also assume that you have already obtained the sources to a newer system. If the sources available on the particular system are old too, see for detailed help about synchronizing them to a newer version. Updating the system from sources is a bit more subtle than it might initially seem to be, and the &os; developers have found it necessary over the years to change the recommended approach fairly dramatically as new kinds of unavoidable dependencies come to light. 
The rest of this section describes the rationale behind the currently recommended upgrade sequence. Any successful update sequence must deal with the following issues: The old compiler might not be able to compile the new kernel. (Old compilers sometimes have bugs.) So, the new kernel should be built with the new compiler. In particular, the new compiler must be built before the new kernel is built. This does not necessarily mean that the new compiler must be installed before building the new kernel. The new world might rely on new kernel features. So, the new kernel must be installed before the new world is installed. These first two issues are the basis for the core buildworld, buildkernel, installkernel, installworld sequence that we describe in the following paragraphs. This is not an exhaustive list of all the reasons why you should prefer the currently recommended upgrade process. Some of the less obvious ones are listed below: The old world might not run correctly on the new kernel, so you must install the new world immediately upon installing the new kernel. Some configuration changes must be done before the new world is installed, but others might break the old world. Hence, two different configuration upgrade steps are generally needed. For the most part, the update process only replaces or adds files; existing old files are not deleted. In a few cases, this can cause problems. As a result, the update procedure will sometimes specify certain files that should be manually deleted at certain steps. This may or may not be automated in the future. These concerns have led to the following recommended sequence. Note that the detailed sequence for particular updates may require additional steps, but this core process should remain unchanged for some time: make buildworld This first compiles the new compiler and a few related tools, then uses the new compiler to compile the rest of the new world. The result ends up in /usr/obj. 
make buildkernel Unlike the older approach, using &man.config.8; and &man.make.1;, this uses the new compiler residing in /usr/obj. This protects you against compiler-kernel mismatches. make installkernel Places the new kernel and kernel modules onto the disk, making it possible to boot with the newly updated kernel. Reboot into single user mode. Single user mode minimizes problems from updating software that's already running. It also minimizes any problems from running the old world on a new kernel. mergemaster This does some initial configuration file updates in preparation for the new world. For instance, it may add new user groups to the system, or new user names to the password database. This is often necessary when new groups or special system-user accounts have been added since the last update, so that the installworld step will be able to use the newly installed system user or system group names without problems. make installworld Copies the world from /usr/obj. You now have a new kernel and new world on disk. mergemaster Now you can update the remaining configuration files, since you have a new world on disk. Reboot. A full machine reboot is needed now to load the new kernel and new world with new configuration files. Note that if you're upgrading from one release of the same &os; branch to a more recent release of the same branch, e.g., from 7.0 to 7.1, then this procedure may not be absolutely necessary, since you're unlikely to run into serious mismatches between compiler, kernel, userland and configuration files. The older approach of make world followed by building and installing a new kernel might work well enough for minor updates. But, when upgrading across major releases, people who don't follow this procedure should expect some problems. It is also worth noting that many upgrades (e.g., 4.X to 5.0) may require specific additional steps (renaming or deleting specific files prior to installworld, for instance).
Read the /usr/src/UPDATING file carefully, especially at the end, where the currently recommended upgrade sequence is explicitly spelled out. This procedure has evolved over time as the developers have found it impossible to completely prevent certain kinds of mismatch problems. Hopefully, the current procedure will remain stable for a long time. To summarize, the currently recommended way of upgrading &os; from sources is: &prompt.root; cd /usr/src &prompt.root; make buildworld &prompt.root; make buildkernel &prompt.root; make installkernel &prompt.root; shutdown -r now There are a few rare cases when an extra run of mergemaster -p is needed before the buildworld step. These are described in UPDATING. In general, though, you can safely omit this step if you are not updating across one or more major &os; versions. After installkernel finishes successfully, you should boot in single user mode (i.e., using boot -s from the loader prompt). Then run: &prompt.root; mount -u / &prompt.root; mount -a -t ufs &prompt.root; adjkerntz -i &prompt.root; mergemaster -p &prompt.root; cd /usr/src &prompt.root; make installworld &prompt.root; mergemaster &prompt.root; reboot Read Further Explanations The sequence described above is only a short summary to help you get started. You should, however, read the following sections to clearly understand each step, especially if you want to use a custom kernel configuration. Read <filename>/usr/src/UPDATING</filename> Before you do anything else, read /usr/src/UPDATING (or the equivalent file wherever you have a copy of the source code). This file should contain important information about problems you might encounter, or specify the order in which you might have to run certain commands. If UPDATING contradicts something you read here, UPDATING takes precedence. Reading UPDATING is not an acceptable substitute for subscribing to the correct mailing list, as described previously. The two requirements are complementary, not exclusive.
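The sequence summarized above lends itself to a small wrapper script. The sketch below is hypothetical and uses a dry-run echo so each step is only printed, which makes it safe to read and run anywhere; on a real &os; system you would set DRYRUN to empty so the steps actually execute, stopping at the first failure, and then follow up with shutdown -r now and the single-user steps described above.

```shell
#!/bin/sh
# Hypothetical wrapper for the build steps summarized above.
# DRYRUN=echo only prints each step; set DRYRUN= to execute for real.
set -e
DRYRUN=echo
log=/tmp/upgrade.log
for step in "make buildworld" "make buildkernel" "make installkernel"; do
    echo "=== $step ===" | tee -a "$log"   # progress marker, also logged
    $DRYRUN $step | tee -a "$log"          # run (or just print) the step
done
```

Keeping a log file like this serves the same purpose as the script(1) technique described later in this chapter: if a step fails, you have the output to show on the mailing lists.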
Check <filename>/etc/make.conf</filename> make.conf Examine the files /usr/share/examples/etc/make.conf and /etc/make.conf. The first contains some default defines – most of which are commented out. To make use of them when you rebuild your system from source, add them to /etc/make.conf. Keep in mind that anything you add to /etc/make.conf is also used every time you run make, so it is a good idea to set them to something sensible for your system. A typical user will probably want to copy the CFLAGS and NO_PROFILE lines found in /usr/share/examples/etc/make.conf to /etc/make.conf and uncomment them. Examine the other definitions (COPTFLAGS, NOPORTDOCS and so on) and decide if they are relevant to you. Update the Files in <filename>/etc</filename> The /etc directory contains a large part of your system's configuration information, as well as scripts that are run at system startup. Some of these scripts change from version to version of FreeBSD. Some of the configuration files are also used in the day-to-day running of the system. In particular, /etc/group. There have been occasions when the installation part of make installworld has expected certain usernames or groups to exist. When performing an upgrade, it is likely that these users or groups did not exist. This caused problems when upgrading. In some cases make buildworld will check to see if these users or groups exist. An example of this is when the smmsp user was added. Users had the installation process fail for them when &man.mtree.8; was trying to create /var/spool/clientmqueue. The solution is to run &man.mergemaster.8; in pre-buildworld mode by providing the -p option. This will compare only those files that are essential for the success of buildworld or installworld.
If you are feeling particularly paranoid, you can check your system to see which files are owned by the group you are renaming or deleting: &prompt.root; find / -group GID -print will show all files owned by group GID (which can be either a group name or a numeric group ID). Drop to Single User Mode single-user mode You may want to compile the system in single user mode. Apart from the obvious benefit of making things go slightly faster, reinstalling the system will touch a lot of important system files, all the standard system binaries, libraries, include files and so on. Changing these on a running system (particularly if you have active users on the system at the time) is asking for trouble. multi-user mode Another method is to compile the system in multi-user mode, and then drop into single user mode for the installation. If you would like to do it this way, simply hold off on the following steps until the build has completed. You can postpone dropping to single user mode until you have to installkernel or installworld. As the superuser, you can execute: &prompt.root; shutdown now from a running system, which will drop it to single user mode. Alternatively, reboot the system, and at the boot prompt, select the single user option. The system will then boot single user. At the shell prompt you should then run: &prompt.root; fsck -p &prompt.root; mount -u / &prompt.root; mount -a -t ufs &prompt.root; swapon -a This checks the file systems, remounts / read/write, mounts all the other UFS file systems referenced in /etc/fstab and then turns swapping on. If your CMOS clock is set to local time and not to GMT (this is true if the output of the &man.date.1; command does not show the correct time and zone), you may also need to run the following command: &prompt.root; adjkerntz -i This will make sure that your local time-zone settings get set up correctly — without this, you may later run into some problems. 
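The find -group check mentioned earlier is easy to rehearse safely before running it against /. The sketch below uses an arbitrary scratch directory and discovers the group of a freshly created file from ls(1) output, so the names and paths are illustrative only:

```shell
# Safe rehearsal of the "find / -group GID -print" audit on a scratch
# directory; /tmp/grpdemo and the file name are arbitrary examples.
mkdir -p /tmp/grpdemo
touch /tmp/grpdemo/example
# Read the file's group from ls(1) output (field 4), then search by it.
grp=$(ls -l /tmp/grpdemo/example | awk '{print $4}')
find /tmp/grpdemo -group "$grp" -print > /tmp/grpdemo.out
cat /tmp/grpdemo.out
```

On a real audit you would replace /tmp/grpdemo with / and "$grp" with the group name or numeric GID you are renaming or deleting.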
Remove <filename>/usr/obj</filename> As parts of the system are rebuilt they are placed in directories which (by default) go under /usr/obj. The directories shadow those under /usr/src. You can speed up the make buildworld process, and possibly save yourself some dependency headaches by removing this directory as well. Some files below /usr/obj may have the immutable flag set (see &man.chflags.1; for more information) which must be removed first. &prompt.root; cd /usr/obj &prompt.root; chflags -R noschg * &prompt.root; rm -rf * Recompile the Base System Saving the Output It is a good idea to save the output you get from running &man.make.1; to another file. If something goes wrong you will have a copy of the error message. While this might not help you in diagnosing what has gone wrong, it can help others if you post your problem to one of the &os; mailing lists. The easiest way to do this is to use the &man.script.1; command, with a parameter that specifies the name of the file to save all output to. You would do this immediately before rebuilding the world, and then type exit when the process has finished. &prompt.root; script /var/tmp/mw.out Script started, output file is /var/tmp/mw.out &prompt.root; make TARGET … compile, compile, compile … &prompt.root; exit Script done, … If you do this, do not save the output in /tmp. This directory may be cleared next time you reboot. A better place to store it is in /var/tmp (as in the previous example) or in root's home directory. Compile the Base System You must be in the /usr/src directory: &prompt.root; cd /usr/src (unless, of course, your source code is elsewhere, in which case change to that directory instead). make To rebuild the world you use the &man.make.1; command. This command reads instructions from the Makefile, which describes how the programs that comprise &os; should be rebuilt, the order in which they should be built, and so on. 
The general format of the command line you will type is as follows: &prompt.root; make -x -DVARIABLE target In this example, -x is an option that you would pass to &man.make.1;. See the &man.make.1; manual page for an example of the options you can pass. -DVARIABLE passes a variable to the Makefile. The behavior of the Makefile is controlled by these variables. These are the same variables as are set in /etc/make.conf, and this provides another way of setting them. &prompt.root; make -DNO_PROFILE target is another way of specifying that profiled libraries should not be built, and corresponds with the NO_PROFILE= true # Avoid compiling profiled libraries line in /etc/make.conf. target tells &man.make.1; what you want to do. Each Makefile defines a number of different targets, and your choice of target determines what happens. Some targets are listed in the Makefile, but are not meant for you to run. Instead, they are used by the build process to break out the steps necessary to rebuild the system into a number of sub-steps. Most of the time you will not need to pass any parameters to &man.make.1;, and so your command line will look like this: &prompt.root; make target Where target will be one of many build options. The first target should always be buildworld. As the names imply, buildworld builds a complete new tree under /usr/obj, and installworld, another target, installs this tree on the current machine. Having separate options is very useful for two reasons. First, it allows you to do the build safe in the knowledge that no components of your running system will be affected. The build is self-hosted. Because of this, you can safely run buildworld on a machine running in multi-user mode with no fear of ill-effects. It is still recommended that you run the installworld part in single user mode, though. Secondly, it allows you to use NFS mounts to upgrade multiple machines on your network.
If you have three machines, A, B and C that you want to upgrade, run make buildworld and make installworld on A. B and C should then NFS mount /usr/src and /usr/obj from A, and you can then run make installworld to install the results of the build on B and C. Although the world target still exists, you are strongly encouraged not to use it. Run &prompt.root; make buildworld It is possible to specify a -j option to make which will cause it to spawn several simultaneous processes. This is most useful on multi-CPU machines. However, since much of the compiling process is I/O bound rather than CPU bound, it is also useful on single CPU machines. On a typical single-CPU machine you would run: &prompt.root; make -j4 buildworld &man.make.1; will then have up to 4 processes running at any one time. Empirical evidence posted to the mailing lists shows this generally gives the best performance benefit. If you have a multi-CPU machine and you are using an SMP-configured kernel, try values between 6 and 10 and see how they speed things up. Timings rebuilding world timings Many factors influence the build time, but fairly recent machines may take only one or two hours to build the &os.stable; tree, with no tricks or shortcuts used during the process. A &os.current; tree will take somewhat longer. Compile and Install a New Kernel kernel compiling To take full advantage of your new system you should recompile the kernel. This is practically a necessity, as certain memory structures may have changed, and programs like &man.ps.1; and &man.top.1; will fail to work until the kernel and source code versions are the same. The simplest, safest way to do this is to build and install a kernel based on GENERIC. While GENERIC may not have all the necessary devices for your system, it should contain everything necessary to boot your system back to single user mode. This is a good test that the new system works properly.
After booting from GENERIC and verifying that your system works you can then build a new kernel based on your normal kernel configuration file. On &os; it is important to build world before building a new kernel. If you want to build a custom kernel, and already have a configuration file, just use KERNCONF=MYKERNEL like this: &prompt.root; cd /usr/src &prompt.root; make buildkernel KERNCONF=MYKERNEL &prompt.root; make installkernel KERNCONF=MYKERNEL Note that if you have raised kern.securelevel above 1 and you have set the noschg or similar flags on your kernel binary, you might find it necessary to drop into single-user mode to use installkernel. Otherwise you should be able to run both these commands from multi-user mode without problems. See &man.init.8; for details about kern.securelevel and &man.chflags.1; for details about the various file flags. Reboot into Single User Mode single-user mode You should reboot into single-user mode to test that the new kernel works. Do this by following the instructions in . Install the New System Binaries You should now use installworld to install the new system binaries. Run &prompt.root; cd /usr/src &prompt.root; make installworld If you specified variables on the make buildworld command line, you must specify the same variables in the make installworld command line. This does not necessarily hold true for other options; for example, -j must never be used with installworld. For example, if you ran: &prompt.root; make -DNO_PROFILE buildworld you must install the results with: &prompt.root; make -DNO_PROFILE installworld otherwise it would try to install profiled libraries that had not been built during the make buildworld phase. Update Files Not Updated by <command>make installworld</command> Remaking the world will not update certain directories (in particular, /etc, /var and /usr) with new or changed configuration files. 
The simplest way to update these files is to use &man.mergemaster.8;, though it is possible to do it manually if you would prefer to do that. Regardless of which way you choose, be sure to make a backup of /etc in case anything goes wrong. Tom Rhodes Contributed by <command>mergemaster</command> mergemaster The &man.mergemaster.8; utility is a Bourne shell script that will aid you in determining the differences between your configuration files in /etc, and the configuration files in the source tree /usr/src/etc. This is the recommended solution for keeping the system configuration files up to date with those located in the source tree. To begin, simply type mergemaster at your prompt and watch it start going. mergemaster will then build a temporary root environment, from / down, and populate it with various system configuration files. Those files are then compared to the ones currently installed in your system. At this point, files that differ will be shown in &man.diff.1; format, with the + sign representing added or modified lines, and - representing lines that will be either removed completely, or replaced with a new line. See the &man.diff.1; manual page for more information about the &man.diff.1; syntax and how file differences are shown. &man.mergemaster.8; will then show you each file that displays variances, and at this point you will have the option of either deleting the new file (referred to as the temporary file), installing the temporary file in its unmodified state, merging the temporary file with the currently installed file, or viewing the &man.diff.1; results again. Choosing to delete the temporary file will tell &man.mergemaster.8; that we wish to keep our current file unchanged, and to delete the new version. This option is not recommended, unless you see no reason to change the current file. You can get help at any time by typing ? at the &man.mergemaster.8; prompt. 
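Aside from the interactive prompts, &man.mergemaster.8; accepts options that can streamline a run. For example, -p compares only the files essential to the build (useful before installworld), -i automatically installs files that do not yet exist in /etc, and -v makes the output more verbose. Whether your version supports a given flag should be confirmed against its manual page:

```
&prompt.root; mergemaster -p     # compare only files essential to the build
&prompt.root; mergemaster -iv   # auto-install missing files, verbose output
```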
If the user chooses to skip a file, it will be presented again after all other files have been dealt with. Choosing to install the unmodified temporary file will replace the current file with the new one. For most unmodified files, this is the best option. Choosing to merge the file will present you with a text editor, and the contents of both files. You can now merge them by reviewing both files side by side on the screen, and choosing parts from both to create a finished product. When the files are compared side by side, the l key will select the contents on the left and the r key will select the contents on the right. The final output will be a file consisting of both parts, which can then be installed. This option is customarily used for files where settings have been modified by the user. Choosing to view the &man.diff.1; results again will show you the file differences just like &man.mergemaster.8; did before prompting you for an option. After &man.mergemaster.8; is done with the system files you will be prompted for other options. &man.mergemaster.8; may ask if you want to rebuild the password file and will finish up with an option to remove left-over temporary files. Manual Update If you wish to do the update manually, however, you cannot just copy over the files from /usr/src/etc to /etc and have it work. Some of these files must be installed first. This is because the /usr/src/etc directory is not a copy of what your /etc directory should look like. In addition, there are files that should be in /etc that are not in /usr/src/etc. If you are using &man.mergemaster.8; (as recommended), you can skip forward to the next section. The simplest way to do this by hand is to install the files into a new directory, and then work through them looking for differences. Backup Your Existing <filename>/etc</filename> Although, in theory, nothing is going to touch this directory automatically, it is always better to be sure. So copy your existing /etc directory somewhere safe. 
Something like: &prompt.root; cp -Rp /etc /etc.old does a recursive copy, preserving modification times, the ownership of files, and suchlike. You need to build a dummy set of directories to install the new /etc and other files into. /var/tmp/root is a reasonable choice, and there are a number of subdirectories required under this as well. &prompt.root; mkdir /var/tmp/root &prompt.root; cd /usr/src/etc &prompt.root; make DESTDIR=/var/tmp/root distrib-dirs distribution This will build the necessary directory structure and install the files. A lot of the subdirectories that have been created under /var/tmp/root are empty and should be deleted. The simplest way to do this is to: &prompt.root; cd /var/tmp/root &prompt.root; find -d . -type d | xargs rmdir 2>/dev/null This will remove all empty directories. (Standard error is redirected to /dev/null to prevent warnings about the directories that are not empty.) /var/tmp/root now contains all the files that should be placed in appropriate locations below /. You now have to go through each of these files, determining how they differ from your existing files. Note that some of the files that will have been installed in /var/tmp/root have names beginning with a dot (.). At the time of writing the only files like this are shell startup files in /var/tmp/root/ and /var/tmp/root/root/, although there may be others (depending on when you are reading this). Make sure you use ls -a to catch them. The simplest way to do this is to use &man.diff.1; to compare the two files: &prompt.root; diff /etc/shells /var/tmp/root/etc/shells This will show you the differences between your /etc/shells file and the new /var/tmp/root/etc/shells file. Use these to decide whether to merge in changes that you have made or whether to copy over your old file. 
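The per-file comparison can be scripted. The sketch below uses scratch directories under /tmp instead of the real /var/tmp/root and /etc, purely for illustration; cmp -s exits non-zero when two files differ (or one is missing), so the loop prints only the staged files that need attention:

```shell
# Build a tiny staged tree and a pretend "live" tree (illustrative paths).
STAGE=/tmp/stage-demo
LIVE=/tmp/live-demo
mkdir -p "$STAGE/etc" "$LIVE/etc"
printf 'new\n'  > "$STAGE/etc/shells"   # staged copy differs from live
printf 'old\n'  > "$LIVE/etc/shells"
printf 'same\n' > "$STAGE/etc/motd"     # staged copy identical to live
printf 'same\n' > "$LIVE/etc/motd"

# Report each staged file whose live counterpart differs or is absent.
RESULT=$(find "$STAGE" -type f | sort | while read -r f; do
    rel="${f#$STAGE}"
    cmp -s "$f" "$LIVE$rel" || echo "differs:$rel"
done)
echo "$RESULT"
```

On the real system you would substitute /var/tmp/root and / for the two scratch trees, and follow up with &man.diff.1; on each reported file.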
Name the New Root Directory (<filename>/var/tmp/root</filename>) with a Time Stamp, so You Can Easily Compare Differences Between Versions Frequently rebuilding the world means that you have to update /etc frequently as well, which can be a bit of a chore. You can speed this process up by keeping a copy of the last set of changed files that you merged into /etc. The following procedure gives one idea of how to do this. Make the world as normal. When you want to update /etc and the other directories, give the target directory a name based on the current date. If you were doing this on the 14th of February 1998 you could do the following: &prompt.root; mkdir /var/tmp/root-19980214 &prompt.root; cd /usr/src/etc &prompt.root; make DESTDIR=/var/tmp/root-19980214 \ distrib-dirs distribution Merge in the changes from this directory as outlined above. Do not remove the /var/tmp/root-19980214 directory when you have finished. When you have downloaded the latest version of the source and remade it, follow step 1. This will give you a new directory, which might be called /var/tmp/root-19980221 (if you wait a week between doing updates). You can now see the differences that have been made in the intervening week using &man.diff.1; to create a recursive diff between the two directories: &prompt.root; cd /var/tmp &prompt.root; diff -r root-19980214 root-19980221 Typically, this will be a much smaller set of differences than those between /var/tmp/root-19980221/etc and /etc. Because the set of differences is smaller, it is easier to migrate those changes across into your /etc directory. You can now remove the older of the two /var/tmp/root-* directories: &prompt.root; rm -rf /var/tmp/root-19980214 Repeat this process every time you need to merge in changes to /etc. You can use &man.date.1; to automate the generation of the directory names: &prompt.root; mkdir /var/tmp/root-`date "+%Y%m%d"` Rebooting You are now done. 
After you have verified that everything appears to be in the right place you can reboot the system. A simple &man.shutdown.8; should do it: &prompt.root; shutdown -r now Finished You should now have successfully upgraded your &os; system. Congratulations. If things went slightly wrong, it is easy to rebuild a particular piece of the system. For example, if you accidentally deleted /etc/magic as part of the upgrade or merge of /etc, the &man.file.1; command will stop working. In this case, the fix would be to run: &prompt.root; cd /usr/src/usr.bin/file &prompt.root; make all install Questions Do I need to re-make the world for every change? There is no easy answer to this one, as it depends on the nature of the change. For example, if you just ran CVSup, and it has shown the following files as being updated: src/games/cribbage/instr.c src/games/sail/pl_main.c src/release/sysinstall/config.c src/release/sysinstall/media.c src/share/mk/bsd.port.mk it probably is not worth rebuilding the entire world. You could just go to the appropriate sub-directories and run make all install, and that is about it. But if something major changed, for example src/lib/libc/stdlib, then you should either re-make the world, or at least those parts of it that are statically linked (as well as anything else you might have added that is statically linked). At the end of the day, it is your call. You might be happy re-making the world every fortnight, say, and let changes accumulate over that fortnight. Or you might want to re-make just those things that have changed, and be confident you can spot all the dependencies. And, of course, this all depends on how often you want to upgrade, and whether you are tracking &os.stable; or &os.current;. My compile failed with lots of signal 11 (or other signal number) errors. What has happened? signal 11 This is normally indicative of hardware problems. 
(Re)making the world is an effective way to stress-test your hardware, and will frequently throw up memory problems. These normally manifest themselves as the compiler mysteriously dying on receipt of strange signals. A sure indicator of this is if you can restart the make and it dies at a different point in the process. In this instance there is little you can do except start swapping around the components in your machine to determine which one is failing. Can I remove /usr/obj when I have finished? The short answer is yes. /usr/obj contains all the object files that were produced during the compilation phase. Normally, one of the first steps in the make buildworld process is to remove this directory and start afresh. In this case, keeping /usr/obj around after you have finished makes little sense, and deleting it will free up a large chunk of disk space (currently about 2 GB). However, if you know what you are doing you can have make buildworld skip this step. This will make subsequent builds run much faster, since most of the sources will not need to be recompiled. The flip side of this is that subtle dependency problems can creep in, causing your build to fail in odd ways. This frequently generates noise on the &os; mailing lists, when one person complains that their build has failed, not realizing that it is because they have tried to cut corners. Can interrupted builds be resumed? This depends on how far through the process you got before you found a problem. In general (and this is not a hard and fast rule) the make buildworld process builds new copies of essential tools (such as &man.gcc.1; and &man.make.1;) and the system libraries. These tools and libraries are then installed. The new tools and libraries are then used to rebuild themselves, and are installed again. The entire system (now including regular user programs, such as &man.ls.1; or &man.grep.1;) is then rebuilt with the new system files. 
If you are at the last stage, and you know it (because you have looked through the output that you were storing) then you can (fairly safely) do: … fix the problem … &prompt.root; cd /usr/src &prompt.root; make -DNO_CLEAN all This will not undo the work of the previous make buildworld. If you see the message: -------------------------------------------------------------- Building everything.. -------------------------------------------------------------- in the make buildworld output then it is probably fairly safe to do so. If you do not see that message, or you are not sure, then it is always better to be safe than sorry, and restart the build from scratch. How can I speed up making the world? Run in single-user mode. Put the /usr/src and /usr/obj directories on separate file systems held on separate disks. If possible, put these disks on separate disk controllers. Better still, put these file systems across multiple disks using the &man.ccd.4; (concatenated disk driver) device. Turn off profiling (set NO_PROFILE=true in /etc/make.conf). You almost certainly do not need it. Also in /etc/make.conf, set CFLAGS to something like -O -pipe. The -O2 optimization is much slower, and the optimization difference between -O and -O2 is normally negligible. -pipe lets the compiler use pipes rather than temporary files for communication, which saves disk access (at the expense of memory). Pass the -j option to &man.make.1; to run multiple processes in parallel. This usually helps regardless of whether you have a single- or a multi-processor machine. The file system holding /usr/src can be mounted (or remounted) with the noatime option. This prevents the file system from recording the file access time. You probably do not need this information anyway. &prompt.root; mount -u -o noatime /usr/src The example assumes /usr/src is on its own file system. If it is not (if it is a part of /usr for example) then you will need to use that file system mount point, and not /usr/src. 
The file system holding /usr/obj can be mounted (or remounted) with the async option. This causes disk writes to happen asynchronously. In other words, the write completes immediately, and the data is written to the disk a few seconds later. This allows writes to be clustered together, and can be a dramatic performance boost. Keep in mind that this option makes your file system more fragile. With this option there is an increased chance that, should power fail, the file system will be in an unrecoverable state when the machine restarts. If /usr/obj is the only thing on this file system then it is not a problem. If you have other, valuable data on the same file system then ensure your backups are fresh before you enable this option. &prompt.root; mount -u -o async /usr/obj As above, if /usr/obj is not on its own file system, replace it in the example with the name of the appropriate mount point. What do I do if something goes wrong? Make absolutely sure your environment has no extraneous cruft from earlier builds. This is simple enough. &prompt.root; chflags -R noschg /usr/obj/usr &prompt.root; rm -rf /usr/obj/usr &prompt.root; cd /usr/src &prompt.root; make cleandir &prompt.root; make cleandir Yes, make cleandir really should be run twice. Then restart the whole process, starting with make buildworld. If you still have problems, send the error and the output of uname -a to &a.questions;. Be prepared to answer other questions about your setup! Anton Shterenlikht Based on notes provided by Deleting obsolete files, directories and libraries Deleting obsolete files, directories and libraries As a part of the &os; development lifecycle, it happens from time to time that files and their contents become obsolete. This may be because their functionality is implemented elsewhere, the version number of the library has changed, or it was removed from the system entirely. This includes old files, libraries and directories, which should be removed when updating the system. 
The benefit for the user is that the system is not cluttered with old files which take up unnecessary space on the storage (and backup) medium. Additionally, if the old library had a security or stability issue, you should update to the newer library to keep your system safe and prevent crashes caused by the old library implementation. The files, directories, and libraries that are considered obsolete are listed in /usr/src/ObsoleteFiles.inc. The following instructions will help you remove these obsolete files during the system upgrade process. We assume you are following the steps outlined in . After the make installworld and the subsequent mergemaster commands have finished successfully, you should check for obsolete files and libraries as follows: &prompt.root; cd /usr/src &prompt.root; make check-old If any obsolete files are found, they can be deleted using the following command: &prompt.root; make delete-old See /usr/src/Makefile for more targets of interest. A prompt is displayed before deleting each obsolete file. You can skip the prompt and let the system remove these files automatically by using the BATCH_DELETE_OLD_FILES make variable as follows: &prompt.root; make -DBATCH_DELETE_OLD_FILES delete-old You can also achieve the same goal by piping these commands through yes like this: &prompt.root; yes | make delete-old Warning Deleting obsolete files will break applications that still depend on those obsolete files. This is especially true for old libraries. In most cases, you need to recompile the programs, ports, or libraries that used the old library before make delete-old-libs is executed. Utilities for checking shared library dependencies are available from the Ports Collection in sysutils/libchk or sysutils/bsdadminscripts. 
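As a rough, hand-rolled alternative to those utilities, &man.ldd.1; can be used to look for binaries that still reference an old library version. This loop is only a sketch; the directory and library name are illustrative:

```
&prompt.root; for f in /usr/local/bin/*; do
>   ldd "$f" 2>/dev/null | grep -q 'libz\.so\.4' && echo "$f still uses libz.so.4"
> done
```

Any binary it reports should be rebuilt (or its port reinstalled) before the old library is deleted.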
Obsolete shared libraries can conflict with newer libraries, causing messages like these: /usr/bin/ld: warning: libz.so.4, needed by /usr/local/lib/libtiff.so, may conflict with libz.so.5 /usr/bin/ld: warning: librpcsvc.so.4, needed by /usr/local/lib/libXext.so, may conflict with librpcsvc.so.5 To solve these problems, determine which port installed the library: &prompt.root; pkg_info -W /usr/local/lib/libtiff.so /usr/local/lib/libtiff.so was installed by package tiff-3.9.4 &prompt.root; pkg_info -W /usr/local/lib/libXext.so /usr/local/lib/libXext.so was installed by package libXext-1.1.1,1 Then deinstall, rebuild and reinstall the port. The ports-mgmt/portmaster and ports-mgmt/portupgrade utilities can be used to automate this process. After you have made sure that all ports are rebuilt and no longer use the old libraries, you can delete the old libraries using the following command: &prompt.root; make delete-old-libs Mike Meyer Contributed by Tracking for Multiple Machines NFS installing multiple machines If you have multiple machines that you want to track the same source tree, then having all of them download sources and rebuild everything seems like a waste of resources: disk space, network bandwidth, and CPU cycles. It is, and the solution is to have one machine do most of the work, while the rest of the machines mount that work via NFS. This section outlines a method of doing so. Preliminaries First, identify a set of machines that is going to run the same set of binaries, which we will call a build set. Each machine can have a custom kernel, but they will be running the same userland binaries. From that set, choose a machine to be the build machine. It is going to be the machine that the world and kernel are built on. Ideally, it should be a fast machine that has sufficient spare CPU to run make buildworld and make buildkernel. You will also want to choose a machine to be the test machine, which will test software updates before they are put into production. 
This must be a machine that you can afford to have down for an extended period of time. It can be the build machine, but need not be. All the machines in this build set need to mount /usr/obj and /usr/src from the same machine, and at the same point. Ideally, those are on two different drives on the build machine, but they can be NFS mounted on that machine as well. If you have multiple build sets, /usr/src should be on one build machine, and NFS mounted on the rest. Finally make sure that /etc/make.conf and /etc/src.conf on all the machines in the build set agree with the build machine. That means that the build machine must build all the parts of the base system that any machine in the build set is going to install. Also, each machine should have its kernel name set with KERNCONF in /etc/make.conf, and the build machine should list them all in its KERNCONF, listing its own kernel first. The build machine must have the kernel configuration files for each machine in /usr/src/sys/arch/conf if it is going to build their kernels. The Base System Now that all that is done, you are ready to build everything. Build the kernel and world as described in on the build machine, but do not install anything. After the build has finished, go to the test machine, and install the kernel you just built. If this machine mounts /usr/src and /usr/obj via NFS, when you reboot to single-user mode you will need to enable the network and mount them. The easiest way to do this is to boot to multi-user mode, then run shutdown now to go to single-user mode. Once there, you can install the new kernel and world and run mergemaster just as you normally would. When done, reboot to return to normal multi-user operations for this machine. After you are certain that everything on the test machine is working properly, use the same procedure to install the new software on each of the other machines in the build set. Ports The same ideas can be used for the ports tree. 
The first critical step is mounting /usr/ports from the same machine to all the machines in the build set. You can then set up /etc/make.conf properly to share distfiles. You should set DISTDIR to a common shared directory that is writable by whichever user root is mapped to by your NFS mounts. Each machine should set WRKDIRPREFIX to a local build directory. Finally, if you are going to be building and distributing packages, you should set PACKAGES to a directory similar to DISTDIR.
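Putting those settings together, the ports-related part of /etc/make.conf on each machine in the build set might look like the following sketch; all paths are illustrative assumptions, not required values:

```
# /etc/make.conf -- ports sharing over NFS (sketch)
DISTDIR=	/usr/ports/distfiles	# shared via the NFS-mounted /usr/ports
WRKDIRPREFIX=	/var/ports		# local scratch area on each machine
PACKAGES=	/usr/ports/packages	# only if building/distributing packages
```

Keeping WRKDIRPREFIX local avoids NFS contention during builds, while the shared DISTDIR means each distfile is downloaded only once for the whole build set.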