---
title: Chapter 23. Virtualization
part: Part III. System Administration
prev: books/handbook/filesystems
next: books/handbook/l10n
description: Virtualization software allows multiple operating systems to run simultaneously on the same computer
tags: ["virtualization", "Parallels", "VMware", "VirtualBox", "bhyve", "XEN"]
showBookMenu: true
weight: 27
path: "/books/handbook/"
---

[[virtualization]]
= Virtualization
:doctype: book
:toc: macro
:toclevels: 1
:icons: font
:sectnums:
:sectnumlevels: 6
:sectnumoffset: 23
:partnums:
:source-highlighter: rouge
:experimental:
:images-path: books/handbook/virtualization/

ifdef::env-beastie[]
ifdef::backend-html5[]
:imagesdir: ../../../../images/{images-path}
endif::[]
ifndef::book[]
include::shared/authors.adoc[]
include::shared/mirrors.adoc[]
include::shared/releases.adoc[]
include::shared/attributes/attributes-{{% lang %}}.adoc[]
include::shared/{{% lang %}}/teams.adoc[]
include::shared/{{% lang %}}/mailing-lists.adoc[]
include::shared/{{% lang %}}/urls.adoc[]
toc::[]
endif::[]
ifdef::backend-pdf,backend-epub3[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]
endif::[]

ifndef::env-beastie[]
toc::[]
include::../../../../../shared/asciidoctor.adoc[]
endif::[]

[[virtualization-synopsis]]
== Synopsis

Virtualization software allows multiple operating systems to run simultaneously on the same computer.
Such software systems for PCs often involve a host operating system which runs the virtualization software and supports any number of guest operating systems.

After reading this chapter, you will know:

* The difference between a host operating system and a guest operating system.
* How to install FreeBSD on the following virtualization platforms:
** Parallels Desktop (Intel(R)-based Apple(R) macOS(R))
** VMware Fusion (Intel(R)-based Apple(R) macOS(R))
** VirtualBox(TM) (Microsoft(R) Windows(R), Intel(R)-based Apple(R) macOS(R), Linux)
** bhyve (FreeBSD)
* How to tune a FreeBSD system for best performance under virtualization.

Before reading this chapter, you should:

* Understand the crossref:basics[basics,basics of UNIX(R) and FreeBSD].
* Know how to crossref:bsdinstall[bsdinstall,install FreeBSD].
* Know how to crossref:advanced-networking[advanced-networking,set up a network connection].
* Know how to crossref:ports[ports,install additional third-party software].

[[virtualization-guest-parallelsdesktop]]
== FreeBSD as a Guest on Parallels Desktop for macOS(R)

Parallels Desktop for Mac(R) is a commercial software product available for Intel(R) based Apple(R) Mac(R) computers running macOS(R) 10.4.6 or higher.
FreeBSD is a fully supported guest operating system.
Once Parallels has been installed on macOS(R), the user must configure a virtual machine and then install the desired guest operating system.

[[virtualization-guest-parallelsdesktop-install]]
=== Installing FreeBSD on Parallels Desktop on Mac(R)

The first step in installing FreeBSD on Parallels is to create a new virtual machine for installing FreeBSD.
Select [.guimenuitem]#FreeBSD# as the menu:Guest OS Type[] when prompted:

image::parallels-freebsd1.png[Parallels setup wizard showing FreeBSD as chosen OS]

Choose a reasonable amount of disk and memory depending on the plans for this virtual FreeBSD instance.
4GB of disk space and 512MB of RAM work well for most uses of FreeBSD under Parallels:

image::parallels-freebsd2.png[Parallels setup wizard showing the amount of RAM allocated]

image::parallels-freebsd3.png[Parallels setup wizard showing the disk menu]

image::parallels-freebsd4.png[Parallels setup wizard showing the menu for setting the disk size and type]

image::parallels-freebsd5.png[Parallels setup wizard showing the menu for setting the disk location]

Select the type of networking and a network interface:

image::parallels-freebsd6.png[Parallels setup wizard showing the network menu]

image::parallels-freebsd7.png[Parallels setup wizard showing the menu with the network type options]

Save and finish the configuration:

image::parallels-freebsd8.png[Parallels setup wizard showing the menu to configure the name of the machine and the directory where to save the configuration]

image::parallels-freebsd9.png[Parallels setup wizard indicating that the configuration is complete and asking whether to start the guest OS installation]

After the FreeBSD virtual machine has been created, FreeBSD can be installed on it.
This is best done with an official FreeBSD CD/DVD or with an ISO image downloaded from an official FTP site.
Copy the appropriate ISO image to the local Mac(R) filesystem or insert a CD/DVD in the Mac(R)'s CD-ROM drive.
Click on the disc icon in the bottom right corner of the FreeBSD Parallels window.
This will bring up a window that can be used to associate the CD-ROM drive in the virtual machine with the ISO file on disk or with the real CD-ROM drive.

image::parallels-freebsd11.png[Parallels showing a summary of the newly created machine with information and actions to execute on the machine]

Once this association with the CD-ROM source has been made, reboot the FreeBSD virtual machine by clicking the reboot icon.
Parallels will reboot with a special BIOS that first checks if there is a CD-ROM.

image::parallels-freebsd10.png[Parallels showing the BIOS running]

In this case it will find the FreeBSD installation media and begin a normal FreeBSD installation.
Perform the installation, but do not attempt to configure Xorg at this time.

image::parallels-freebsd12.png[Parallels showing a snippet of the FreeBSD installation process]

When the installation is finished, reboot into the newly installed FreeBSD virtual machine.

image::parallels-freebsd13.png[Parallels showing the boot of FreeBSD]

[[virtualization-guest-parallels-configure]]
=== Configuring FreeBSD on Parallels

After FreeBSD has been successfully installed on macOS(R) with Parallels, there are a number of configuration steps that can be taken to optimize the system for virtualized operation.

[.procedure]
. Set Boot Loader Variables
+
The most important step is to reduce the `kern.hz` tunable to reduce the CPU utilization of FreeBSD under the Parallels environment.
This is accomplished by adding the following line to [.filename]#/boot/loader.conf#:
+
[.programlisting]
....
kern.hz=100
....
+
Without this setting, an idle FreeBSD Parallels guest will use roughly 15% of the CPU of a single processor iMac(R).
After this change the usage will be closer to 5%.
. Create a New Kernel Configuration File
+
All of the SCSI, FireWire, and USB device drivers can be removed from a custom kernel configuration file.
Parallels provides a virtual network adapter used by the man:ed[4] driver, so all network devices except for man:ed[4] and man:miibus[4] can be removed from the kernel.
. Configure Networking
+
The most basic networking setup uses DHCP to connect the virtual machine to the same local area network as the host Mac(R).
This can be accomplished by adding `ifconfig_ed0="DHCP"` to [.filename]#/etc/rc.conf#.
More advanced networking setups are described in crossref:advanced-networking[advanced-networking,Advanced Networking].
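After rebooting, the `kern.hz` change from the first step can be confirmed with man:sysctl[8]; the `hz` field of the output should report `100`:

[source,shell]
....
% sysctl kern.clockrate
....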

[[virtualization-guest-vmware]]
== FreeBSD as a Guest on VMware Fusion for macOS(R)

VMware Fusion for Mac(R) is a commercial software product available for Intel(R) based Apple(R) Mac(R) computers running macOS(R) 10.11 or higher.
FreeBSD is a fully supported guest operating system.
Once VMware Fusion has been installed on macOS(R), the user can configure a virtual machine and then install the desired guest operating system.

[[virtualization-guest-vmware-install]]
=== Installing FreeBSD on VMware Fusion

The first step is to start VMware Fusion which will load the Virtual Machine Library.
Click [.guimenuitem]#+->New# to create the virtual machine:

image::vmware-freebsd01.png[width=35%]

This will load the New Virtual Machine Assistant.
Choose [.guimenuitem]#Create a custom virtual machine# and click [.guimenuitem]#Continue# to proceed:

image::vmware-freebsd02.png[width=45%]

Select [.guimenuitem]#Other# as the [.guimenuitem]#Operating System# and either [.guimenuitem]#FreeBSD X# or [.guimenuitem]#FreeBSD X 64-bit#, as the menu:Version[] when prompted:

image::vmware-freebsd03.png[width=45%]

Choose the firmware (UEFI is recommended):

image::vmware-freebsd04.png[width=45%]

Choose [.guimenuitem]#Create a new virtual disk# and click [.guimenuitem]#Continue#:

image::vmware-freebsd05.png[width=45%]

Check the configuration and click [.guimenuitem]#Finish#:

image::vmware-freebsd06.png[width=45%]

Choose the name of the virtual machine and the directory where it should be saved:

image::vmware-freebsd07.png[width=45%]

Press kbd:[Cmd+E] to open the virtual machine settings and click [.guimenuitem]#CD/DVD#:

image::vmware-freebsd08.png[width=45%]

Choose the FreeBSD ISO image or a CD/DVD drive:

image::vmware-freebsd09.png[width=45%]

Start the virtual machine:

image::vmware-freebsd10.png[width=25%]

Install FreeBSD as usual:

image::vmware-freebsd11.png[width=25%]

Once the install is complete, the settings of the virtual machine can be modified, such as memory usage and the number of CPUs the virtual machine will have access to:

[NOTE]
====
The System Hardware settings of the virtual machine cannot be modified while the virtual machine is running.
====

image::vmware-freebsd12.png[width=45%]

Check the status of the CD-ROM device.
Normally the CD/DVD/ISO is disconnected from the virtual machine when it is no longer needed:

image::vmware-freebsd09.png[width=45%]

The last thing to change is how the virtual machine will connect to the network.
To allow connections to the virtual machine from other machines besides the host, choose [.guimenuitem]#Connect directly to the physical network (Bridged)#.
Otherwise, [.guimenuitem]#Share the host's internet connection (NAT)# is preferred so that the virtual machine can have access to the Internet, but the network cannot access the virtual machine.

image::vmware-freebsd13.png[width=45%]

After modifying the settings, boot the newly installed FreeBSD virtual machine.

[[virtualization-guest-vmware-configure]]
=== Configuring FreeBSD on VMware Fusion

After FreeBSD has been successfully installed on macOS(R) with VMware Fusion, there are a number of configuration steps that can be taken to optimize the system for virtualized operation.

[.procedure]
. Set Boot Loader Variables
+
The most important step is to reduce the `kern.hz` tunable to reduce the CPU utilization of FreeBSD under the VMware Fusion environment.
This is accomplished by adding the following line to [.filename]#/boot/loader.conf#:
+
[.programlisting]
....
kern.hz=100
....
+
Without this setting, an idle FreeBSD VMware Fusion guest will use roughly 15% of the CPU of a single processor iMac(R).
After this change, the usage will be closer to 5%.
. Create a New Kernel Configuration File
+
All of the FireWire and USB device drivers can be removed from a custom kernel configuration file.
VMware Fusion provides a virtual network adapter used by the man:em[4] driver, so all network devices except for man:em[4] can be removed from the kernel.
. Configure Networking
+
The most basic networking setup uses DHCP to connect the virtual machine to the same local area network as the host Mac(R).
This can be accomplished by adding `ifconfig_em0="DHCP"` to [.filename]#/etc/rc.conf#.
More advanced networking setups are described in crossref:advanced-networking[advanced-networking,Advanced Networking].
. Install drivers and open-vm-tools
+
To run FreeBSD smoothly on VMware, the guest drivers and tools should be installed:
+
[source,shell]
....
# pkg install xf86-video-vmware xf86-input-vmmouse open-vm-tools
....
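Depending on the version of the port, the tools are started through rc scripts; the exact names can be confirmed with `ls /usr/local/etc/rc.d`. A typical setup (the script name here is an assumption, check your installation) enables the guest daemon in [.filename]#/etc/rc.conf#:

[.programlisting]
....
vmware_guestd_enable="YES"
....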

[[virtualization-guest-virtualbox]]
== FreeBSD as a Guest on VirtualBox(TM)

FreeBSD works well as a guest in VirtualBox(TM).
The virtualization software is available for most common operating systems, including FreeBSD itself.

The VirtualBox(TM) guest additions provide support for:

* Clipboard sharing.
* Mouse pointer integration.
* Host time synchronization.
* Window scaling.
* Seamless mode.

[NOTE]
====
These commands are run in the FreeBSD guest.
====

First, install package:emulators/virtualbox-ose-additions[] in the FreeBSD guest, either as a binary package or from the Ports Collection:

[source,shell]
....
# cd /usr/ports/emulators/virtualbox-ose-additions && make install clean
....

Add these lines to [.filename]#/etc/rc.conf#:

[.programlisting]
....
vboxguest_enable="YES"
vboxservice_enable="YES"
....
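After the services are started (or after a reboot), the presence of the guest driver can be confirmed with man:kldstat[8]; a typical installation loads [.filename]#vboxguest.ko#:

[source,shell]
....
% kldstat | grep vboxguest
....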

If man:ntpd[8] or man:ntpdate[8] is used, disable host time synchronization:

[.programlisting]
....
vboxservice_flags="--disable-timesync"
....

Xorg will automatically recognize the `vboxvideo` driver.
It can also be manually entered in [.filename]#/etc/X11/xorg.conf#:

[.programlisting]
....
Section "Device"
	Identifier "Card0"
	Driver "vboxvideo"
	VendorName "InnoTek Systemberatung GmbH"
	BoardName "VirtualBox Graphics Adapter"
EndSection
....

To use the `vboxmouse` driver, adjust the mouse section in [.filename]#/etc/X11/xorg.conf#:

[.programlisting]
....
Section "InputDevice"
	Identifier "Mouse0"
	Driver "vboxmouse"
EndSection
....

HAL users should create the following [.filename]#/usr/local/etc/hal/fdi/policy/90-vboxguest.fdi# or copy it from [.filename]#/usr/local/share/hal/fdi/policy/10osvendor/90-vboxguest.fdi#:

[.programlisting]
....
<?xml version="1.0" encoding="utf-8"?>
<!--
# Sun VirtualBox
# Hal driver description for the vboxmouse driver

	Copyright (C) 2008-2009 Sun Microsystems, Inc.

	This file is part of VirtualBox Open Source Edition (OSE, as
	available from http://www.virtualbox.org. This file is free software;
	you can redistribute it and/or modify it under the terms of the GNU
	General Public License (GPL) as published by the Free Software
	Foundation, in version 2 as it comes in the "COPYING" file of the
	VirtualBox OSE distribution. VirtualBox OSE is distributed in the
	hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.

	Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
	Clara, CA 95054 USA or visit http://www.sun.com if you need
	additional information or have any questions.
-->
<deviceinfo version="0.2">
  <device>
    <match key="info.subsystem" string="pci">
      <match key="info.product" string="VirtualBox guest Service">
        <append key="info.capabilities" type="strlist">input</append>
	<append key="info.capabilities" type="strlist">input.mouse</append>
        <merge key="input.x11_driver" type="string">vboxmouse</merge>
	<merge key="input.device" type="string">/dev/vboxguest</merge>
      </match>
    </match>
  </device>
</deviceinfo>
....

Shared folders for file transfers between host and VM are accessible by mounting them using `mount_vboxvfs`.
A shared folder can be created on the host using the VirtualBox GUI or via `vboxmanage`.
For example, to create a shared folder called _myshare_ under [.filename]#/mnt/bsdboxshare# for the VM named _BSDBox_, run:

[source,shell]
....
# vboxmanage sharedfolder add 'BSDBox' --name myshare --hostpath /mnt/bsdboxshare
....

Note that the shared folder name must not contain spaces.
Mount the shared folder from within the guest system like this:

[source,shell]
....
# mount_vboxvfs -w myshare /mnt
....

[[virtualization-host-virtualbox]]
== FreeBSD as a Host with VirtualBox(TM)

VirtualBox(TM) is an actively developed, complete virtualization package that is available for most operating systems, including Windows(R), macOS(R), Linux(R), and FreeBSD.
It is equally capable of running Windows(R) or UNIX(R)-like guests.
It is released as open source software, but with closed-source components available in a separate extension pack.
These components include support for USB 2.0 devices.
More information may be found on the http://www.virtualbox.org/wiki/Downloads[Downloads page of the VirtualBox(TM) wiki].
Currently, these extensions are not available for FreeBSD.

[[virtualization-virtualbox-install]]
=== Installing VirtualBox(TM)

VirtualBox(TM) is available as a FreeBSD package or port in package:emulators/virtualbox-ose[].
The port can be installed using these commands:

[source,shell]
....
# cd /usr/ports/emulators/virtualbox-ose
# make install clean
....

One useful option in the port's configuration menu is the `GuestAdditions` suite of programs.
These provide a number of useful features in guest operating systems, like mouse pointer integration (allowing the mouse to be shared between host and guest without the need to press a special keyboard shortcut to switch) and faster video rendering, especially in Windows(R) guests.
The guest additions are available in the menu:Devices[] menu, after the installation of the guest is finished.

A few configuration changes are needed before VirtualBox(TM) is started for the first time.
The port installs a kernel module in [.filename]#/boot/modules# which must be loaded into the running kernel:

[source,shell]
....
# kldload vboxdrv
....

To ensure the module is always loaded after a reboot, add this line to [.filename]#/boot/loader.conf#:

[.programlisting]
....
vboxdrv_load="YES"
....

To use the kernel modules that allow bridged or host-only networking, add this line to [.filename]#/etc/rc.conf# and reboot the computer:

[.programlisting]
....
vboxnet_enable="YES"
....

The `vboxusers` group is created during installation of VirtualBox(TM).
All users that need access to VirtualBox(TM) will have to be added as members of this group.
`pw` can be used to add new members:

[source,shell]
....
# pw groupmod vboxusers -m yourusername
....

The default permissions for [.filename]#/dev/vboxnetctl# are restrictive and need to be changed for bridged networking:

[source,shell]
....
# chown root:vboxusers /dev/vboxnetctl
# chmod 0660 /dev/vboxnetctl
....

To make this permissions change permanent, add these lines to [.filename]#/etc/devfs.conf#:

[.programlisting]
....
own     vboxnetctl root:vboxusers
perm    vboxnetctl 0660
....

To launch VirtualBox(TM), type from an Xorg session:

[source,shell]
....
% VirtualBox
....

For more information on configuring and using VirtualBox(TM), refer to the http://www.virtualbox.org[official website].
For FreeBSD-specific information and troubleshooting instructions, refer to the http://wiki.FreeBSD.org/VirtualBox[relevant page in the FreeBSD wiki].

[[virtualization-virtualbox-usb-support]]
=== VirtualBox(TM) USB Support

VirtualBox(TM) can be configured to pass USB devices through to the guest operating system.
The host controller of the OSE version is limited to emulating USB 1.1 devices until the extension pack supporting USB 2.0 and 3.0 devices becomes available on FreeBSD.

For VirtualBox(TM) to be aware of USB devices attached to the machine, the user needs to be a member of the `operator` group.

[source,shell]
....
# pw groupmod operator -m yourusername
....

Then, add the following to [.filename]#/etc/devfs.rules#, or create this file if it does not exist yet:

[.programlisting]
....
[system=10]
add path 'usb/*' mode 0660 group operator
....

To load these new rules, add the following to [.filename]#/etc/rc.conf#:

[.programlisting]
....
devfs_system_ruleset="system"
....

Then, restart devfs:

[source,shell]
....
# service devfs restart
....
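To confirm the rule took effect, the active ruleset can be displayed; this assumes the ruleset number `10` used above:

[source,shell]
....
# devfs rule -s 10 show
....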

Restart the login session and VirtualBox(TM) for these changes to take effect, and create USB filters as necessary.

[[virtualization-virtualbox-host-dvd-cd-access]]
=== VirtualBox(TM) Host DVD/CD Access

Access to the host DVD/CD drives from guests is achieved through the sharing of the physical drives.
Within VirtualBox(TM), this is set up from the Storage window in the Settings of the virtual machine.
If needed, create an empty IDE CD/DVD device first.
Then choose the Host Drive from the popup menu for the virtual CD/DVD drive selection.
A checkbox labeled `Passthrough` will appear. This allows the virtual machine to use the hardware directly.
For example, audio CDs or the burner will only function if this option is selected.

HAL needs to run for VirtualBox(TM) DVD/CD functions to work, so enable it in [.filename]#/etc/rc.conf# and start it if it is not already running:

[.programlisting]
....
hald_enable="YES"
....

[source,shell]
....
# service hald start
....

In order for users to be able to use VirtualBox(TM) DVD/CD functions, they need access to [.filename]#/dev/xpt0#, [.filename]#/dev/cdN#, and [.filename]#/dev/passN#.
This is usually achieved by making the user a member of `operator`.
Permissions to these devices have to be corrected by adding these lines to [.filename]#/etc/devfs.conf#:

[.programlisting]
....
perm cd* 0660
perm xpt0 0660
perm pass* 0660
....

[source,shell]
....
# service devfs restart
....

[[virtualization-host-bhyve]]
== FreeBSD as a Host with bhyve

The bhyve BSD-licensed hypervisor became part of the base system with FreeBSD 10.0-RELEASE.
This hypervisor supports a number of guests, including FreeBSD, OpenBSD, and many Linux(R) distributions.
By default, bhyve provides access to serial console and does not emulate a graphical console.
Virtualization offload features of newer CPUs are used to avoid the legacy methods of translating instructions and manually managing memory mappings.

The bhyve design requires a processor that supports Intel(R) Extended Page Tables (EPT) or AMD(R) Rapid Virtualization Indexing (RVI), also known as Nested Page Tables (NPT).
Hosting Linux(R) guests or FreeBSD guests with more than one vCPU requires VMX unrestricted mode support (UG).
Most newer processors, specifically the Intel(R) Core(TM) i3/i5/i7 and Intel(R) Xeon(TM) E3/E5/E7, support these features.
UG support was introduced with Intel's Westmere micro-architecture.
For a complete list of Intel(R) processors that support EPT, refer to https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873&0_ExtendedPageTables=True[].
RVI is found on the third generation and later of the AMD Opteron(TM) (Barcelona) processors.
The easiest way to tell if a processor supports bhyve is to run `dmesg` or look in [.filename]#/var/run/dmesg.boot# for the `POPCNT` processor feature flag on the `Features2` line for AMD(R) processors, or for `EPT` and `UG` on the `VT-x` line for Intel(R) processors.
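These checks can be scripted. The following sketch greps the boot messages for the relevant flags; the script itself and the optional file argument are illustrative (pass a saved copy of `dmesg` output to test a different machine):

[source,shell]
....
#!/bin/sh
# Report whether the boot messages advertise the CPU features bhyve
# needs: EPT and UG on Intel(R) hosts, POPCNT on AMD(R) hosts.
f="${1:-/var/run/dmesg.boot}"
if grep -q 'VT-x:.*EPT' "$f" 2>/dev/null; then
	echo "Intel host: EPT found"
	grep -q 'VT-x:.*UG' "$f" && echo "Intel host: UG found (unrestricted mode)"
elif grep -q 'Features2=.*POPCNT' "$f" 2>/dev/null; then
	echo "AMD host: POPCNT found"
else
	echo "bhyve CPU features not found in $f" >&2
fi
....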

[[virtualization-bhyve-prep]]
=== Preparing the Host

The first step to creating a virtual machine in bhyve is configuring the host system.
First, load the bhyve kernel module:

[source,shell]
....
# kldload vmm
....

Then, create a [.filename]#tap# interface for the network device in the virtual machine to attach to.
In order for the network device to participate in the network, also create a bridge interface containing the [.filename]#tap# interface and the physical interface as members.
In this example, the physical interface is _igb0_:

[source,shell]
....
# ifconfig tap0 create
# sysctl net.link.tap.up_on_open=1
net.link.tap.up_on_open: 0 -> 1
# ifconfig bridge0 create
# ifconfig bridge0 addm igb0 addm tap0
# ifconfig bridge0 up
....
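These interfaces are not recreated automatically at boot. To make the setup persistent, the equivalent settings can go into [.filename]#/etc/rc.conf# and [.filename]#/etc/sysctl.conf# (a sketch, assuming the same _igb0_ physical interface as above):

[.programlisting]
....
# /etc/rc.conf
cloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm igb0 addm tap0 up"

# /etc/sysctl.conf
net.link.tap.up_on_open=1
....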

[[virtualization-bhyve-freebsd]]
=== Creating a FreeBSD Guest

Create a file to use as the virtual disk for the guest machine.
Specify the size and name of the virtual disk:

[source,shell]
....
# truncate -s 16G guest.img
....

Download an installation image of FreeBSD to install:

[source,shell]
....
# fetch https://download.freebsd.org/releases/ISO-IMAGES/13.1/FreeBSD-13.1-RELEASE-amd64-bootonly.iso
FreeBSD-13.1-RELEASE-amd64-bootonly.iso                366 MB   16 MBps    22s
....

FreeBSD comes with an example script for running a virtual machine in bhyve.
The script will start the virtual machine and run it in a loop, so it will automatically restart if it crashes.
The script takes a number of options to control the configuration of the machine:

* `-c` controls the number of virtual CPUs.
* `-m` limits the amount of memory available to the guest.
* `-t` defines which [.filename]#tap# device to use.
* `-d` indicates which disk image to use.
* `-i` tells bhyve to boot from the CD image instead of the disk.
* `-I` defines which CD image to use.
The last parameter is the name of the virtual machine, used to track the running machines.
This example starts the virtual machine in installation mode:

[source,shell]
....
# sh /usr/share/examples/bhyve/vmrun.sh -c 1 -m 1024M -t tap0 -d guest.img -i -I FreeBSD-13.1-RELEASE-amd64-bootonly.iso guestname
....

The virtual machine will boot and start the installer.
When the installer asks about dropping into a shell at the end of the installation, choose btn:[Yes].

Reboot the virtual machine.
While rebooting the virtual machine causes bhyve to exit, the [.filename]#vmrun.sh# script runs `bhyve` in a loop and will automatically restart it.
When this happens, choose the reboot option from the boot loader menu in order to escape the loop.
Now the guest can be started from the virtual disk:

[source,shell]
....
# sh /usr/share/examples/bhyve/vmrun.sh -c 4 -m 1024M -t tap0 -d guest.img guestname
....

[[virtualization-bhyve-linux]]
=== Creating a Linux(R) Guest

In order to boot operating systems other than FreeBSD, the package:sysutils/grub2-bhyve[] port must first be installed.

Next, create a file to use as the virtual disk for the guest machine:

[source,shell]
....
# truncate -s 16G linux.img
....

Starting a virtual machine with bhyve is a two step process.
First a kernel must be loaded, then the guest can be started.
The Linux(R) kernel is loaded with package:sysutils/grub2-bhyve[].
Create a [.filename]#device.map# that grub will use to map the virtual devices to the files on the host system:

[.programlisting]
....
(hd0) ./linux.img
(cd0) ./somelinux.iso
....

Use package:sysutils/grub2-bhyve[] to load the Linux(R) kernel from the ISO image:

[source,shell]
....
# grub-bhyve -m device.map -r cd0 -M 1024M linuxguest
....

This will start grub.
If the installation CD contains a [.filename]#grub.cfg#, a menu will be displayed.
If not, the `vmlinuz` and `initrd` files must be located and loaded manually:

[source,shell]
....
grub> ls
(hd0) (cd0) (cd0,msdos1) (host)
grub> ls (cd0)/isolinux
boot.cat boot.msg grub.conf initrd.img isolinux.bin isolinux.cfg memtest
splash.jpg TRANS.TBL vesamenu.c32 vmlinuz
grub> linux (cd0)/isolinux/vmlinuz
grub> initrd (cd0)/isolinux/initrd.img
grub> boot
....

Now that the Linux(R) kernel is loaded, the guest can be started:

[source,shell]
....
# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,./linux.img \
    -s 4:0,ahci-cd,./somelinux.iso -l com1,stdio -c 4 -m 1024M linuxguest
....

The system will boot and start the installer.
After installing a system in the virtual machine, reboot the virtual machine.
This will cause bhyve to exit.
The instance of the virtual machine needs to be destroyed before it can be started again:

[source,shell]
....
# bhyvectl --destroy --vm=linuxguest
....
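Each created instance also appears as an entry under [.filename]#/dev/vmm#, which makes it easy to check which virtual machines still need to be destroyed (here showing the _linuxguest_ machine from the example above):

[source,shell]
....
# ls /dev/vmm
linuxguest
....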

Now the guest can be started directly from the virtual disk.
Load the kernel:

[source,shell]
....
# grub-bhyve -m device.map -r hd0,msdos1 -M 1024M linuxguest
grub> ls
(hd0) (hd0,msdos2) (hd0,msdos1) (cd0) (cd0,msdos1) (host)
(lvm/VolGroup-lv_swap) (lvm/VolGroup-lv_root)
grub> ls (hd0,msdos1)/
lost+found/ grub/ efi/ System.map-2.6.32-431.el6.x86_64 config-2.6.32-431.el6.x
86_64 symvers-2.6.32-431.el6.x86_64.gz vmlinuz-2.6.32-431.el6.x86_64
initramfs-2.6.32-431.el6.x86_64.img
grub> linux (hd0,msdos1)/vmlinuz-2.6.32-431.el6.x86_64 root=/dev/mapper/VolGroup-lv_root
grub> initrd (hd0,msdos1)/initramfs-2.6.32-431.el6.x86_64.img
grub> boot
....

Boot the virtual machine:

[source,shell]
....
# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 \
    -s 3:0,virtio-blk,./linux.img -l com1,stdio -c 4 -m 1024M linuxguest
....

Linux(R) will now boot in the virtual machine and eventually present you with the login prompt.
Log in and use the virtual machine.
When you are finished, reboot the virtual machine to exit bhyve.
Destroy the virtual machine instance:

[source,shell]
....
# bhyvectl --destroy --vm=linuxguest
....

[[virtualization-bhyve-uefi]]
=== Booting bhyve Virtual Machines with UEFI Firmware

In addition to bhyveload and grub-bhyve, the bhyve hypervisor can also boot virtual machines using the UEFI userspace firmware.
This option may support guest operating systems that are not supported by the other loaders.

In order to make use of the UEFI support in bhyve, first obtain the UEFI firmware images.
This can be done by installing the package:sysutils/bhyve-firmware[] port or package.

With the firmware in place, add the flag `-l bootrom,_/path/to/firmware_` to your bhyve command line.
The actual bhyve command may look like this:

[source,shell]
....
# bhyve -AHP -s 0:0,hostbridge -s 1:0,lpc \
-s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
-s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
guest
....

package:sysutils/bhyve-firmware[] also contains a CSM-enabled firmware image for booting guests with no UEFI support in legacy BIOS mode:

[source,shell]
....
# bhyve -AHP -s 0:0,hostbridge -s 1:0,lpc \
-s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
-s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI_CSM.fd \
guest
....

[[virtualization-bhyve-framebuffer]]
=== Graphical UEFI Framebuffer for bhyve Guests

The UEFI firmware support is particularly useful with predominantly graphical guest operating systems such as Microsoft Windows(R).

Support for the UEFI-GOP framebuffer may also be enabled with the `-s 29,fbuf,tcp=_0.0.0.0:5900_` flags.
The framebuffer resolution may be configured with `w=_800_` and `h=_600_`, and bhyve can be instructed to wait for a VNC connection before booting the guest by adding `wait`.
The framebuffer may be accessed from the host or over the network via the VNC protocol.
Additionally, `-s 30,xhci,tablet` can be added to achieve precise mouse cursor synchronization with the host.

The resulting bhyve command would look like this:

[source,shell]
....
# bhyve -AHP -s 0:0,hostbridge -s 31:0,lpc \
-s 2:0,virtio-net,tap1 -s 3:0,virtio-blk,./disk.img \
-s 4:0,ahci-cd,./install.iso -c 4 -m 1024M \
-s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600,wait \
-s 30,xhci,tablet \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
guest
....

Note that in BIOS emulation mode, the framebuffer will cease receiving updates once control is passed from the firmware to the guest operating system.
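The framebuffer can then be reached with any VNC client. As a sketch, assuming the package:net/tigervnc-viewer[] package is installed on the client machine and the host is reachable at _hostip_, connecting to display `0` (TCP port 5900, as configured above) may look like:

[source,shell]
....
% vncviewer hostip:0
....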

[[virtualization-bhyve-zfs]]
=== Using ZFS with bhyve Guests

If ZFS is available on the host machine, using ZFS volumes instead of disk image files can provide significant performance benefits for the guest VMs.
A ZFS volume can be created with:

[source,shell]
....
# zfs create -V16G -o volmode=dev zroot/linuxdisk0
....

When starting the VM, specify the ZFS volume as the disk drive:

[source,shell]
....
# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,/dev/zvol/zroot/linuxdisk0 \
    -l com1,stdio -c 4 -m 1024M linuxguest
....

[[virtualization-bhyve-nmdm]]
=== Virtual Machine Consoles

It is advantageous to wrap the bhyve console in a session management tool such as package:sysutils/tmux[] or package:sysutils/screen[] in order to detach and reattach to the console.
It is also possible to make the console of bhyve a null modem device that can be accessed with `cu`.
To do this, load the [.filename]#nmdm# kernel module and replace `-l com1,stdio` with `-l com1,/dev/nmdm0A`.
The [.filename]#/dev/nmdm# devices are created automatically as needed, where each is a pair, corresponding to the two ends of the null modem cable ([.filename]#/dev/nmdm0A# and [.filename]#/dev/nmdm0B#).
See man:nmdm[4] for more information.

[source,shell]
....
# kldload nmdm
# bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,./linux.img \
    -l com1,/dev/nmdm0A -c 4 -m 1024M linuxguest
# cu -l /dev/nmdm0B
Connected

Ubuntu 13.10 handbook ttyS0

handbook login:
....

[[virtualization-bhyve-managing]]
=== Managing Virtual Machines

A device node is created in [.filename]#/dev/vmm# for each virtual machine.
This allows the administrator to easily see a list of the running virtual machines:

[source,shell]
....
# ls -al /dev/vmm
total 1
dr-xr-xr-x   2 root  wheel    512 Mar 17 12:19 ./
dr-xr-xr-x  14 root  wheel    512 Mar 17 06:38 ../
crw-------   1 root  wheel  0x1a2 Mar 17 12:20 guestname
crw-------   1 root  wheel  0x19f Mar 17 12:19 linuxguest
crw-------   1 root  wheel  0x1a1 Mar 17 12:19 otherguest
....

A specified virtual machine can be destroyed using `bhyvectl`:

[source,shell]
....
# bhyvectl --destroy --vm=guestname
....
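Since every running guest appears as a device node under [.filename]#/dev/vmm#, all remaining instances can be destroyed at once with a small loop, for example when cleaning up after testing. This is a sketch; the guest names are derived from the device node names:

[source,shell]
....
# for vm in /dev/vmm/*; do bhyvectl --destroy --vm="${vm##*/}"; done
....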

[[virtualization-bhyve-onboot]]
=== Persistent Configuration

In order to configure the system to start bhyve guests at boot time, add the following settings to the indicated files:

[.procedure]
. [.filename]#/etc/sysctl.conf#
+
[.programlisting]
....
net.link.tap.up_on_open=1
....

. [.filename]#/etc/rc.conf#
+
[.programlisting]
....
cloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm igb0 addm tap0"
kld_list="nmdm vmm"
....
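These settings take effect at the next boot. To apply the equivalent configuration to a running system without rebooting, commands such as the following may be used (a sketch; replace `igb0` with the actual host interface, and skip `kldload` for modules that are already loaded):

[source,shell]
....
# sysctl net.link.tap.up_on_open=1
# kldload nmdm vmm
# ifconfig bridge0 create
# ifconfig tap0 create
# ifconfig bridge0 addm igb0 addm tap0 up
....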

[[virtualization-host-xen]]
== FreeBSD as a Xen(TM)-Host

Xen(TM) is a GPLv2-licensed https://en.wikipedia.org/wiki/Hypervisor#Classification[type 1 hypervisor] for Intel(R) and ARM(R) architectures.
FreeBSD has included i386(TM) and AMD(R) 64-Bit https://wiki.xenproject.org/wiki/DomU[DomU] and https://en.wikipedia.org/wiki/Amazon_Elastic_Compute_Cloud[Amazon EC2] unprivileged domain (virtual machine) support since FreeBSD 8.0 and includes Dom0 control domain (host) support in FreeBSD 11.0.
Support for para-virtualized (PV) domains has been removed from FreeBSD 11 in favor of hardware virtualized (HVM) domains, which provide better performance.

Xen(TM) is a bare-metal hypervisor, which means that it is the first program loaded after the BIOS.
A special privileged guest called the Domain-0 (`Dom0` for short) is then started.
The Dom0 uses its special privileges to directly access the underlying physical hardware, making it a high-performance solution.
It is able to access the disk controllers and network adapters directly.
The Xen(TM) management tools to manage and control the Xen(TM) hypervisor are also used by the Dom0 to create, list, and destroy VMs.
Dom0 provides virtual disks and networking for unprivileged domains, often called `DomU`.
Xen(TM) Dom0 can be compared to the service console of other hypervisor solutions, while the DomU is where individual guest VMs are run.

Xen(TM) can migrate VMs between different Xen(TM) servers.
When the two Xen(TM) hosts share the same underlying storage, the migration can be done without having to shut the VM down first.
Instead, the migration is performed live while the DomU is running, with no need to restart it or plan downtime.
This is useful in maintenance scenarios or upgrade windows to ensure that the services provided by the DomU remain available.
Many more features of Xen(TM) are listed on the https://wiki.xenproject.org/wiki/Category:Overview[Xen Wiki Overview page].
Note that not all features are supported on FreeBSD yet.
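As a sketch, migrating a running DomU named `freebsd` to a second Dom0 host (here the hypothetical hostname `xenhost2`) uses the `xl migrate` subcommand. Both hosts must share the DomU's storage and be able to reach each other, typically over SSH:

[source,shell]
....
# xl migrate freebsd xenhost2
....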

[[virtualization-host-xen-requirements]]
=== Hardware Requirements for Xen(TM) Dom0

To run the Xen(TM) hypervisor on a host, certain hardware functionality is required.
Hardware virtualized domains require Extended Page Table (http://en.wikipedia.org/wiki/Extended_Page_Table[EPT]) and Input/Output Memory Management Unit (http://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware[IOMMU]) support in the host processor.

[NOTE]
====
In order to run a FreeBSD Xen(TM) Dom0, the host must be booted using legacy boot (BIOS).
====
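Whether the processor advertises the required features can be verified from a running FreeBSD system. As a rough check on Intel(R) hardware, EPT support is reported on the `VT-x` feature line of the boot messages; a line mentioning `EPT` should appear (exact output varies by CPU):

[source,shell]
....
# grep VT-x /var/run/dmesg.boot
....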

[[virtualization-host-xen-dom0-setup]]
=== Xen(TM) Dom0 Control Domain Setup

Users of FreeBSD 11 should install the package:emulators/xen-kernel47[] and package:sysutils/xen-tools47[] packages, which are based on Xen(TM) version 4.7. Systems running FreeBSD 12.0 or newer can use Xen(TM) 4.11, provided by package:emulators/xen-kernel411[] and package:sysutils/xen-tools411[].

After the Xen(TM) packages are installed, configuration files must be edited to prepare the host for the Dom0 integration.
An entry in [.filename]#/etc/sysctl.conf# disables the limit on how many pages of memory are allowed to be wired.
Otherwise, DomU VMs with higher memory requirements will not run.

[source,shell]
....
# echo 'vm.max_wired=-1' >> /etc/sysctl.conf
....

Another memory-related setting involves changing [.filename]#/etc/login.conf#, setting the `memorylocked` option to `unlimited`.
Otherwise, creating DomU domains may fail with `Cannot allocate memory` errors.
After making the change to [.filename]#/etc/login.conf#, run `cap_mkdb` to update the capability database.
See crossref:security[security-resourcelimits,"Resource Limits"] for details.

[source,shell]
....
# sed -i '' -e 's/memorylocked=64K/memorylocked=unlimited/' /etc/login.conf
# cap_mkdb /etc/login.conf
....

Add an entry for the Xen(TM) console to [.filename]#/etc/ttys#:

[source,shell]
....
# echo 'xc0     "/usr/libexec/getty Pc"         xterm   onifconsole  secure' >> /etc/ttys
....

Selecting a Xen(TM) kernel in [.filename]#/boot/loader.conf# activates the Dom0.
Xen(TM) also requires resources like CPU and memory from the host machine for itself and other DomU domains.
How much CPU and memory depends on the individual requirements and hardware capabilities.
In this example, 8 GB of memory and 4 virtual CPUs are made available for the Dom0.
The serial console is also activated and logging options are defined.

The following command is used for Xen 4.7 packages:

[source,shell]
....
# echo 'hw.pci.mcfg=0' >> /boot/loader.conf
# echo 'if_tap_load="YES"' >> /boot/loader.conf
# echo 'xen_kernel="/boot/xen"' >> /boot/loader.conf
# echo 'xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 dom0pvh=1 console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all"' >> /boot/loader.conf
....

For Xen versions 4.11 and higher, the following command should be used instead:

[source,shell]
....
# echo 'if_tap_load="YES"' >> /boot/loader.conf
# echo 'xen_kernel="/boot/xen"' >> /boot/loader.conf
# echo 'xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 dom0=pvh console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all"' >> /boot/loader.conf
....

[TIP]
====
Log files that Xen(TM) creates for the DomU VMs are stored in [.filename]#/var/log/xen#.
Please be sure to check the contents of that directory if experiencing issues.
====

Activate the xencommons service during system startup:

[source,shell]
....
# sysrc xencommons_enable=yes
....

These settings are enough to start a Dom0-enabled system.
However, it lacks network functionality for the DomU machines.
To fix that, define a bridged interface with the main NIC of the system, which the DomU VMs can use to connect to the network.
Replace _em0_ with the host network interface name.

[source,shell]
....
# sysrc cloned_interfaces="bridge0"
# sysrc ifconfig_bridge0="addm em0 SYNCDHCP"
# sysrc ifconfig_em0="up"
....

Restart the host to load the Xen(TM) kernel and start the Dom0.

[source,shell]
....
# reboot
....

After successfully booting the Xen(TM) kernel and logging into the system again, the Xen(TM) management tool `xl` is used to show information about the domains.

[source,shell]
....
# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  8192     4     r-----     962.0
....

The output confirms that the Dom0 (called `Domain-0`) has the ID `0` and is running.
It also has the memory and virtual CPUs that were defined in [.filename]#/boot/loader.conf# earlier.
More information can be found in the https://www.xenproject.org/help/documentation.html[Xen(TM) Documentation].
DomU guest VMs can now be created.

[[virtualization-host-xen-domu-setup]]
=== Xen(TM) DomU Guest VM Configuration

Unprivileged domains consist of a configuration file and virtual or physical hard disks.
Virtual disk storage for the DomU can be files created by man:truncate[1] or ZFS volumes as described in crossref:zfs[zfs-zfs-volume,“Creating and Destroying Volumes”].
In this example, a 20 GB volume is used.
A VM is created with the ZFS volume, a FreeBSD ISO image, 1 GB of RAM and two virtual CPUs.
The ISO installation file is retrieved with man:fetch[1] and saved locally in a file called [.filename]#freebsd.iso#.

[source,shell]
....
# fetch https://download.freebsd.org/releases/ISO-IMAGES/13.1/FreeBSD-13.1-RELEASE-amd64-bootonly.iso -o freebsd.iso
....

A ZFS volume of 20 GB called [.filename]#xendisk0# is created to serve as the disk space for the VM.

[source,shell]
....
# zfs create -V20G -o volmode=dev zroot/xendisk0
....

The new DomU guest VM is defined in a file.
Some specific definitions like name, keymap, and VNC connection details are also defined.
The following [.filename]#freebsd.cfg# contains a minimum DomU configuration for this example:

[source,shell]
....
# cat freebsd.cfg
builder = "hvm" <.>
name = "freebsd" <.>
memory = 1024 <.>
vcpus = 2 <.>
vif = [ 'mac=00:16:3E:74:34:32,bridge=bridge0' ] <.>
disk = [
'/dev/zvol/zroot/xendisk0,raw,hda,rw', <.>
'/root/freebsd.iso,raw,hdc:cdrom,r' <.>
  ]
vnc = 1 <.>
vnclisten = "0.0.0.0"
serial = "pty"
usbdevice = "tablet"
....

These lines are explained in more detail:

<.> This defines what kind of virtualization to use. `hvm` refers to hardware-assisted virtualization or hardware virtual machine. Guest operating systems can run unmodified on CPUs with virtualization extensions, providing nearly the same performance as running on physical hardware. `generic` is the default value and creates a PV domain.
<.> Name of this virtual machine to distinguish it from others running on the same Dom0. Required.
<.> Quantity of RAM in megabytes to make available to the VM. This amount is subtracted from the hypervisor's total available memory, not the memory of the Dom0.
<.> Number of virtual CPUs available to the guest VM. For best performance, do not create guests with more virtual CPUs than the number of physical CPUs on the host.
<.> Virtual network adapter. This is the bridge connected to the network interface of the host. The `mac` parameter is the MAC address set on the virtual network interface. This parameter is optional, if no MAC is provided Xen(TM) will generate a random one.
<.> Full path to the disk, file, or ZFS volume of the disk storage for this VM. Options and multiple disk definitions are separated by commas.
<.> Defines the Boot medium from which the initial operating system is installed. In this example, it is the ISO image downloaded earlier. Consult the Xen(TM) documentation for other kinds of devices and options to set.
<.> Options controlling VNC connectivity to the serial console of the DomU. In order, these are: active VNC support, define IP address on which to listen, device node for the serial console, and the input method for precise positioning of the mouse and other input methods. `keymap` defines which keymap to use, and is `english` by default.

After the file has been created with all the necessary options, the DomU is created by passing it to `xl create` as a parameter.

[source,shell]
....
# xl create freebsd.cfg
....

[NOTE]
====
Each time the Dom0 is restarted, the configuration file must be passed to `xl create` again to re-create the DomU.
By default, only the Dom0 is created after a reboot, not the individual VMs.
The VMs can continue where they left off because the operating system is stored on the virtual disk.
The virtual machine configuration can change over time (for example, when adding more memory).
The virtual machine configuration files must be properly backed up and kept available to be able to re-create the guest VM when needed.
====
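One simple way to re-create guests automatically after the Dom0 boots is to add the `xl create` invocations to [.filename]#/etc/rc.local#. This is a sketch and assumes the configuration file was kept at [.filename]#/root/freebsd.cfg#:

[.programlisting]
....
/usr/local/sbin/xl create /root/freebsd.cfg
....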

The output of `xl list` confirms that the DomU has been created.

[source,shell]
....
# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  8192     4     r-----  1653.4
freebsd                                      1  1024     1     -b----   663.9
....

To begin the installation of the base operating system, start the VNC client, directing it to the main network address of the host or to the IP address defined on the `vnclisten` line of [.filename]#freebsd.cfg#.
After the operating system has been installed, shut down the DomU and disconnect the VNC viewer.
Edit [.filename]#freebsd.cfg#, removing the line with the `cdrom` definition or commenting it out by inserting a `+#+` character at the beginning of the line.
To load this new configuration, it is necessary to remove the old DomU with `xl destroy`, passing either the name or the id as the parameter.
Afterwards, recreate it using the modified [.filename]#freebsd.cfg#.

[source,shell]
....
# xl destroy freebsd
# xl create freebsd.cfg
....

The machine can then be accessed again using the VNC viewer.
This time, it will boot from the virtual disk where the operating system has been installed and can be used as a virtual machine.

[[virtualization-host-xen-troubleshooting]]
=== Troubleshooting

This section contains basic information in order to help troubleshoot issues found when using FreeBSD as a Xen(TM) host or guest.

[[virtualization-host-xen-troubleshooting-host]]
==== Host Boot Troubleshooting

Please note that the following troubleshooting tips are intended for Xen(TM) 4.11 or newer.
If you are still using Xen(TM) 4.7 and having issues, consider migrating to a newer version of Xen(TM).

In order to troubleshoot host boot issues you will likely need a serial cable, or a debug USB cable.
Verbose Xen(TM) boot output can be obtained by adding options to the `xen_cmdline` option found in [.filename]#loader.conf#.
A couple of relevant debug options are:

* `iommu=debug`: can be used to print additional diagnostic information about the IOMMU.
* `dom0=verbose`: can be used to print additional diagnostic information about the dom0 build process.
* `sync_console`: flag to force synchronous console output. Useful for debugging to avoid losing messages due to rate limiting. Never use this option in production environments since it can allow malicious guests to perform DoS attacks against Xen(TM) using the console.
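For example, the debug options can be appended to the `xen_cmdline` entry in [.filename]#/boot/loader.conf# from the earlier Dom0 setup. This is a sketch; remember that `sync_console` is for debugging only and must not be used in production:

[.programlisting]
....
xen_cmdline="dom0_mem=8192M dom0_max_vcpus=4 dom0=pvh,verbose console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all iommu=debug sync_console"
....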

FreeBSD should also be booted in verbose mode in order to identify any issues.
To activate verbose booting, run this command:

[source,shell]
....
# echo 'boot_verbose="YES"' >> /boot/loader.conf
....

If none of these options helps solve the problem, please send the serial boot log to mailto:freebsd-xen@FreeBSD.org[freebsd-xen@FreeBSD.org] and mailto:xen-devel@lists.xenproject.org[xen-devel@lists.xenproject.org] for further analysis.

[[virtualization-host-xen-troubleshooting-guest]]
==== Guest Creation Troubleshooting

Issues can also arise when creating guests.
The following tips attempt to help those trying to diagnose guest creation failures.

The most common cause of guest creation failures is the `xl` command printing an error and exiting with a return code other than 0.
If the error provided is not enough to help identify the issue, more verbose output can be obtained from `xl` by using the `-v` option repeatedly.

[source,shell]
....
# xl -vvv create freebsd.cfg
Parsing config from freebsd.cfg
libxl: debug: libxl_create.c:1693:do_domain_create: Domain 0:ao 0x800d750a0: create: how=0x0 callback=0x0 poller=0x800d6f0f0
libxl: debug: libxl_device.c:397:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:432:libxl__device_disk_set_backend: Disk vdev=xvda, using backend phy
libxl: debug: libxl_create.c:1018:initiate_domain_create: Domain 1:running bootloader
libxl: debug: libxl_bootloader.c:328:libxl__bootloader_run: Domain 1:not a PV/PVH domain, skipping bootloader
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x800d96b98: deregister unregistered
domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/local/lib/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap    : 326 kB
libxl: debug: libxl_dom.c:988:libxl__load_hvm_firmware_module: Loading BIOS: /usr/local/share/seabios/bios.bin
...
....

If the verbose output does not help diagnose the issue, there are also QEMU and Xen(TM) toolstack logs in [.filename]#/var/log/xen#.
Note that the name of the domain is appended to the log name, so if the domain is named `freebsd` you should find a [.filename]#/var/log/xen/xl-freebsd.log# and likely a [.filename]#/var/log/xen/qemu-dm-freebsd.log#.
Both log files can contain useful information for debugging.
If none of this helps solve the issue, please send the description of the issue you are facing and as much information as possible to mailto:freebsd-xen@FreeBSD.org[freebsd-xen@FreeBSD.org] and mailto:xen-devel@lists.xenproject.org[xen-devel@lists.xenproject.org] in order to get help.