VMware
VMware Hosting Products
VMware offers a variety of hosting products. Our experience has been primarily with Server version 2, which runs on a variety of operating systems. We have primarily used CentOS version 5 as the host OS.
We expect that ESXi, which runs on bare hardware (no other OS required), would perform at least as well as Server does on CentOS, and probably a bit better, especially on more recent hardware that supports virtualization technologies.
Our understanding is that VMware Server and ESXi are much the same, with the exception of the underlying kernel and OS. As such, many of the performance tuning considerations, especially of the guests, are the same in both environments, and also have a good deal of commonality with VMware desktop products (Workstation, Player, Fusion).
Please refer to VMware's Server FAQs and ESXi FAQs for more details.
Our approach here is to treat everything as being compatible with current VMware hosting products, and to specify any differences where we're aware of them. While there will continue to be various versions of VMware hosting products, it's difficult if not impossible for us to take into account every version of every product. In other words, please understand that YMMV depending on your environment. If you experience something different than what's contained here, please update this page with your experience.
If you want to run QmailToaster on VMware ESXi using the pre-built images, you can follow these steps:
- You need a computer running something like ESXi (free) or VirtualBox (free)
- In ESXi, choose File, Deploy OVF Template.
- Select the OVF file you are using, then click Next through the remaining screens.
- Name your VM, e.g. Qmail
- Select the destination drive.
- Select thin provisioning (you may not have this setting)
- Leave VM Network at its default
- Finish deployment of the OVF
When deployed:
- Select your new VM in the ESXi menu and edit any settings to suit (RAM, CPU, etc.)
- Power on the VM, open the console, log in, and use the qmail server.
- Or log in to http://youripaddress/qcontrol
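If you prefer the command line, VMware's ovftool can also deploy the OVF. A minimal sketch, assuming the OVF is named QmailToaster.ovf, the datastore is datastore1, and the ESXi host is reachable as esxi-host (adjust all three for your environment):
ovftool --datastore=datastore1 --diskMode=thin QmailToaster.ovf vi://root@esxi-host/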
If you grabbed the VM from my site http://techyguru.com, you should also grab the PDF; it has all the passwords, etc.
If you still need the VMDK format (VMware Workstation & Player), I can make it available as well. Please contact Madmac <sysadmin at tricubemedia dot com>
VMware Tuning
We think it's important to understand the reasons why various tuning options have been chosen, and why they affect performance. That's why we've provided an explanation here of the various settings we recommend (and use). However, you may prefer to cut to the chase.
Host
CPU
Memory
Storage
1. The disk system is probably the largest bottleneck in any system. A good disk system is even more critical in a virtual system. There are several points to keep in mind when setting up a reliable disk system.
Drive types
- The best disks to use are fast SCSI but they are expensive!
- The most common today are the SATA family. Pay attention to faster spindle speeds. Right now there are 5400, 7200 and 10000 RPM drives. Get the fastest you can afford. The 10k are pretty pricey at the moment. If you use any sort of RAID, whatever you do, stay consistent! Don't mix speeds in a RAID array, your performance will be that of the lowest performing drive.
- Some people think they will get 2x the performance from a SATA3 drive that they do from SATA2. Not so, sorry... For the most part, the speed difference is in getting the data from the buffer to the machine. The speed of getting the data from the platter to the buffer is only slightly better. That being said, you would still be better off with SATA3.
- At this point it's safe to say, stay away from IDE drives. They were great for their time, but that time has passed.
2. RAID Like I mentioned earlier, disks are cheap. Running any server without some form of disk redundancy is just asking for trouble. There are many sources of information about RAID. There could be whole wikis dedicated to RAID performance, setup, recovery, etc. I'm going to concentrate on the two most common, RAID-1 and RAID-5, and keep it basic.
- The easiest and best performing is RAID-1 (mirroring). This takes 2 drives and writes the data to both drives. It can read from either drive. If you didn't see it coming, this makes reads faster but writes are a bit slower. Your total capacity is the size of one drive. ie: 2-500G drives mirrored = 500G space.
- The most common is RAID-5. This is a great performer in very large environments because of the way it handles the error checking (parity) and data distribution (striping). On lower end and home built servers it won't give you any great performance. It's actually slower than RAID-1 in most cases. If you have a real hardware RAID card you can use RAID-5. For the most part, the cheap SATA RAID cards are really software based RAID and I don't recommend them for RAID5. Real SATA RAID cards are in the hundreds of dollars! For RAID-5 you need a minimum of 3 drives. Total capacity is the sum of all drives -1 drive. ie: 3-500G drives RAID-5 = 1000G.
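If you are building software RAID-1 on the host yourself, here is a minimal sketch with mdadm (the device names /dev/sdb and /dev/sdc are assumptions; use your own spare drives):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext3 /dev/md0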
3. Off host storage
- Getting the disks off your host can be a great benefit to a virtual system. It unloads the mundane RAID tasks onto a separate system all together so your host can concentrate on the guests and not have to worry about the disk housekeeping. It also give you the benefit of running any virtual guest from any attached host. This is great if you need to take a host down for maintenance.
CAUTION: DO NOT try to run a guest from 2 different hosts at the same time!!! Go read that last line AGAIN! You WILL corrupt your guest beyond reasonable repair (no maybe about it!) You MUST shut down the guest on one host before starting up on another. Also, pay attention to the autostarts so this can't happen accidentally.
- SAN (Storage Area Network) To make this simple, if you have a SAN, use it. Also, if you have one then you probably don't need an explanation. This is a very high end, high performance system all its own.
- NAS (Network Attached Storage) This is SAN's little brother, but SAN had better watch out, he's growing up! These include NFS and iSCSI. If you have the money, it's not too horribly expensive and there are several external storage options available. Make sure it's a NAS unit that can do NFS or iSCSI, not just a network drive.
4. Recommended settings
- Most importantly, DO NOT store your virtual machines on the same drive as your host OS. The host disk will be busy with its own housekeeping, swapping, etc. Put the VMs on separate disks, preferably in RAID.
- /etc/fstab
tmpfs /opt/tmp tmpfs size=3G 0 0
- Used in conjunction with tmpDirectory = "/opt/tmp" in the /etc/vmware/config, this sets up the temporary file system for the memory mapped file created for each guest. This will put those files into memory. The size=3G is the amount of memory you want to allow the VM system to use for the temp files, the more the better. It can be expressed as a size or percentage. Some recommend the setting size=100%. If you browse the folder, you won't see the files as they are hidden and even showing hidden won't show them. There will be a couple small files in there so that will tell you it's working.
noatime,data=writeback,noacl,nodiratime
- There are some things Linux looks at when it accesses files or folders that we don't care about for the drives holding the virtual machines. Why waste time on what we don't care about?
- noatime - Tells the host not to bother changing or checking the last-access-time attribute of files.
- noacl - Simply put, bypasses asking the server for permission to use the file and looks directly at the file permissions.
- nodiratime - Same as noatime but for directories. (not needed if you use noatime)
- data=writeback This needs to be researched more.
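Putting it together, a sample /etc/fstab for a host that keeps its VMs on a dedicated partition might look like this (the device /dev/sdb1 and mount point /vmstore are assumptions; adjust for your layout):
/dev/sdb1  /vmstore  ext3   noatime,nodiratime,noacl,data=writeback  1 2
tmpfs      /opt/tmp  tmpfs  size=3G                                  0 0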
Network
VMware Server Config
The config file for the host is stored in /etc/vmware/config. This is a simple text file with all of the parameters for the VMware Server host.
There are MANY settings in the config file. We are only presenting a few of our favorite items. PLEASE do not empty your config file and replace it with ours as ours is only a subset of the whole file. Also, backup your original config file before playing in here. To make any of these settings take effect, you must restart the vmware server service or reboot the host.
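On a CentOS host that usually means something like this (assuming the standard VMware Server init script):
/sbin/service vmware restart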
- /etc/vmware/config
- mainMem.partialLazyRestore = "TRUE"
- - If you use snapshots, this will allow them to be restored in the background without having to shut down the guest.
- mainMem.partialLazySave = "TRUE"
- - Allows snapshots to be created in the background without shutting down the guest.
- mainMem.useNamedFile = "FALSE"
- - VMware server creates a .vmem file and without this setting, will put it in the same folder as your guest. This setting, in conjunction with the tmpDirectory = "/opt/tmp" will place the files into the /opt/tmp folder which is in memory.
- MemTrimRate=0
- - NOT GLOBAL
- prefvmx.minVmMemPct = "100"
- - Keeps all guest memory in RAM. You can select a lower % but it can cause swapping.
- prefvmx.useRecommendedLockedMemSize = "TRUE"
- - Windows only???
- sched.mem.pshare.enable = "FALSE"
- - Windows only???
- tmpDirectory = "/opt/tmp"
- - Puts all VM temp files here instead of in the same folder as the guest. This folder needs to be created as tmpfs in /etc/fstab before you can use it.
Guest
CPU
SMP
Unless you need the resources that additional threading provides, you do not need to enable more than one CPU in the guest. Running more than one CPU in a guest brings in overhead that isn't needed and can actually slow the machine down. I know it's hard, but in the virtual world you really have to change the "bigger is better" thought process. Think of it this way: your host is a quad core CPU running 1.2 GHz per core. That's 4.8 GHz of processing power at VMware's disposal. VMware does a very good job of balancing that load among the guests. If you only have 1 guest, it has the whole processor (minus the small host overhead) to use for the guest. Do you really think giving the guest 2 CPUs will do any better? Like I said, if you have an application that uses multiple CPU threads then you MIGHT see an improvement, but we're talking about QMT here.
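For reference, the vCPU count is just a setting in the guest's .vmx file; for a QMT guest a single virtual CPU is usually plenty:
numvcpus = "1"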
Timekeeping
The CPU is perhaps the most complicated part of the virtual system. In that, the CPU clock gets top billing for complexity. When you think about it, the OS is used to talking to a chip that is happily ticking away at a rate controlled by hardware. Nothing will change that rate of tick. In the virtual system, that "chip" is now a layer of virtualization or simply, another program. If your unvirtualized OS gets bogged down with a heavy load, the real CPU is still ticking away at the same rate but the programs run slower. If you bog down the host of a virtual system, the guest's virtual CPU (which is software now) is subjected to the same bog, so the virtual CPU ticks slower. The virtualization layer usually takes care of this by adding extra ticks or using some other compensation method. If your guest kernel can deal with this, you're OK. If it can't, the most obvious symptom will be the guest clock drifting, sometimes minutes or hours up or down. It can actually get so bad that the guest can cause the host to bog down and lock up just because the host and guest kernels are not cooperating very well. Sounds like real life at times.
There are more than a handful of kernels available and the best setting depends on which kernel you're using. Refer to the VMware Knowledge Base - Timekeeping best practices for Linux guests page for the setting(s) you should use. The setting you use is added to the "kernel" line in your /boot/grub/grub.conf file. Of course you'll need to reboot the guest for the setting to take effect.
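For example, for a 32-bit CentOS 5 guest the KB article suggests parameters along the lines of divider=10 clocksource=acpi_pm, so the kernel line ends up looking something like this (the kernel version and root device shown are assumptions; check the KB for your exact kernel):
kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00 divider=10 clocksource=acpi_pm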
Also, when you subsequently update your kernel, it's a good idea before rebooting to check to be sure that the update retained your kernel settings on the new kernel line in the /boot/grub/grub.conf file.
In addition to adjusting kernel settings for timekeeping, it is recommended to always use ntp as well to be sure the clock stays as accurate as possible. Refer to the link above or the Nutshell section below for recommended ntp guest configuration settings.
When using ntp in a guest, you should disable VMware Tools periodic timekeeping as well. Details are in the link above.
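One common way to do that (the KB article covers the full procedure) is to make sure periodic sync is turned off in the guest's .vmx file:
tools.syncTime = "FALSE"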
Memory
There is a slight additional overhead on the virtual machine monitor (VMM) for 32-bit guests with more than 896MB of memory. Allocate no more than 896MB of memory for your QMT 32-bit guest for best performance. It's not clear at this point whether or not this overhead is diminished by hardware assisted virtualization.
I don't recall where that came from. In the real world, a 32-bit guest running Server v2.0.0 and VMware-Tools hit a ceiling of a little over 640MB, above which cpu utilization went up drastically, like by a factor of 10. Overall performance was pretty much unusable ('top' alone took 10% of the cpu). I'll be continuing to attempt to diagnose this problem. I'm guessing that it is in the Memory Management component of VMware-Tools, because a similar guest with no Tools did not experience this problem.
Here's what the Best Practices for VMware vSphere 4.0 has to say:
For best performance, consider the following regarding enabling VMI support:
- If running 32-bit Linux guest operating systems that include kernel support for VMI on hardware that does not support hardware-assisted MMU virtualization (EPT or RVI), enabling VMI will improve performance.
- VMI-enabled virtual machines always use Binary Translation (BT) and shadow page tables, even on systems that support hardware-assisted MMU virtualization. Because hardware-assisted MMU virtualization almost always provides more of a performance increase than VMI, we recommend disabling VMI for virtual machines running on hardware that supports hardware-assisted MMU virtualization. (No kernel change is required in the guest, as the VMI kernel can run in a non-VMI enabled virtual machine.)
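Whichever figure you settle on, the guest's memory allocation is just the memsize parameter (in MB) in its .vmx file; for example, to stay at the 896 MB figure mentioned above:
memsize = "896"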
Storage
Raw Device Mapping
Raw devices are useful in some situations. They do not necessarily provide a performance boost over VMFS filesystems according to some documentation I've read (I wish I could remember where), although they appear to perform significantly better in some situations. I'll write more on this as I obtain hard evidence.
Raw device mapping exists in Server v1 (and ESX 2.5+ I think). Server v2 can run Raw Devices, but the ability to create/modify them is not yet in the Server v2 gui interface, as of v2.0.2. Rumor has it that this omission was simply due to developer workload, so there's reason to believe it will be included in the gui of some future version.
The following steps describe how to create a Raw Device Mapping in a guest machine using the CLI. The procedure is rewritten (copied largely) from Using Raw Disks with VMware Server 2, although we won't be using Server v1 to create anything for us. Why am I documenting this? Because there should be more than one place on the 'net that says how to do this, plus I don't want to forget.
- Create a working VM guest to which the Raw Disk will be connected.
- Install the Raw Disk in your VMware Server v2 host. Make a note of its device name. You should use its /dev/disk/by-id/* name instead of its /dev/sd? name, since the /dev/disk/by-id/* name will always be the same, whereas the /dev/sd? name can vary from one boot to another.
- Create the .vmdk file for the Raw Disk, in the guest's directory (see sample below as well).
-
version=1
| I'm not sure what this is the version of, but it's the same for Server v1 and v2, so just use it as is. -
CID=e8aa779f
| This is a random identifier for the drive. You can use the "apg -a1 -x8 -MN" command to generate an identifier, or simply make something up. Note, you may use numeric as well as alpha digits, but only ones that represent hexadecimal values (0-9,a-f) appear to be valid, at least as far as the first position is concerned. -
parentCID=ffffffff
| This indicates that the disk is the main disk, and is not a snapshot. -
createType="fullDevice"
| This indicates that this is a Raw Disk. -
RW 1953525168 FLAT "/dev/disk/by-id/scsi-SATA_WDC_WD1002FBYS-_WD-WMATV5034836" 0
| The extent description consists of (3 to) 5 positional parameters. I haven't found any documentation of what all the possible values here are, but Peter Jonsson has identified the 2nd parameter. The number here is Total Number of Sectors on the drive, which is computed as the Total Number of Disk Bytes / Sector Size (usually 512). These values can be obtained from the fdisk command. The 4th parameter is the device name of the raw disk on the host machine. Using the /dev/disk/by-id/* name ensures that the name will not change across reboots, and allows the disk to be referenced using the same name on the guest as on the host machine. -
ddb.adapterType = "lsilogic"
-
ddb.geometry.*
| These settings can be obtained from the fdisk command. The "bios" settings apparently use different values for a partitionedDevice (not what we're doing here), but for fullDevice they should have the same value as their corresponding geometry settings. -
ddb.virtualHWVersion = "4"
| This needs to be the same HWVersion as the virtual guest. I upgraded a guest from version 4 to version 7 using the WebGUI, and the guest's associated raw disks were upgraded as well. If you're using version 7, see below for additional parameters.
-
- with the VM guest powered off, add the raw disk parameters to the guest's .vmx file:
-
scsi1.present = "TRUE"
| We're using a separate virtual adapter. If you already have a scsi1 adapter, use the next available number. -
scsi1.virtualDev = "lsilogic"
| I don't know why this is needed here, but it defaults to buslogic if you don't have it. -
scsi1:0.present = "TRUE"
| Our disk is the first device on the scsi bus. -
scsi1:0.deviceType = "rawDisk"
| I presume this is self explanatory. -
scsi1:0.fileName = "MyRawDisk.vmdk"
| This is the name of the .vmdk file you used in the previous step.
-
- If you're using hardware version 7, you should have these in the guest's .vmx file as well:
-
scsi1.pciSlotNumber = "35"
| Be sure this pciSlotNumber is not used by another device. -
scsi1:0.redo = ""
| I don't know what this does, but it was added when I upgraded from version 4 to version 7.
-
- Power on your guest machine. You should see a new scsi adapter, but you will not see the hard drive listed in the web-ui, as the raw device is not (yet) supported in the web-ui. When you've logged into your guest machine, you should see the raw device as /dev/sdX according to how the scsi devices on your guest machine are arranged. You should also see the raw device listed in the /dev/disk/by-id/ directory.
Here is a sample .vmdk file:
# MyRawDisk.vmdk (WDC WD1002FBYS-02A6B0)
# Disk DescriptorFile
version=1
CID=e8aa779f
parentCID=ffffffff
createType="fullDevice"
# Extent description
RW 1953525168 FLAT "/dev/disk/by-id/scsi-SATA_WDC_WD1002FBYS-_WD-WMATV5034836" 0
# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "121601"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.geometry.biosCylinders = "121601"
ddb.geometry.biosHeads = "255"
ddb.geometry.biosSectors = "63"
ddb.virtualHWVersion = "4"
If you use virtual hardware version 7, you'll also have these lines:
ddb.virtualHWVersion = "7"
ddb.encoding = "UTF-8"
ddb.toolsVersion = "0"
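The sector count used in the extent line (1953525168 above) can be read straight from fdisk on the host; for example (output abbreviated and illustrative):
fdisk -lu /dev/disk/by-id/scsi-SATA_WDC_WD1002FBYS-_WD-WMATV5034836
Disk ...: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors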
Network
VMware Guest config
A few topics ago we showed you the /etc/vmware/config file, which controls the global settings for all guests and the host itself at the host level. There is also a file for each individual guest. You'll find that file in the same folder with the rest of the guest files. It will have the extension .vmx. Again, here we'll show you some of our favorite settings and a brief explanation of each. Don't forget to copy off your original in case you totally hose this one.
- autostop = "softpoweroff"
- - When you shut down the host, "softpoweroff" will tell the OS of the guest to shut down. "poweroff" yanks the power cord. Some OSes don't mind having the power removed abruptly; most don't care for it.
- mainMem.useNamedFile = "FALSE"
- - This is the same as the host config setting of the same name. It can be listed globally there or per guest in the .VMX.
- MemAllowAutoScaleDown = "FALSE"
- - ??
- memTrimRate = "0"
- - Memory Trim rate allows the host to reclaim memory that isn't being used by the guest. Setting this to 0 tells the host to not reallocate the memory.
- mem.ShareScanTotal = 0 GLOBAL??, EXPLAIN
- - Specifies the total system-wide rate at which memory should be scanned for transparent page sharing opportunities. The rate is specified as the number of pages to scan per second. Defaults to 200 pages/sec.
- mem.ShareScanVM = 0 GLOBAL??, EXPLAIN
- - Controls the rate at which the system scans memory to identify opportunities for sharing memory. Units are pages per second.
- mem.ShareScanThreshold = 4096
- -
- sched.mem.maxmemctl = 0
- -
- sched.mem.pshare.enable = "FALSE"
- -
- workingDir = "/opt/tmp"
- - Specifies where the swap file for the VM will go. This only takes effect if swapping cannot be avoided by any means. This shouldn't be needed in optimal situations.
In a Nutshell
This section wraps up everything above, and shows the various settings according to where they go.
Host
Kernel
- /boot/grub/grub.conf
- apm=power-off
- elevator=deadline
- nohz=off
- /etc/sysctl.conf (/proc/sys/vm):
- dev.rtc.max-user-freq = 1024
- vm.dirty_background_ratio = 5
- vm.dirty_expire_centisecs = 1000
- vm.dirty_ratio = 10
- vm.overcommit_memory = 1
- vm.swappiness = 0
- /etc/fstab
- tmpfs /opt/tmp tmpfs size=3G 0 0
- noatime,data=writeback,noacl,nodiratime
VMware
- /etc/vmware/config
- mainMem.partialLazyRestore = "TRUE" (i386 only?)
- mainMem.partialLazySave = "TRUE" (i386 only?)
- mainMem.useNamedFile = "FALSE"
- MemAllowAutoScaleDown = "FALSE"
- MemTrimRate = "0"
- prefvmx.minVmMemPct = "100"
- prefvmx.useRecommendedLockedMemSize = "TRUE"
- sched.mem.pshare.enable = "FALSE"
- tmpDirectory = "/opt/tmp"
Notes: You can use # to start a comment line to document your work. Also, there is no particular order to any lines in the /etc/vmware/config or .vmx files.
Guest
Kernel
- /boot/grub/grub.conf
- elevator=noop
- noapic
- nolapic
- nosmp (single cpu only)
- timing setting(s) (varies by kernel)
- /etc/sysctl.conf (/proc/sys/vm):
- vm.swappiness = 0
VMware
- *.vmx
- autostop = "softpoweroff"
- mainMem.useNamedFile = "FALSE"
- MemAllowAutoScaleDown = "FALSE"
- memTrimRate = "0"
- mem.ShareScanThreshold = 4096
- mem.ShareScanTotal = 0
- mem.ShareScanVM = 0
- sched.mem.maxmemctl = 0
- sched.mem.pshare.enable = "FALSE"
- workingDir = "/opt/tmp"
Notes: You can use # to start a comment line to document your work. Also, there is no particular order to any lines in the /etc/vmware/config or .VMX files.
ntp
- /etc/ntp.conf
- tinker panic 0
- restrict 127.0.0.1
- restrict default kod nomodify notrap
- server 0.vmware.pool.ntp.org
- server 1.vmware.pool.ntp.org
- server 2.vmware.pool.ntp.org
- driftfile /var/lib/ntp/drift
- # server 127.127.1.0
- # fudge 127.127.1.0 stratum 10
Note: The directive tinker panic 0 must be at the top of the ntp.conf file.
Note: the last 2 directives should be commented out if they exist.
- /etc/ntp/step-tickers
- 0.vmware.pool.ntp.org
- 1.vmware.pool.ntp.org
Security
VMware security: Protecting a VMware environment (registration required) contains a nice overview of VMware security. It recommends the following settings be used in the guest vmx file to harden security:
- isolation.tools.connectable.disable = "TRUE"
- isolation.tools.copy.enable = "FALSE"
- isolation.tools.diskshrink.disable = "TRUE"
- isolation.tools.diskwiper.disable = "TRUE"
- isolation.tools.log.disable = "TRUE"
- isolation.tools.paste.enable = "FALSE"
- isolation.tools.setInfo.disable = "TRUE"
- isolation.tools.setguioptions.enable = "FALSE"
- log.rotatesize = 100000
- log.keepold = 10
- tools.setinfo.sizeLimit = 1048576
Of course, if you don't run VM Tools (which is entirely feasible with QMT), the tools related parameters are ineffective.
Snapshots
One of the greatest assets of virtual machines is the ability to take snapshots. The problem is that most tuning sites tell you not to run with snapshots, but they don't explain what they mean. Therefore, people tend to shy away from them, thinking there might be some detriment to their server. The advice not to run with snapshots is true, but that doesn't mean you shouldn't use them at all. I'll explain why.
A snapshot is exactly what it sounds like. It's a picture of your machine at that time.
The way snapshots work is that when you take a snap, VMware stops writing to the VMDK and starts writing all changes to a *snapshot.vmdk file. That is, ALL changes get written forward with NO deletes. What I mean is that if you are running a snap and copy in a 2 MB file, the snapshot file grows by 2 MB. Now, if you delete that file, the snapshot file DOES NOT shrink by 2 MB, it merely marks the file deleted. It continues to use the original VMDK but only for reads. Now you see where the rub is, that file GROWS!! The sooner you get out of the snapshot the better.
There are a few terms related to snapshots that can be confusing.
- Take Snapshot - This one is easy, it creates a snapshot of the current machine
- Remove Snapshot - This rolls all changes into the original VMDK and deletes the snapshot file.
- Revert to Snapshot - This sends the machine back to the original state it was at when you took the snapshot and deletes the snapshot file.
You don't run with a snapshot in the background. However, before you do any updates, you take a snapshot then perform your updates. If the updates run ok then you remove the snapshot and you're back to normal. If the updates blowup, you revert the snapshot and you're back to the way you were before the updates.
So now you see, using snapshots before installing service packs, OS updates, QMT updates, etc. is a good idea.
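If you want to script that take/remove cycle, VMware Server 2 ships a vmrun utility; a rough sketch (the host URL, credentials, and datastore path are assumptions):
vmrun -T server -h https://localhost:8333/sdk -u root -p password snapshot "[standard] qmt/qmt.vmx" pre-update
vmrun -T server -h https://localhost:8333/sdk -u root -p password deleteSnapshot "[standard] qmt/qmt.vmx" pre-update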
Now the cautions:
- If you are using a snapshot for mail server updates, shut down the mail services. If you don't and have to revert, you'll have some mail that will be sent through the time machine and others that will get lost. This could be true of a lot of services like databases, web sites, etc. Just take a minute to think before you take the snap.
- If you are running a Windows domain controller in a VM, DO NOT snapshot the DC. If you have to revert, it will be all out of sync with the other DCs and you can seriously corrupt your AD. It is NOT the same as a tape restore, trust me, I have the T-shirt, empty coffee cups and the documentation from MS.
Backups
Any discussion of backups regarding a QMT install would be remiss without including mention of the QMail Toaster Plus backup and restore utilities. If all you need to backup is the mail, configs and database of QMT, then this is your answer. It will require less storage space and with a small amount of work, you can even use it to recover a specific file or files.
If your goal is to backup the entire virtual machine, then the best way is to leverage the snapshot system. The critical files we need to back up are the VMDKs and the VMX. The rest are only logs and temporary files. If you read the section on snapshots above, then you saw that when you snap a machine, it puts the VMDK files in 'read only' mode. That's what we're depending on here, because you can't copy them while they are in read/write. This script will take a snapshot of your machine, then copy the VMDKs and the VMX files to the backup location. While this is happening, your machine is happily running with all file activity going to a special snapshot file. When the copy is done, it then removes the snapshot and compresses the copy. You don't even need to shut the guest down! You can schedule this with a cron job and even set the retention time (how many backups to keep). The drawback is that if you need to recover only a file or two, you would have to mount the VMDK on another machine and retrieve the files. It's not hard when you understand it, but the first time can be unnerving.
This backup script isn't a QMT piece and isn't maintained by the QMT team. Rather than copying it here, I'll link to the page which includes the instructions for it here.
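The core idea boils down to a few commands. Here's a minimal sketch, not the linked script (the paths, VM name, and vmrun details are assumptions):
# take a snapshot so the base VMDK goes read-only
vmrun -T server -h https://localhost:8333/sdk -u root -p password snapshot "[standard] qmt/qmt.vmx" backup
# copy the base disk and the config (add the qmt-s0??.vmdk pieces if the disk is split)
cp /vmstore/qmt/qmt.vmdk /vmstore/qmt/qmt.vmx /backup/
# fold the changes back into the running guest, then compress the copy
vmrun -T server -h https://localhost:8333/sdk -u root -p password deleteSnapshot "[standard] qmt/qmt.vmx" backup
tar czf /backup/qmt-$(date +%Y%m%d).tar.gz -C /backup qmt.vmdk qmt.vmx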
References
- sanbarrow.com's VMX-file Parameters pages are an indispensable reference for editing VMX file parameters.
- fewt@blog:~$ - Performance Tuning VMWare Server on Ubuntu Linux
- VMFAQ.com Knowledgebase - I need more performance out of my VMware environment
- VMware Book - Performance Best Practices for VMware vSphere 4.0
- VMware Communities - Performance Evaluation of AMD RVI Hardware Assist
- VMware Communities - Performance Evaluation of Intel EPT Hardware Assist
- VMware Communities - Performance tuning in Server 2.0
- VMware Communities - Tips for Improving Performance On Linux Host
- VMware Communities - VMware Products and Hardware-Assisted Virtualization (VT-x/AMD-V)
- VMware FAQs - ESXi
- VMware FAQs - Server
- VMware Information Guide - Timekeeping in VMware Virtual Machines - ESX3.5/ESXi3.5, Workstation 6.5
- VMware Knowledge Base - Choosing a network adapter for your virtual machine
- VMware Knowledge Base - Installing and Configuring NTP on an ESX host
- VMware Knowledge Base - Poor Network Throughput Between Virtual Machines on the Same ESX Server Machine
- VMware Knowledge Base - Timekeeping best practices for Linux guests
- VMware Knowledge Base - Troubleshooting hosted disk I/O performance problems
- VMware Knowledge Base - Troubleshooting virtual machine performance issues
- VMware Performance Study - Large Page Performance - ESX Server 3.5 and ESX Server 3i v3.5
- VMware Performance Study - Performance Characterization of VMFS and RDM Using a SAN - ESX Server 3.5
- VMware Performance Study - Performance Comparison of Virtual Network Devices - ESX Server 3.5
- VMware Performance Study - Performance Evaluation of VMXNET3 Virtual Network Device - vSphere 4 build 164009
- VMware Performance Study - PVSCSI Storage Performance
- VMware Performance Study - Scalable Storage Performance - ESX 3.5
- Virtuatopia.com - VMware Server 2.0 Essentials (online book)
- VMware Technical Note - Installing and Configuring Linux Guest Operating Systems
- Centos HowTos Disk Optimization
Documentation