Head-to-Head: Parallels Desktop for Mac vs. VMware Fusion
Volume Number: 26
Issue Number: 01
Column Tag: Virtualization
How do VMware Fusion 3 and Parallels Desktop 5 for Mac compare?
By Neil Ticktin, Editor-in-Chief/Publisher
Overview
We won't keep you in suspense. Looking at the major subgroups of our comprehensive test suite, Parallels is the clear winner, running each group of tests 5-127% faster than VMware's solution. Overall, Parallels Desktop 5 runs 30% faster with Windows XP, and 43% faster with Windows 7, than VMware Fusion 3.0.1. If you prefer a visual, see the graph (note that on this graph, shorter bars mean faster times).
Figure 1: Overall Virtual Machine Performance
There are places where VMware Fusion is faster than Parallels Desktop 5. Across the 25 different graphics tests and scores, we saw definite patterns where VMware Fusion was faster in tests related to High Dynamic Range (HDR) rendering, Perlin Noise, and Pixel Shader (see the detailed 3DMark results for these). However, Parallels was faster on the balance (the vast majority) of the graphics tests (including the 3DMark overall scores), and, more importantly, didn't have many of the issues that VMware Fusion did. (See the graphics section below for more details.) Even with those specific individual tests running a bit faster under VMware Fusion, the graphics experience from a user's point of view was noticeably faster, and therefore more visually appealing, under Parallels Desktop 5. The measurements that best represent the overall gaming experience show Parallels performing 81% faster under Windows XP, and 127% faster under Windows 7.
Figure 2: Overall Graphics Performance
Another way to look at this is with the color-coding on the results matrix. Green cell coloring means Parallels Desktop was faster than VMware Fusion. Blue cell coloring indicates VMware Fusion was faster than Parallels Desktop. Darkest coloring means faster by 10% or more, medium coloring indicates 1-10% difference, and lightest coloring means less than 1% difference. Those tests that could not be run due to lack of support from the virtualization software are shaded gray. (Note: Not all tests were run on all configurations, hence the empty cells.)
Figure 3: Test Results Matrix with Coloring
One thing to note: both of these products are faster than their prior versions. In addition, the disk footprint (i.e., disk space used) was significantly lower for both. See the MacTech articles evaluating each against its prior version:
http://macte.ch/vmware3
http://macte.ch/parallels5
The Test Suite and Results
In the sections below, we'll walk you through what we tested, and the results for each. These tests are designed to arm you with the information so you can make the best decision for your type of use.
For each set of results, you can see the analysis for each model of computer for XP, and for Windows 7. If you want to see more detail for single vs. multiple processors, 32-bit vs. 64-bit, or on an individual Mac model, you can review the spreadsheet for those details.
For the launch tests (launching the VM, Windows, and applications), we ran both an "Adam" test and a "Successive" test. Adam tests are run after the computer has been completely restarted (hence avoiding both host and guest OS caching). Successive tests are repeated without restarting the machine between runs, and so can benefit from caching. Both mimic real-world use.
The tests used were selected specifically to give a real-world view of what VMware Fusion and Parallels Desktop are like to run for many users. We eliminated those tests that completed so quickly that we could not produce statistically significant results, or whose differences were imperceptible.
For some of the analysis, we "normalized" results by dividing each result by the fastest result for that test across all machine configurations. We did this specifically so that we could make comparisons across different groups, and give you overview results that combine multiple types of tests and computer models.
Instead of a plain "average" or "mean," overall conclusions are drawn using a "geomean," a type of average that focuses on the central results and minimizes the effect of outliers. The geomean is the same averaging methodology used by SPEC tests, PCMark, UnixBench, and others, and it helps keep a few extreme results from skewing the overall picture. (If you are interested in how it differs from a mean: instead of adding the set of n numbers and dividing the sum by n, the numbers are multiplied and then the nth root of the resulting product is taken.)
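For readers who want to see the arithmetic, here is a minimal sketch of the two steps described above: normalizing a set of timings against the fastest result, then combining them with a geometric mean. The timing values are hypothetical, purely for illustration; they are not results from our test suite.

```python
import math

def normalize(times):
    """Divide each timing by the fastest (smallest) one for that test."""
    fastest = min(times)
    return [t / fastest for t in times]

def geomean(values):
    """Geometric mean: the nth root of the product of n values."""
    product = math.prod(values)
    return product ** (1.0 / len(values))

# Hypothetical timings (in seconds) for one test across four configurations
times = [12.0, 15.0, 18.0, 24.0]
norm = normalize(times)   # [1.0, 1.25, 1.5, 2.0] -- 1.0 is the fastest config
print(geomean(norm))
```

Because the values are multiplied rather than summed, one unusually slow outlier pulls the geomean up far less than it would a plain arithmetic mean, which is why benchmark suites favor it for combining heterogeneous tests.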
For those interested in the benchmarking methodologies, see the more detailed testing information in Appendix A. For the detailed results of the tests used for the analysis, see Appendix B. Both appendices are available on the MacTech web site.