Two popular myths concerning parallel programming are: (1) there is a ``best'' parallelization for a given application on a given class of machine and (2) loop-based data parallelism captures all useful parallelizations. We challenge these myths by considering alternative parallelizations of combinatorial search, examining the factors that determine the best-performing option for this important class of problems. Using subgraph isomorphism as a representative search problem, we show how the density of the solution space, the number of solutions desired, the number of available processors, and the underlying architecture affect the choice of an efficient parallelization. Our experiments, which span seven different shared-memory machines and a wide range of input graphs, indicate that relative performance depends on each of these factors. On some machines and for some inputs, a sequential depth-first search of the solution space, applying data parallelism at each node in the search tree, performs best. On other machines or other inputs, parallel tree search, with no data parallelism, performs best. In still other cases, a hybrid solution, containing both parallel tree search and data parallelism, works best. From these experiences we conclude that there is no one ``best'' parallelization that suffices over a range of machines, inputs, and precise problem specifications. As a corollary, we argue that programming environments should not focus exclusively on data parallelism, since parallel tree search or hybrid forms of parallelism may perform better for some applications.
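To make the contrast between the parallelization strategies concrete, the following toy sketch counts occurrences of a small pattern graph in a target graph. It is purely illustrative and not the implementation evaluated in the paper: the graphs (`PATTERN`, `TARGET`), the helper `extend`, and the use of Python's `ThreadPoolExecutor` are all assumptions made for this example. `count_sequential` is a plain sequential depth-first search of the solution space; `count_tree_parallel` shows the simplest form of parallel tree search, exploring the top-level branches of the search tree concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy graphs as adjacency sets (undirected); illustrative only.
PATTERN = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}                   # a triangle
TARGET = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}

def extend(mapping, used, order):
    """Depth-first search: count injective extensions of `mapping` that
    send every pattern edge to a target edge."""
    if len(mapping) == len(order):
        return 1                       # all pattern vertices mapped
    u = order[len(mapping)]            # next pattern vertex to place
    total = 0
    for v in TARGET:
        if v in used:
            continue
        # Every already-mapped pattern neighbour of u must land on a
        # target neighbour of v for the edge constraints to hold.
        if all(mapping[w] in TARGET[v] for w in PATTERN[u] if w in mapping):
            total += extend({**mapping, u: v}, used | {v}, order)
    return total

def count_sequential():
    """Sequential depth-first search of the whole solution space."""
    return extend({}, set(), sorted(PATTERN))

def count_tree_parallel(workers=4):
    """Parallel tree search: the choices for the first pattern vertex
    form independent subtrees, each explored by a separate worker."""
    order = sorted(PATTERN)
    u = order[0]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(extend, {u: v}, {v}, order) for v in TARGET]
        return sum(f.result() for f in futures)
```

A hybrid scheme, as discussed in the abstract, would additionally apply data parallelism inside `extend` (for instance, testing the candidate target vertices `v` at a node in parallel); which of the three variants wins depends on the factors the paper measures, such as solution density and processor count.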