Take an n-core processor.
Say the simplest task takes t cycles.
Even when the cores run in parallel, such a task still takes t cycles to finish on whichever single core it lands on.
So the CPU can get through n tasks in t cycles as long as the software runs them in parallel. Otherwise, a set of n tasks takes n*t cycles, even if they don't depend on each other and could have run independently.
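To put some numbers on that baseline (n = 4 and t = 8 are arbitrary values picked purely for illustration, not from any real CPU):

```python
# Baseline, no allocator; made-up numbers for illustration only.
n, t = 4, 8            # 4 cores, 8 cycles per task (arbitrary)

parallel_time = t      # n tasks the programmer spread across the n cores: 8 cycles
serial_time = n * t    # the same n tasks left queued on one core: 32 cycles

print(parallel_time, serial_time)   # 8 32
```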
Now introduce a t/n delay (a stagger) between consecutive cores, and a small instruction allocator running n times faster than the cores.
When it is not bypassed (i.e. when the code is not optimized for multi-core), the allocator sends each successive instruction to a different core, round-robin, following the stagger.
The cores are thus forced to run code in parallel, while each task still takes an average of t cycles to run.
A set of n independent tasks will then no longer take n*t cycles, but roughly t. And if some tasks need the results of others, they simply don't start until those are completed, so the whole thing shouldn't take longer than it would without the allocator, though the stagger may add up to t of extra delay (or could that compound further?).
So while a totally dependent program may take up to t cycles longer to run, a program that could have been parallelized but wasn't would run up to n times faster, minus up to t of delay, which is much closer to the running time of properly optimized software.
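Here's a minimal timing sketch of that idea (my own toy model, not any real hardware): I'm assuming every task takes exactly t cycles, core i only becomes available after an initial offset of i*t/n (the stagger), the allocator hands task k to core k mod n, and a task also has to wait for its dependencies to finish. The Task/simulate names are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    deps: list = field(default_factory=list)   # names of tasks whose results this one needs

def simulate(tasks, n=4, t=8.0):
    """Finish time of each task under round-robin allocation with a t/n stagger."""
    stagger = t / n
    core_free = [i * stagger for i in range(n)]   # moment each core first becomes idle
    finish = {}
    for k, task in enumerate(tasks):
        core = k % n                               # the allocator: next core, round-robin
        ready = max((finish[d] for d in task.deps), default=0.0)
        start = max(core_free[core], ready)        # wait for both the core and the inputs
        finish[task.name] = start + t
        core_free[core] = finish[task.name]
    return finish

# Four independent tasks: they all finish between t and 2t, not at n*t.
print(simulate([Task(f"i{k}") for k in range(4)]))
# -> {'i0': 8.0, 'i1': 10.0, 'i2': 12.0, 'i3': 14.0}

# A fully dependent chain: each task waits for the previous one, ~n*t in total.
chain = [Task("c0"), Task("c1", ["c0"]), Task("c2", ["c1"]), Task("c3", ["c2"])]
print(simulate(chain))
# -> {'c0': 8.0, 'c1': 16.0, 'c2': 24.0, 'c3': 32.0}
```

In this toy model the four independent tasks finish at t + (n-1)*t/n = 14 cycles instead of 32, while the fully dependent chain still takes n*t = 32, which roughly matches the claim: unoptimized-but-parallelizable code speeds up, strictly serial code isn't hurt by much.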
Of course, this means deciding on the fly which instructions have to wait for which, and dedicating fast hardware to just that.
But our OSes already do something similar when scheduling parallel tasks, AFAIK; here it would just happen at a smaller scale and be managed by the hardware.
At its simplest this also assumes that all cores are running, and at the same speed. But add a fair bunch of complexity and it could dynamically handle disabled or slowed-down cores.
Also, bypassing the allocator would mean using the CPU just like any other.