As everything beyond the decoding phase is essentially handled by independent microcode anyway, it really shouldn't matter that much as long as the instructions don't clog the pipeline unnecessarily.
The basic 32-bit ARM instructions really exploited the available range of encodings; they are incredibly versatile and therefore allow for very densely packed code. They were also very clean and logically structured, which greatly eases the workload of decoding them. It made extensions quite a PITA, though, as the original ISA hardly left any gaps for more instructions. The ARM64 ISA is heavily stripped down compared to the old 32-bit one: the instructions are very basic, and you often need a lot more code to do the exact same thing.
The ARM is a RISC chip, and at least in 32-bit mode it was traditionally designed to take only one clock cycle per instruction, apart from accesses to non-cached memory. I can't find a statement saying that all 64-bit instructions do the same, but I'd assume they do. The pipeline will stall when you ask it to load from an address that isn't already cached, and it'll be invalidated by a branch it didn't expect, but beyond that it runs a lot more predictably than chips with variable-length instructions.
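To illustrate the kind of stall I mean, here's a minimal sketch in A64 syntax (the register choices are mine, purely for illustration):
Code:
LDR X1, [X0]      // stalls here if the address in X0 misses the cache
ADD X2, X1, #1    // needs X1, so it has to wait for the load
ADD X3, X4, #10   // independent of the load, so it can keep flowing
Everything that stays in registers ticks along at the usual one-per-cycle rate; it's the memory access that breaks the rhythm.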
The 64-bit mode does eliminate the ability to conditionally execute most instructions other than branches. Conditional execution was originally introduced because every branch would invalidate the pipeline, but branch prediction has since revolutionised that model. In my experience, conditional instructions are mainly used for if statements with a short consequent and alternative, such as:
Code:
if percent<11 then percent=0 else percent=percent*2
(in pseudocode) could be written in ARM32 as:
Code:
CMP   R0,#11     // compare R0 with 11...
MOVLT R0,#0      // ...if it was less than, replace it with 0
ADDGE R0,R0,R0   // ...else double it
In 64-bit ARM you'd have to do something like:
Code:
CMP  X0,#11
B.GE .Lelse      // A64 spells conditional branches B.cond
MOV  X0,#0       // then: percent = 0
B    .Lend
.Lelse:
ADD  X0,X0,X0    // else: double it
.Lend:
...
(I've written that in A64 syntax as best I remember it, so treat it with suspicion). These days the chip is able to assume a conditional branch either happens or doesn't, whichever is judged more likely. If it expects the branch to be taken, it resets the program counter and fetches the post-branch instruction without invalidating the pipeline. In this specific simple example, I assume the branch instructions, even when predicted correctly, still occupy a slot in the pipeline, and thus take a clock tick to be dealt with (and actually evaluated, in case the prediction got it wrong). But if either arm of the if-then-else holds more than one instruction, and branch prediction successfully guesses the outcome, then that's only one extra instruction you need to process, rather than however many invalidated instructions you'd be told to skip.

Traditionally, in 32-bit mode, you'd write it more like the 64-bit example anyway whenever the number of skippable instructions in either case exceeded your pipeline length less one, because then the time taken to rebuild the pipeline would be less than the time taken to step over all of those invalid instructions. For example, on a five-stage pipeline, falling through six condition-failed instructions costs six cycles, while taking the branch costs only the four cycles needed to refill. Branch prediction has reduced that threshold to somewhere between one instruction and the pipeline length less one, depending on how effective it is.
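As an aside, A64 didn't drop predication entirely: it kept a small family of conditional-select instructions (CSEL, CSINC, CSNEG and friends), so this particular two-way pattern can still be written branch-free. A sketch (untested; X1 is just a scratch register I've picked):
Code:
CMP  X0,#11          // set flags from X0 - 11
LSL  X1,X0,#1        // speculatively compute percent*2 into the scratch
CSEL X0,XZR,X1,LT    // X0 = (X0 < 11) ? 0 : X1
Whether that actually beats a well-predicted branch on a given core is another question entirely.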
Edit: There's also the dropping of certain rotation options on specific instructions. Rather than every instruction being able to apply any shift or rotation to the last operand, different instructions now only permit certain ones. I don't know why that's happened, and I've not looked into the instruction packing yet, which might help explain what they've done here. Otherwise it seems weird, though I'll admit I never used the rotate forms without explicitly knowing what I was doing.
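To make that concrete, here's roughly what I mean, mixing ARM32 and A64 syntax from memory (so again, treat it as a sketch):
Code:
// ARM32: the barrel shifter lets nearly any data-processing op rotate operand 2:
ADD R0,R1,R2,ROR #3    // R0 = R1 + (R2 rotated right by 3)

// A64: add/sub with a shifted register only accepts LSL/LSR/ASR...
ADD X0,X1,X2,LSR #3    // fine
// ...while ROR on the second operand is only accepted by the logical ops:
EOR X0,X1,X2,ROR #3    // fine
// ADD X0,X1,X2,ROR #3  // rejected, as far as I can tell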