Yes, the x86 ISA and the ones that followed it add all the new instructions to the main ISA, using the extra bits to add commands. That complicates microprocessor design, because the processor doesn't know beforehand how many bits to fetch: it could be 8 for the 8086 ISA, 16 for the 386, 32 for the 486, or 64 for AMD64. It probably fetches the full 64 bits and, if it turns out it didn't need them all, rolls back the program counter and only interprets the bits it actually needed, but that's a complicated operation to do on every instruction fetch.
You are partially wrong and partially correct.
ARM, being a RISC ISA, has all machine instructions the same length (32 bits in normal ARM mode, or 16 bits in the secondary Thumb mode). x86, being a very old (in origin) CISC ISA, goes the opposite way: instruction length varies from a single byte up to many bytes (the architectural limit on current CPUs is 15). And this makes decoding and executing more expensive and less efficient.
In the x86 world, an old 8086 can fetch instructions 1 byte long, others 2 bytes long... others 4 or even up to 6 bytes long, and those instructions may operate on bytes or words (2 bytes). On a 32-bit x86 (IA-32) CPU it is the same, but instructions can be even longer, and they can operate on a byte, word or doubleword (32 bits). And on x86-64¹ even more...
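To make that length spread concrete, here is a minimal C sketch that just tabulates the machine-code bytes of a few x86 instructions, including the same "load a constant" at growing operand sizes. The byte values are, to the best of my knowledge, the standard encodings an assembler like NASM or GAS would emit; some instructions have alternative legal encodings, so your assembler's exact output may differ.

```c
/* Sketch: machine-code bytes for a few x86 instructions, to show how much
 * the encoded length varies (1 to 10 bytes in these examples). */
#include <stdio.h>

struct insn {
    const char    *mnemonic;   /* assembly source text (with a note)   */
    unsigned char  bytes[16];  /* machine-code bytes                   */
    int            len;        /* encoded length in bytes              */
};

static const struct insn x86_examples[] = {
    /* 1-byte instructions, already there on the 8086 */
    { "nop",                 { 0x90 }, 1 },
    { "ret",                 { 0xC3 }, 1 },
    /* the same logical operation at growing operand sizes */
    { "mov al, 1   ; 8-bit", { 0xB0, 0x01 }, 2 },
    { "mov ax, 1   ; 16-bit, needs a 0x66 prefix in 32/64-bit mode",
                             { 0x66, 0xB8, 0x01, 0x00 }, 4 },
    { "mov eax, 1  ; 32-bit",{ 0xB8, 0x01, 0x00, 0x00, 0x00 }, 5 },
    { "mov rax, 1  ; 64-bit, REX.W prefix",
                             { 0x48, 0xC7, 0xC0, 0x01, 0x00, 0x00, 0x00 }, 7 },
    { "mov rax, 0x1122334455667788  ; full 64-bit immediate",
                             { 0x48, 0xB8, 0x88, 0x77, 0x66, 0x55,
                               0x44, 0x33, 0x22, 0x11 }, 10 },
};

int main(void)
{
    for (size_t i = 0; i < sizeof x86_examples / sizeof x86_examples[0]; i++) {
        const struct insn *in = &x86_examples[i];
        printf("%2d byte(s): ", in->len);
        for (int b = 0; b < in->len; b++)
            printf("%02X ", in->bytes[b]);
        printf("  %s\n", in->mnemonic);
    }
    return 0;
}
```

A decoder has to look at prefixes, the opcode and the ModRM byte before it even knows where the next instruction starts, which is exactly why variable length makes fetching and decoding harder; a fixed-width ISA can slice the instruction stream blindly.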
But even without the 32- and 64-bit extensions, with only the original 8086 design, it was already a nightmare: instruction length and format vary a lot (1, 2, 3, 4, up to 6 bytes long). But it saved memory at a time when memory was expensive and relatively small. Intel was even going to abandon x86 because they knew memory would get cheaper (in fact Moore's law comes from Gordon Moore, co-founder of Intel), but its next big architecture, the object-oriented iAPX 432, was a total commercial and performance failure. IBM chose x86 for its IBM PC (the original IBM PC used the 8088, a "downgraded 8086" with only an 8-bit external data bus), saving Intel and creating the need to keep evolving x86, adding more and more eyesores to an already ugly ISA.
I hate the x86 ISA, because I have programmed for it (in assembly).
¹: Note that x86-64 isn't called IA-64 the way x86-32 is called IA-32 (Intel Architecture-32). That is because IA-64 (Intel Architecture-64) is Intel Itanium: Intel thought that with the move to 64 bits we would forget x86 and go beyond it to a new EPIC ISA. History shows that Itanium was not that good, and it was late, and AMD took x86 to 64 bits, so Intel had to follow AMD and copy x86-64 from it. So yes, Intel wanted to leave x86 behind twice in its history, and failed both times.
ARM instead, since the Thumb ISA was invented, has a flag in the status register (the T bit) that tells the core which instruction set it is currently executing, and it keeps running in that state until an interworking branch (BX/BLX, where bit 0 of the target address selects the state) or an exception changes it, not only when supervisor mode is entered. I'd argue that it's usually most efficient to do everything in the widest mode you can be in, since ARM state gives you the full instruction set and free use of all the registers. But in the slightly artificial case where you have a small number and you want to add 2 to it, Thumb is at least as efficient: the 16-bit encoding covers that operation in half the code size, while still operating on the normal 32-bit registers.
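For that "add 2" case, here is a small sketch comparing the classic ARM-state and Thumb-state encodings of the same two-instruction routine. The word values are the classic ARMv4T-era encodings; Thumb-2 (ARMv6T2 and later) adds 32-bit Thumb encodings that aren't shown here.

```c
/* Sketch: "add 2 to r0, then return" encoded in ARM state and in Thumb state. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* ARM state: every instruction is 4 bytes. */
    const uint32_t arm_add2[] = {
        0xE2800002,   /* add r0, r0, #2 */
        0xE12FFF1E,   /* bx  lr         */
    };

    /* Thumb state: these instructions are 2 bytes each. */
    const uint16_t thumb_add2[] = {
        0x3002,       /* adds r0, #2 */
        0x4770,       /* bx   lr     */
    };

    printf("ARM   version: %zu bytes\n", sizeof arm_add2);   /* 8 bytes */
    printf("Thumb version: %zu bytes\n", sizeof thumb_add2); /* 4 bytes */

    /* The core picks the decoder from the T bit in the CPSR; a BX/BLX to an
     * address with bit 0 set enters Thumb state, bit 0 clear enters ARM
     * state, so ARM and Thumb code can call each other freely. */
    return 0;
}
```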
When you don't need the best performance, Thumb can be better because it saves memory, both in program storage and in instruction fetches while executing.
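If you want to see that saving on your own code, here is a hedged sketch of how you could measure it with a GCC ARM cross toolchain. The toolchain prefix arm-none-eabi- is an assumption about your setup, and the exact size difference depends on the compiler version and options; Thumb builds are normally noticeably smaller, but this is something to measure, not a fixed ratio.

```c
/* sum.c: an ordinary little function, compiled once for ARM state and once
 * for Thumb state so the .text sizes can be compared:
 *
 *   arm-none-eabi-gcc -O2 -marm   -c sum.c -o sum_arm.o
 *   arm-none-eabi-gcc -O2 -mthumb -c sum.c -o sum_thumb.o
 *   arm-none-eabi-size sum_arm.o sum_thumb.o   # compare the .text columns
 */
int sum(const int *v, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += v[i];
    return total;
}
```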