PC video cards generally take responsibility for driving the display over VGA/HDMI/DVI etc., so programs are used to telling the card what to draw and letting it get on with it. That means the card can choose to rotate the image as it scans it out, sending the pixels in a different order to the one the CPU rendered them in, using its GPU pipelines. Using an increasingly general-purpose GPU for that is a little wasteful, but we're generally not that concerned about that sort of thing, even on laptops, where people are used to big batteries and short run times.
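To make that concrete, here's a rough sketch in C (my own, with made-up sizes, not any driver's actual code) of what rotation at scan-out amounts to: the framebuffer stays exactly as the renderer wrote it, and the display side just computes a different fetch address per output pixel.

```c
#include <stdint.h>

#define FB_W 640   /* framebuffer width as rendered (assumed)  */
#define FB_H 480   /* framebuffer height as rendered (assumed) */

/* Rotation at scan-out, modelled in software: for a 90-degree
 * clockwise rotation, output pixel (ox, oy) on the FB_H-wide,
 * FB_W-tall panel comes from framebuffer pixel (oy, FB_H - 1 - ox). */
static inline uint32_t rotated_fetch(const uint32_t *fb, int ox, int oy)
{
    return fb[(FB_H - 1 - ox) * FB_W + oy];
}

/* What the hardware effectively does each refresh: walk the panel in
 * its own raster order, fetching from the framebuffer in rotated
 * order.  No second copy of the image is ever written to memory. */
void scan_out_90cw(const uint32_t *fb, void (*emit_pixel)(uint32_t))
{
    for (int oy = 0; oy < FB_W; oy++)         /* panel rows    */
        for (int ox = 0; ox < FB_H; ox++)     /* panel columns */
            emit_pixel(rotated_fetch(fb, ox, oy));
}
```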
Android apparently does something similar, but AIUI it doesn't quite work the same way, as on ARM platforms the GPU isn't the block that drives the display: it instead takes the bitmap data in memory, transforms it, and writes it back. That's more powerful, as the CPU can then access the result if it wants, and less wasteful, as the GPU doesn't need its own bank of RAM, but it costs memory bandwidth twice over, once reading the source and once writing the result back to RAM.
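As a sketch (my own names, nothing to do with Android's actual APIs), that read-modify-write path looks roughly like this; note that the image crosses the memory bus twice, once being read and once being written back:

```c
#include <stdint.h>
#include <stdlib.h>

/* Memory-to-memory rotation: read the whole source bitmap out of RAM
 * and write a rotated copy back.  Costs bandwidth both ways, but the
 * result is an ordinary buffer the CPU can inspect or composite. */
uint32_t *rotate_90cw_copy(const uint32_t *src, int w, int h)
{
    uint32_t *dst = malloc((size_t)w * h * sizeof *dst);
    if (!dst)
        return NULL;

    for (int y = 0; y < h; y++)        /* one full read of src...  */
        for (int x = 0; x < w; x++)    /* ...one full write of dst */
            dst[x * h + (h - 1 - y)] = src[y * w + x];

    return dst;   /* an h-wide, w-tall image; caller frees it */
}
```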
As I understand it, this rotation chip solution is a little like the x86 PC solution, in that it pulls bitmap data from memory and renders the output to HDMI/MIPI, possibly rotated, colour corrected etc. It doesn't maintain its own framebuffer like PC GPUs do, though, and it's less general purpose, so it's cheaper to run. It does mean we can't read values back out of the display end of it, I think, but given how simple the transforms the chip does are, we probably don't lose much there.
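If I've read it right, the chip behaves something like the sketch below (all names hypothetical, and the LUT is only a guess at how the colour correction might be done): a per-pixel fetch from the shared framebuffer in rotated order, a correction applied on the fly, and the result pushed straight out to the panel. Nothing is written back to RAM, which is exactly why the CPU can't see the transformed pixels.

```c
#include <stdint.h>

extern void panel_push(uint32_t px);   /* stand-in for the MIPI/HDMI output FIFO */

/* Streaming transform with no local framebuffer: fetch, correct, emit. */
void stream_frame_90cw(const uint32_t *fb, int w, int h,
                       const uint8_t lut[256])  /* per-channel colour/gamma LUT */
{
    for (int oy = 0; oy < w; oy++) {        /* rotated panel: h wide, w tall */
        for (int ox = 0; ox < h; ox++) {
            uint32_t px = fb[(h - 1 - ox) * w + oy];    /* rotated fetch */

            /* Colour correction applied channel by channel, in flight. */
            uint32_t r = lut[(px >> 16) & 0xff];
            uint32_t g = lut[(px >>  8) & 0xff];
            uint32_t b = lut[ px        & 0xff];

            panel_push((r << 16) | (g << 8) | b);   /* straight out the wire */
        }
    }
}
```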