Calmatory, you're focusing on the wrong metrics. A tile-based deferred renderer (TBDR) like SGX scales very differently with respect to both memory bandwidth and fillrate (texture accesses, per-fragment operations) than an immediate mode renderer (IMR) does. Compared to an old DX6 or DX7 level card like TNT1, TNT2, or GeForce 1, you could say that SGX 530 might not work as hard, but it works much smarter. TBDR means that for each pixel, only the top-most opaque fragment and whatever translucent fragments sit on top of it need to be rendered, and that includes the entire fragment shading pipeline. It also means you don't normally need depth or stencil buffers stored alongside the framebuffer in external memory, so they can be kept in very fast on-chip SRAM, and the framebuffer itself is cached per-tile, allowing quick bursting and aggressive prefetching. Even with alpha blending, every framebuffer pixel only has to be written out once (fire and forget).
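To make the "shade only what's visible" idea concrete, here's a toy sketch in C of how one tile gets resolved. This is my own simplification, not actual SGX behaviour: the tile size, the `Tile` layout, and the `prim_depth`/`resolve_tile` names are all made up for illustration.

```c
#include <float.h>

#define TILE_W 16
#define TILE_H 16

/* Hypothetical per-tile state, living entirely in on-chip SRAM --
 * it never touches external memory during resolution. */
typedef struct {
    float depth[TILE_H][TILE_W];   /* per-pixel nearest depth so far   */
    int   winner[TILE_H][TILE_W];  /* which primitive owns each pixel  */
} Tile;

/* Stand-in for the rasterizer: depth of primitive p at pixel (x, y),
 * or FLT_MAX if the primitive doesn't cover that pixel. Stubbed out. */
static float prim_depth(int p, int x, int y)
{
    (void)p; (void)x; (void)y;
    return FLT_MAX;
}

static void resolve_tile(Tile *t, int num_prims)
{
    /* Pass 1: pure depth resolution -- no texturing, no shading.
     * After this loop we know the single visible opaque primitive
     * for every pixel in the tile. */
    for (int y = 0; y < TILE_H; y++)
        for (int x = 0; x < TILE_W; x++) {
            t->depth[y][x]  = FLT_MAX;
            t->winner[y][x] = -1;
            for (int p = 0; p < num_prims; p++) {
                float z = prim_depth(p, x, y);
                if (z < t->depth[y][x]) {   /* closer fragment wins */
                    t->depth[y][x]  = z;
                    t->winner[y][x] = p;
                }
            }
        }
    /* Pass 2 (not shown): run the fragment pipeline exactly once per
     * pixel for t->winner, then burst the finished tile out to the
     * framebuffer in one write. An IMR would instead have shaded
     * every covering fragment as it arrived. */
}
```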
So you should be able to see how a platform like this gets away with much lower bandwidth and far fewer pixel operations per second. About the only thing that needs to be fast is the bank of depth comparators performing the early-Z removal, which is why there are 8 or so of them. As an added bonus, you also get very high depth precision (32bit floating point, I think using 1/w), which helps prevent z-fighting.
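As a rough picture of what those parallel comparators do, here's a sketch of one clock's worth of early-Z testing. The stamp width of 8 matches the comparator count from the post, but the stamp shape and everything else here are my assumptions.

```c
#include <stdint.h>

#define STAMP 8  /* assumed: one depth test per comparator per clock */

/* Test an 8-pixel stamp against the on-chip depth buffer and return a
 * survivor mask. Fragments that fail here are killed before any
 * shading or texturing work is spent on them. Convention here is
 * smaller = closer; if depth is actually stored as 1/w (as the post
 * guesses), the comparison direction would flip. */
static uint8_t depth_test_stamp(const float frag_z[STAMP],
                                float tile_z[STAMP])
{
    uint8_t mask = 0;
    for (int i = 0; i < STAMP; i++) {   /* parallel in hardware */
        if (frag_z[i] < tile_z[i]) {
            tile_z[i] = frag_z[i];
            mask |= (uint8_t)(1u << i);
        }
    }
    return mask;
}
```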
The shader pipelines also look weak on paper, since there are only two of them and they only support single-issue 32bit (or maybe 40bit, I'm not altogether sure about this) operations. But this is mitigated by support for 3/4-way SIMD over 10bit fixed-point color formats, so color operations can still be vectorized. Texel blending, fogging, per-pixel lighting, and similar operations tend to dominate over the per-vertex operations that need higher precision, and the reduced color range is perfectly acceptable for a device like this, especially when you're comparing it to old non-HDR graphics cards to begin with. The efficiency of the USSEs is further boosted by vertical multithreading (interleaving hardware threads on the single-issue pipeline), which switches threads automatically and very quickly to hide latency. Old fixed-function architectures were probably already designed to hide latency, but this allows much higher shader throughput than would otherwise be possible.
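To illustrate what 3/4-way SIMD over 10bit fixed point buys you, here's a hedged C sketch of a single vectorized modulate (texel × light), the kind of op that dominates texel blending, fogging, and per-pixel lighting. The packing layout and the `modulate10` name are mine, not the real USSE ISA.

```c
#include <stdint.h>

/* Four 10-bit fixed-point channels, shown unpacked into 16-bit slots
 * for readability; the hardware would keep them packed in a single
 * register so all channels move through one issue slot. */
typedef struct { uint16_t r, g, b, a; } vec10; /* each in 0..1023 */

/* One "instruction": per-channel fixed-point multiply, where 1023
 * represents 1.0. Four color channels processed for the cost of a
 * single issued op. */
static vec10 modulate10(vec10 t, vec10 l)
{
    vec10 o;
    o.r = (uint16_t)(((uint32_t)t.r * l.r) / 1023u);
    o.g = (uint16_t)(((uint32_t)t.g * l.g) / 1023u);
    o.b = (uint16_t)(((uint32_t)t.b * l.b) / 1023u);
    o.a = (uint16_t)(((uint32_t)t.a * l.a) / 1023u);
    return o;
}
```

The point is that only the rarer high-precision (per-vertex style) work has to run down the scalar 32bit path; the bulk of fragment color math stays vectorized.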
There are of course some gotchas: you have to avoid alpha testing and multi-pass rendering to really use the renderer effectively. Both still work, but they take a big toll on efficiency. If a game plays nice, though, it can make this rather low-clocked, narrow platform go surprisingly far.
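To see why alpha testing is the classic TBDR killer: visibility now depends on the shader's output, so depth can't be settled for the tile before shading runs. A minimal sketch, where the `sample_texture` stub and the 0.5 threshold are placeholders of my own:

```c
typedef struct { float r, g, b, a; } color4;

/* Hypothetical texture fetch, stubbed out for the sketch. */
static color4 sample_texture(float u, float v)
{
    color4 c = { u, v, 0.0f, 0.5f };
    return c;
}

/* Returns 1 if the fragment survives, 0 if the alpha test kills it.
 * The problem: the visibility result is only known *after* the texture
 * fetch and shading work have already been spent, so the tile's depth
 * resolution can no longer run ahead of the shaders the way it does
 * for opaque geometry. */
static int shade_alpha_tested(float u, float v, color4 *out)
{
    color4 texel = sample_texture(u, v);
    if (texel.a < 0.5f)
        return 0;   /* fragment killed late: the HSR benefit is lost */
    *out = texel;
    return 1;
}
```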
If you want a realistic comparison, look at the graphical quality of Sega Dreamcast games, then imagine something at least 2x stronger and with much more modern features and flexibility, capable of putting at least a few times as many polygons on screen.