So like a SIMD, or better, an SPMD (Single Program, Multiple Data) architecture? Interesting idea. I'll have to look deeper into this. Sounds interesting; it would be nice to get my software renderer a bit faster.
Something like that. There's a lot you could do to get what would likely be significantly better performance, but you'd have to be prepared to rewrite a lot.
An example would be something like this:
1) Load depth samples from depth buffer for the polygon
2) Calculate X/Y coordinates for the pixels in the polygon, store in a buffer (only include ones that aren't clipped)
3) Use X/Y coordinates to calculate interpolated u, v, z, w using gradient values calculated during triangle setup. If you're using perspective-correct rendering it makes more sense to calculate barycentric coordinates first and use those to interpolate. You could also calculate barycentric coordinates regardless, so you don't need two routines to perform the interpolation.
4) Perform depth test, and either store a mask to use during writeback or compress the pixel stream to remove depth-failing pixels (there's a rough NEON sketch of this after the list)
5) Convert u/v to texture addresses/indexes
6) Load texels (this part can't use NEON)
7) Blend texels against blend color
8) Store calculated Z to depth buffer
9) Store blended texels to color buffer
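Just to make that concrete, here's a rough sketch of what stages 3, 4 and 8 could look like for one chunk of pixels, in C with NEON intrinsics. It's not from an actual renderer; the buffer names, the float depth format and the plane-equation z interpolation are all just assumptions for illustration:

    #include <arm_neon.h>
    #include <stdint.h>

    // Stages 3, 4 and 8 for one chunk of pixels.  'count' is assumed to be a
    // multiple of 4 (pad the chunk during stage 2 if it isn't).
    static void depth_stage(const float *px,       // pixel X coords from stage 2
                            const float *py,       // pixel Y coords from stage 2
                            float *depth_samples,  // depth loaded in stage 1
                            uint32_t *mask_out,    // pass/fail mask for writeback
                            float z0, float dzdx, float dzdy,
                            int count)
    {
        float32x4_t vz0   = vdupq_n_f32(z0);
        float32x4_t vdzdx = vdupq_n_f32(dzdx);
        float32x4_t vdzdy = vdupq_n_f32(dzdy);

        for (int i = 0; i < count; i += 4) {
            float32x4_t x = vld1q_f32(px + i);
            float32x4_t y = vld1q_f32(py + i);

            // Stage 3: z = z0 + x*dzdx + y*dzdy from the triangle-setup gradients
            float32x4_t z = vmlaq_f32(vmlaq_f32(vz0, x, vdzdx), y, vdzdy);

            // Stage 4: depth test (less-or-equal), keep the mask for writeback
            float32x4_t zbuf = vld1q_f32(depth_samples + i);
            uint32x4_t  pass = vcleq_f32(z, zbuf);
            vst1q_u32(mask_out + i, pass);

            // Stage 8: store new z only where the test passed
            vst1q_f32(depth_samples + i, vbslq_f32(pass, z, zbuf));
        }
    }

Each iteration handles 4 pixels, and the pass/fail mask gets stored so the later color writeback can reuse it.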
Some stages might make more sense to combine, others to separate... it all depends on how nicely the flow fits into a single function. If you do too much in a loop iteration there won't be enough registers, especially if extra registers are needed to unroll the loop to try to hide latency, which is often the case with NEON.
You'd probably not want to do this for entire polygons over a particular size; you really want the buffers to fit in L1 cache, so likely no more than a few hundred pixels at a time.
You can add in stages for loading color buffer pixels and performing blending, or alpha testing, etc.
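The blending stage maps pretty directly onto NEON too. A minimal sketch, assuming RGBA8888 pixels and the common (a*src + (255-a)*dst) >> 8 approximation of dividing by 255:

    #include <arm_neon.h>
    #include <stdint.h>

    // Blend 8 RGBA8888 texels over 8 color-buffer pixels using the texel alpha.
    static void blend8(const uint8_t *texels, uint8_t *dst)
    {
        uint8x8x4_t s = vld4_u8(texels);  // de-interleaved R, G, B, A of the texels
        uint8x8x4_t d = vld4_u8(dst);     // and of the color buffer pixels

        uint8x8_t a    = s.val[3];
        uint8x8_t inva = vmvn_u8(a);      // 255 - a

        for (int c = 0; c < 3; c++) {
            uint16x8_t acc = vmull_u8(s.val[c], a);   // src * a, widened to 16 bit
            acc = vmlal_u8(acc, d.val[c], inva);      // + dst * (255 - a)
            d.val[c] = vrshrn_n_u16(acc, 8);          // rounded narrowing shift back to 8 bit
        }
        vst4_u8(dst, d);                  // re-interleave and store
    }

vld4/vst4 de-interleave and re-interleave the channels for you, which is usually cheaper than shuffling by hand.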
Another thing I'd recommend is tiling the screen so that the current depth buffer and color buffer at least fit in L2 cache, then writing back the color buffer for each tile when you're done. You shouldn't need to write back the depth buffer unless your API exposes it, in which case hopefully you can know ahead of time whether or not it's needed. Tiling does require a scene grabber, so if users expect to render a small number of polygons and then immediately read back the results several times during a frame, tiling will make that a lot slower, or difficult to support at all.
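The outer loop for that could look roughly like the following. Everything here is a placeholder (the tile size, 'struct scene', 'rasterize_tile'); it's just meant to show the per-tile buffers and the color-only writeback:

    #include <stdint.h>
    #include <string.h>

    // Hypothetical tile size: 64x64 pixels * (4 bytes color + 4 bytes depth) = 32 KB,
    // small enough to stay in L2 on most targets.
    #define TILE_W 64
    #define TILE_H 64

    static uint32_t tile_color[TILE_W * TILE_H];
    static float    tile_depth[TILE_W * TILE_H];

    // 'scene' is the grabbed list of draw calls; rasterize_tile() is whatever your
    // per-polygon pipeline looks like, clipped to one tile.
    struct scene;
    void rasterize_tile(const struct scene *s, int tx, int ty,
                        uint32_t *color, float *depth);

    void render_frame(uint32_t *framebuffer, int fb_w, int fb_h,
                      const struct scene *scene)
    {
        for (int ty = 0; ty < fb_h; ty += TILE_H) {
            for (int tx = 0; tx < fb_w; tx += TILE_W) {
                int w = (tx + TILE_W <= fb_w) ? TILE_W : fb_w - tx;
                int h = (ty + TILE_H <= fb_h) ? TILE_H : fb_h - ty;

                // Clear the on-chip buffers for this tile
                memset(tile_color, 0, sizeof(tile_color));
                for (int i = 0; i < TILE_W * TILE_H; i++)
                    tile_depth[i] = 1.0f;

                rasterize_tile(scene, tx, ty, tile_color, tile_depth);

                // Write back only the color buffer; depth never leaves the tile
                for (int y = 0; y < h; y++)
                    memcpy(&framebuffer[(ty + y) * fb_w + tx],
                           &tile_color[y * TILE_W],
                           w * sizeof(uint32_t));
            }
        }
    }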
If you do have some form of early-Z testing (or even better, hierarchical Z testing) it may make sense to do a depth-only pre-pass, especially if alpha test isn't enabled.
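In pseudo-C the two-pass ordering would just be something like this (the flags and functions are illustrative names, not a real API):

    // Illustrative only: a depth-only pass first, so the color pass can reject
    // occluded pixels before texturing and blending.
    enum { WRITE_DEPTH = 1, WRITE_COLOR = 2, DEPTH_FUNC_EQUAL = 4 };

    struct draw_list;
    void rasterize(const struct draw_list *list, unsigned flags);

    void render_with_prepass(const struct draw_list *list)
    {
        // Pass 1: depth only.  No texturing, blending or color writes, so it's cheap.
        // Only safe when alpha test is off, since coverage can't depend on the texel.
        rasterize(list, WRITE_DEPTH);

        // Pass 2: full shading with the depth test set to EQUAL, so only the
        // frontmost surface at each pixel gets textured and blended.
        rasterize(list, WRITE_COLOR | DEPTH_FUNC_EQUAL);
    }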