Recreating imperfections shouldn't be the goal. But I still think the architecture at the most basic level is so different that it would be counterproductive. I think a large grid of reprogrammable processors, alongside several larger systems that are hard-coded for their function, would be best and would better mimic how our brains work at that level. There are large portions of the brain that wouldn't need to be replicated, because their function is done more efficiently in today's embedded systems. Something like temperature regulation is still needed, but it doesn't have to be as complex as what our brain requires, since there are no organics involved. Entire subsystems that regulate breathing, blood pressure, heartbeat, digestion, etc. aren't needed at all and could be replaced with adequate cooling and voltage regulation. I'm not sure what percentage of the brain is dedicated to things that would no longer be needed, but I'd imagine it's significant. The point being: getting closer to the brain's architecture is needed, while matching it exactly should not be a goal.
Now the question remains whether such a system could be emulated in software, similar to how we throw processing power at translating one architecture to another in emulation. With so many "cores", orders of magnitude more than the handful we currently emulate, I personally think it would be prohibitively expensive in processing terms to work at the same scale we do. I strongly feel that new specialized hardware, at least similar to how we are set up rather than matching it bit for bit, would be the best approach.
On the software side you obviously have more experience than I do, but trying to recreate every subroutine that makes up an adult's, or even a toddler's, mind would be quite the monumental task. Instead, taking another nod at biology, create a system that is almost entirely a learning machine, one that can even reprogram parts of its own processing to handle input more efficiently. It should also have a set of hard-coded rules, mostly low-level function: how the whole system communicates with its other subsystems at a general level; how to program its reprogrammable processors to better receive input and control itself, so it doesn't unrecoverably crash itself while trying to build new routines; and how to receive input, with nothing coded about what to do with it except how to store and compare information. Leave that last part up to the machine's own construct of reality, deciding what is actually garbage information. Throwing out garbage and illogical comparisons, it would try to form a construct of the reality outside itself from the input it is given. Add rules on survival and on learning to adapt for self-improvement, since that is also how we are built: a mix of hard-coded "instinctual" memory along with a vast ability to learn and construct patterns from input. That is what I feel would be most successful in recreating intelligence. "Artificial" intelligence might not even be the appropriate term if the system is coded to actually be intelligent.
I've always wished to make a learning program, and as a proof of concept I may do it someday at a much smaller scale, but to do it right I think a system like the one I've been describing would be ideal for the end goal.
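To make that a bit more concrete, here is a rough Python sketch of the split I have in mind: a hard-coded "instinct" layer that only says what to discard and how not to destroy itself, sitting under a learning layer that just stores and compares input. All of the names, rules, and numbers here are made up for illustration, and a real system would be enormously more complicated, but it shows the shape of the idea.

```python
# Minimal sketch of the "hard-coded instincts + learned patterns" idea.
# Everything here is hypothetical and far simpler than a real system would be.

from collections import Counter


class Instincts:
    """Hard-coded, low-level rules: what counts as garbage, and limits that
    keep the learner from corrupting itself while it builds new routines."""

    MAX_MEMORY = 10_000          # don't grow without bound
    MIN_SIGNAL_LENGTH = 2        # anything shorter is treated as noise

    @staticmethod
    def is_garbage(observation: str) -> bool:
        return len(observation.strip()) < Instincts.MIN_SIGNAL_LENGTH


class Learner:
    """The learned side: nothing is coded about what the input means,
    only how to store it and compare it against what was seen before."""

    def __init__(self) -> None:
        self.memory = Counter()  # counts of observed patterns

    def observe(self, observation: str) -> None:
        # The instinct layer filters garbage before it ever reaches memory.
        if Instincts.is_garbage(observation):
            return
        if sum(self.memory.values()) >= Instincts.MAX_MEMORY:
            # Instinctual self-preservation: forget the rarest pattern
            # instead of crashing or growing forever.
            rarest, _ = self.memory.most_common()[-1]
            del self.memory[rarest]
        self.memory[observation] += 1

    def familiarity(self, observation: str) -> float:
        """Compare new input against the construct built from past input."""
        total = sum(self.memory.values())
        return self.memory[observation] / total if total else 0.0


if __name__ == "__main__":
    agent = Learner()
    for signal in ["light", "light", "dark", "x", "light", ""]:
        agent.observe(signal)
    print(agent.familiarity("light"))  # high: seen often
    print(agent.familiarity("dark"))   # lower: seen once
    print(agent.familiarity("noise"))  # zero: never seen
```

The point of the split is that nothing in the learner says what the input means; the only baked-in knowledge is the survival-style rules in the instinct layer.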
We can split this discussion off if needed; it's getting more and more off topic.
Edit: interestingly enough, NVIDIA very recently held a conference that is exactly in line with this conversation. Quite the coincidence, actually. It's a very interesting watch, and it dwarfs the scale of the problem I had imagined; capability-wise, we are very far off from full-scale hardware.
https://www.youtube.com/embed/37Yt41ouaNM?feature=oembed