1) Maybe enable sar (the System Activity Reporter from the sysstat package) and let it run for a number of days. That way you could at least confirm or rule out CPU/memory/disk I/O/swap/network bottlenecks at certain times of the day. A pattern often emerges from enough sar data. It has helped me many times.
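For example, on a Debian/Ubuntu box (package names and data paths vary by distro; RHEL-family systems keep the data under /var/log/sa/ instead):

```
apt-get install sysstat
sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat  # turn on collection
systemctl restart sysstat

# after a few days of data have accumulated:
sar -u        # CPU usage through the day
sar -r        # memory
sar -b        # disk I/O transfer rates
sar -W        # swapping
sar -q        # load average / run queue
sar -n DEV    # network interfaces
sar -u -f /var/log/sysstat/saXX   # XX = day of month, to revisit a past day
```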
2) You could also check for runaway child processes: ones eating too much CPU time, zombies, or children looping under a dead parent PID. Tools like ps and strace show what a single process is doing; iostat and vmstat give the system-wide picture.
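A rough way to hunt for these (with <PID> as a placeholder for whatever process looks suspicious):

```
# top CPU consumers, with the parent PID visible so orphaned/looping children stand out
ps -eo pid,ppid,stat,etime,pcpu,comm --sort=-pcpu | head -20

# zombie (defunct) children whose parent never reaped them
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'

# attach to a suspect and watch which syscall it is stuck in or looping on
strace -tt -f -p <PID>
```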
3) Are you running virtual machines with simultaneous disk (write) access? Concurrent disk access from several guests easily kills I/O in virtualised environments if you are using standard disk configurations.
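iostat and vmstat (mentioned above) will show whether the disks are the choke point:

```
# extended per-device stats every 5 seconds; %util near 100 together with
# a high 'await' means the disk(s) cannot keep up
iostat -xz 5

# the 'wa' column is CPU time spent waiting on I/O
vmstat 5
```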
4) Is internet access on a symmetric or an asymmetric line? An asymmetric line (e.g. ADSL) can have very low upload bandwidth, which gets saturated while everyone is downloading, and that alone can look like connectivity issues.
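If you can run iperf3 against a remote host you control (remote.example.com below is a placeholder), you can measure both directions and see how little upload headroom there actually is:

```
# on the remote end:
iperf3 -s

# from the problem server: upload first (the default direction), then download (-R)
iperf3 -c remote.example.com
iperf3 -c remote.example.com -R
```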
5) Using Apache? How many HTTP worker processes? What is the maximum number of connections per server process? What are the script timeout periods?
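You can see what is configured now with:

```
# which MPM is in use (prefork/worker/event)
apachectl -V | grep -i mpm

# how many worker processes are alive right now ('httpd' on RHEL-family systems)
ps -C apache2 --no-headers | wc -l
```

and these are the usual directives to look at on Apache 2.4 (illustrative numbers only, not recommendations; tune them to your RAM and traffic):

```
<IfModule mpm_prefork_module>
    ServerLimit         150
    MaxRequestWorkers   150     # called MaxClients on Apache 2.2
</IfModule>
Timeout          60             # the default of 300 is generous for stuck scripts
KeepAliveTimeout 5
```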
6) Any file system full?
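Quick checks, and don't forget inodes, since a filesystem can be "full" with free space left:

```
df -h     # block usage per filesystem
df -i     # inode usage; 100% here also means "full"

# find the space hogs (example path; -x stays on one filesystem)
du -xsh /var/* 2>/dev/null | sort -rh | head
```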
7) Fragmentation (memory or file system)? An occasional reboot helps with memory fragmentation on very busy systems, even if Linux is famous for long uptimes.
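Memory fragmentation you can at least observe via /proc/buddyinfo; for ext4 there is a read-only fragmentation report (the path below is just an example):

```
# free pages per contiguous block order; nothing left but small orders
# in the right-hand columns = fragmented physical memory
cat /proc/buddyinfo

# ext4 fragmentation score; -c only reports, it changes nothing
e4defrag -c /var/www
```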
8) Enormous open log files? Sometimes just an Apache restart does wonders.
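Worth checking for deleted-but-still-open logs too, since they keep consuming disk until the process holding them restarts:

```
# biggest files under /var/log
find /var/log -type f -size +100M -exec ls -lh {} \;

# deleted files still held open by a process (link count 0)
lsof +L1 | head

# reopen Apache's logs without dropping in-flight requests
apachectl graceful
```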
9) sync, sync, sync. Flush anything pending from memory to disk.
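You can see how much is actually pending before and after:

```
# dirty data waiting for writeback
grep -E '^(Dirty|Writeback):' /proc/meminfo

# flush it now
sync
```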
These are all things that can be checked, off the top of my head. If any of them rings a bell, it may be worth checking, along with anything else that comes to mind, unless you've already done all of this.