We see this transaction a lot:
User: The network is slow.
Admin: Hmm, what makes you think that?
User: I have to wait too long for {something} to finish, so that I can finish my task and do other work.
Admin: Right! I'll just turn up the network speed then ... how's that?
User: Aha, that fixed it! Thanks!
If only it were that easy, eh? Anywhere we're dealing with users and computers these days, it seems we have this sort of issue with client/server applications. And by client/server, I mean basically anything that communicates with another computer to do a job. Web applications (or just normal browsing), email, thick client applications, even saving a Word doc back to your home directory on the file server.
So let's have a look at a general approach to finding the bottleneck in our client/server applications!
Map it Out
The first step is to assemble a map of the systems involved in the slow transaction. Here's a very simple transaction: opening a word document that's on a file server.

It can't really get much more simple than this, although you should note that I am leaving out any representation of the DNS lookup process. Here we have essentially three places to look for bottlenecks: the client, the server, and the network between them.
Now, just for comparison's sake, let's look at a diagram for a slightly more complex transaction. Here we have six steps and three servers necessary to complete the transaction:

1. Client asks Webserver1 for the page.
2. Webserver1 must ask the database (DBServer) for some of the data on the page.
3. DBServer returns that data to Webserver1.
4. Webserver1 then sends the formatted page to the client.
5. But within the formatted page is a link to an image that's on Webserver2. Client asks Webserver2 for that image.
6. Webserver2 sends the image back to the client.
Voila! The page is rendered. We now have opportunities for bottlenecks in many more places: any one of the 4 endpoints shown could be having troubles. Additionally, the network between any two endpoints could be a problem. I left out the little network cloud symbols, but they can be safely assumed.
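Just to make the bookkeeping concrete, here's a small sketch (in Python, with made-up names) of the transaction above as a list of endpoints and links. Enumerating the candidates this way is exactly what drawing the map does for you:

```python
# Hypothetical sketch: model the web-page transaction as endpoints
# plus the network links between each communicating pair, then list
# every place a bottleneck could hide.
endpoints = ["Client", "Webserver1", "DBServer", "Webserver2"]
links = [
    ("Client", "Webserver1"),     # steps 1 and 4
    ("Webserver1", "DBServer"),   # steps 2 and 3
    ("Client", "Webserver2"),     # steps 5 and 6
]

candidates = endpoints + [f"{a}<->{b}" for a, b in links]
for c in candidates:
    print(c)

# 4 endpoints + 3 links = 7 distinct places to investigate.
```

Seven suspects for one "slow page" complaint, and this is still a simple transaction.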
In a lot of cases, you can draw the entire transaction map in your head, and start working through the possibilities at each endpoint and network link. But the two maps I've just shown are actually some of the simplest sorts of transactions you're ever going to have to troubleshoot. It can get a lot more complex in a very short time. So you will be doing yourself a big favor to draw the map, even when you are sure you've got your mental map down to perfection. The very exercise of drawing it will often spark new comprehension of the issue, which can lead to that Aha! moment you're trying to have.
If you need to collaborate with other people, I cannot emphasize this enough: draw the map, no matter how simple it is. Communication will speed up drastically, and misconceptions will be avoided. The ability to simply point to the thing you are discussing is a monster time-saver. Think of all the times you've been given "simple driving directions" which have led to you going in circles in strange neighborhoods. Maps help. A lot. Enough said.
As time goes by, you'll end up with a nice collection of maps that you can re-use in other troubleshooting scenarios. And people will respect your map-making authoritay.
Add basic information to your map
Now that we have our map, we might as well give it some of the important data we'll most likely need. This can go on the same map, or on another map that serves as an overlay. I'll do an overlay map of the webserver scenario above:

With any luck at all, this information is not hard to collect, and it will be extremely helpful as we work through our troubleshooting steps.
Eliminate the network itself
Now that we have our map and basic information, it's time to do some basic testing of the network links between the boxes. First we'll do a little ping testing:
C:\>ping -n 10 -l 32 10.0.0.2
Pinging 10.0.0.2 with 32 bytes of data:
Reply from 10.0.0.2: bytes=32 time=1ms TTL=128
Reply from 10.0.0.2: bytes=32 time=1ms TTL=128
Reply from 10.0.0.2: bytes=32 time=1ms TTL=128
Reply from 10.0.0.2: bytes=32 time=1ms TTL=128
Reply from 10.0.0.2: bytes=32 time=2ms TTL=128
Reply from 10.0.0.2: bytes=32 time=3ms TTL=128
Reply from 10.0.0.2: bytes=32 time=1ms TTL=128
Reply from 10.0.0.2: bytes=32 time=1ms TTL=128
Reply from 10.0.0.2: bytes=32 time=1ms TTL=128
Reply from 10.0.0.2: bytes=32 time=1ms TTL=128
Ping statistics for 10.0.0.2:
Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 1ms, Maximum = 3ms, Average = 1ms
Here we have sent ten ping packets (-n 10), each 32 bytes long (-l 32), from our current host to the server at 10.0.0.2. We see that they returned in one millisecond on average. In future examples I'll print only the ping command and the average ping time.
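As a sanity check on what the statistics line reports, you can reproduce it from the individual reply times. Note that Windows ping truncates the average to a whole millisecond, which is why ten replies summing to 13ms still report "Average = 1ms":

```python
# Reply times in ms from the ten pings above.
rtts = [1, 1, 1, 1, 2, 3, 1, 1, 1, 1]

minimum = min(rtts)
maximum = max(rtts)
average = sum(rtts) // len(rtts)  # truncated, as Windows ping reports it

print(f"Minimum = {minimum}ms, Maximum = {maximum}ms, Average = {average}ms")
```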
The one millisecond average round trip time means we have low latency. Latency is the time it takes a packet to traverse the network one way; round trip time (RTT) is the time it takes for the packet to reach the target and return. Latency, though, is not the same thing as bandwidth, which is the maximum amount of data we can send per unit of time, usually measured in bits per second. The above ping command doesn't do a good job of measuring bandwidth, because it sends such a small amount of data. But we can do a little better by using a larger ping packet:
ping -n 10 -l 12500 -w 60000 10.0.0.2 returned Average = 23ms
Here we used a packet length of 12,500 bytes (12.5 kilobytes) and a longer timeout (-w 60000 milliseconds) so the bigger packets have time to come back.
There's no exact formula for deriving bandwidth from ping times, but we can make a rough back-of-envelope estimate. The extra round trip time for the large packet, compared to the small one, is mostly the time spent pushing the extra payload onto the wire in both directions. So: bandwidth ≈ (2 × payload size in bits) ÷ (large RTT − small RTT). In our case that's (2 × 12,500 × 8 bits) ÷ 0.022 seconds ≈ 9.1 megabits per second, which is about what we'd expect from a 10 Mbps link. Treat this as a sanity check, not a measurement; for real numbers, use a dedicated bandwidth testing utility such as iperf.
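That back-of-envelope arithmetic is easy to script. Here's a sketch (the function name is my own invention) that turns the two ping averages into a bandwidth estimate:

```python
def estimate_bandwidth_mbps(payload_bytes, large_rtt_ms, small_rtt_ms):
    """Rough estimate: assume the extra RTT of the big ping is entirely
    serialization time for the payload, traveling out and back."""
    extra_seconds = (large_rtt_ms - small_rtt_ms) / 1000.0
    bits_transferred = 2 * payload_bytes * 8  # payload goes both ways
    return bits_transferred / extra_seconds / 1_000_000

# 12,500-byte pings averaged 23ms; 32-byte pings averaged 1ms.
print(round(estimate_bandwidth_mbps(12500, 23, 1), 1))  # prints 9.1
```

Remember this ignores fragmentation, queuing, and competing traffic, so it only tells you the order of magnitude, not the true link speed.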