SAN - taking the shortest path

Published 28 August 2002

Author: Jon Tate


This Tip discusses SAN - taking the shortest path.


Fabric Shortest Path First
According to the FC-SW-2 standard, Fabric Shortest Path First (FSPF) is a link state path selection protocol. The concepts used in FSPF were first proposed by Brocade and have since been incorporated into the FC-SW-2 standard. Since then, it has been adopted by most, if not all, manufacturers. Certainly, all of the switches and directors in the IBM portfolio implement and utilize FSPF.

What is FSPF?
FSPF keeps track of the links on all switches in the fabric and associates a cost with each link. At the time of writing, the cost is always calculated as being directly proportional to the number of hops. The protocol computes paths from a switch to all the other switches in the fabric by adding the cost of all links traversed by the path, and choosing the path that minimizes the cost. For example, in the figure below, if we need to connect a port in switch A to a port in switch D, the traffic will take the direct ISL from A to D.
Connecting port in switch A to switch D
The other possible paths are shown below.
Other possible paths among switches

The traffic will not go from A to B to D, nor from A to C to D. This is because FSPF is currently based on the hop count cost.
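Because every link currently carries the same cost, the minimum-cost path is simply the fewest-hop path, which a breadth-first search finds directly. The sketch below illustrates this on the fabric from the figure (the switch names and full-mesh topology are assumed from the description; this is not FSPF itself, only the hop-count selection it reduces to):

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search: with a uniform per-hop cost, the
    minimum-cost path is the fewest-hop path."""
    paths = {src: [src]}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return paths[node]
        for neighbour in links.get(node, []):
            if neighbour not in paths:
                paths[neighbour] = paths[node] + [neighbour]
                queue.append(neighbour)
    return None

# Assumed fabric from the figure: ISLs A-B, A-C, A-D, B-C, B-D, C-D
fabric = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C"],
}
print(shortest_path(fabric, "A", "D"))  # → ['A', 'D']: the direct ISL wins
```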

How does FSPF work?
The collection of link states (including cost) of all switches in a fabric constitutes the topology database (or link state database). A copy of the topology database is kept in every switch in the fabric, and these copies are maintained and synchronized with each other. There are two mechanisms: an initial database synchronization and an update mechanism. The initial database synchronization is used when a switch is initialized, or when an ISL comes up. The update mechanism is used when there is a link state change, for example, an ISL going down or coming up, and on a periodic basis. This ensures consistency among all switches in the fabric.
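The two mechanisms can be sketched as operations on a per-switch database: a full exchange when an ISL comes up, and incremental records when a link changes state. The class and record layout below are purely illustrative, not the FC-SW-2 wire format:

```python
class LinkStateDatabase:
    """Illustrative per-switch copy of the fabric's link state database."""

    def __init__(self):
        # (switch, neighbour) -> {"cost": hop cost, "up": link state}
        self.records = {}

    def initial_sync(self, peer_db):
        """Full database exchange, used when a switch initializes
        or when an ISL comes up."""
        self.records.update(peer_db.records)

    def update(self, link, up, cost=1):
        """Incremental update, used when a link state changes."""
        self.records[link] = {"cost": cost, "up": up}

    def usable_links(self):
        return [link for link, rec in self.records.items() if rec["up"]]

# Example: switch B joins the fabric, then the A-D ISL goes down.
db_a, db_b = LinkStateDatabase(), LinkStateDatabase()
db_a.update(("A", "D"), up=True)
db_b.initial_sync(db_a)          # initial synchronization
db_a.update(("A", "D"), up=False)  # link state change propagated as an update
```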

How does FSPF help?
In the situation where there are multiple routes, FSPF will ensure that the route that is used is the one with the lowest number of hops. If all the hops:

  • Have the same latency
  • Operate at the same speed
  • Have no congestion
Then FSPF will ensure that the frames get to their destinations by the fastest route.

What happens when there is more than one shortest path?
If we look again at the example in the first figure, and we imagine that the link from A to D goes down, switch A will now have four routes to reach D:
  • A-B-D
  • A-C-D
  • A-B-C-D
  • A-C-B-D
A-B-D and A-C-D will be selected because they are the equal shortest paths based on the hop count cost. The update mechanism ensures that switches B and C will also have their databases updated with the new routing information. So, which of the two routes will be used? The decision of which way to send a frame is left to the manufacturer of each switch. In our case, switches B and C send frames directly to switch D, while the firmware in switch A decides whether to forward frames for switch D via switch B or via switch C. This decision is made by a round robin algorithm based on the order of connection. Let us consider the situation illustrated in the figure below:
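Enumerating the candidate routes makes the tie visible: with the A-D ISL removed, four loop-free paths remain, and two of them share the minimum hop count. The sketch below assumes the same fabric as the figure, minus the A-D link:

```python
def all_paths(links, src, dst, path=None):
    """Enumerate all loop-free paths from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    routes = []
    for neighbour in links.get(src, []):
        if neighbour not in path:
            routes += all_paths(links, neighbour, dst, path)
    return routes

# Assumed fabric from the figure, with the A-D ISL down
fabric = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}
routes = all_paths(fabric, "A", "D")   # the four routes listed above
best = min(len(p) for p in routes)
print([p for p in routes if len(p) == best])
# → [['A', 'B', 'D'], ['A', 'C', 'D']]: two equal shortest paths
```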
Decision based on order of connection
There are three servers A, B and C, which need to communicate with the storage devices D, E and F respectively. (We are assuming that there is no zoning or trunking enabled, and that all of the links are operating at the same bandwidth.) Let us assume that the three servers connect in the order A, then B, then C. Server A will be given a route from the upper switch to the lower switch; for the sake of this example, let us assume that it is via ISL1. The second server, Server B, will be assigned a route via ISL2, and Server C will have a route via ISL1. This has the result of spreading the load across the two ISLs. We can see that some traffic will flow via each of the ISLs, but we must stress that this is not the same as load balancing; this implements load sharing, not load balancing.
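The round robin assignment described above can be sketched in a few lines (the ISL and server names follow the example; a real switch makes this decision in firmware at connection time):

```python
from itertools import cycle

# Two equal-cost routes (ISL1 and ISL2) between the upper and lower switch.
isls = cycle(["ISL1", "ISL2"])

# Servers connect in the order A, B, C; each is handed the next ISL in turn.
routes = {server: next(isls) for server in ["A", "B", "C"]}
print(routes)  # → {'A': 'ISL1', 'B': 'ISL2', 'C': 'ISL1'}
```

Note that the assignment is fixed when each server connects and never consults actual traffic levels, which is why this is load sharing rather than load balancing: if servers A and C are both busy while B is idle, ISL1 stays congested.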

Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Follow IBM Redbooks