In Part I of this series, I discussed some design options for a virtual infrastructure (Traditional Rackmount, Converged Rackmount, and Converged Blade). Using the Converged Blade option as the model going forward, we'll explore the individual components of that solution in more detail, starting in this post with the Compute Platform (Cisco UCS B-Series).
Let’s start with the “brains” of the UCS B-Series, the 6100 series fabric interconnects.
6100 Series Fabric Interconnects:
Interconnect Options:
- 6120XP – (20) 10GbE and FCoE-capable SFP+ port Fabric Interconnect with a single expansion module slot
- 6140XP – (40) 10GbE and FCoE-capable SFP+ port Fabric Interconnect with two expansion module slots
Expansion Module Options:
- 10Gbps SFP+ – (6) ports
- 10Gbps SFP+ – (4) ports, 1/2/4 Gbps Native Fibre Channel SFP+ – (4) ports
- 1/2/4 Gbps Native Fibre Channel SFP+ – (8) ports
- 2/4/8 Gbps Native Fibre Channel SFP+ – (6) ports
Below is a diagram of the UCS 6120XP labeled with the different ports:
A redundant UCS system consists of two 6100 series fabric interconnects. They are connected via the cluster ports so that they act in unison; if a port, or an entire fabric interconnect, were to fail, the other would take over.
The 6100 series fabric interconnects provide 10Gb connectivity to the UCS 5108 via a module called the 2104XP fabric extender. A single pair of 6100 series fabric interconnects can manage up to twenty chassis depending on your bandwidth needs per chassis.
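To get a feel for that tradeoff, here is a rough back-of-the-napkin sketch in Python. It is purely illustrative (not a Cisco sizing tool) and assumes only the fixed ports on the interconnect are used for chassis uplinks, ignoring expansion modules and any ports reserved for upstream connectivity:

```python
# Illustrative only: rough fabric interconnect sizing math, not a Cisco tool.
# Assumes only the fixed ports are used for chassis uplinks (expansion
# modules and any ports reserved for upstream links are ignored).

FIXED_PORTS = {"6120XP": 20, "6140XP": 40}

def chassis_capacity(model: str, uplinks_per_chassis: int) -> dict:
    """How many 5108 chassis one interconnect can serve at a given uplink count."""
    ports = FIXED_PORTS[model]
    return {
        "model": model,
        "uplinks_per_chassis": uplinks_per_chassis,
        "max_chassis": ports // uplinks_per_chassis,
        # Each uplink is 10Gb; a redundant pair of interconnects doubles
        # the total bandwidth available to each chassis.
        "gbps_per_chassis_per_fabric": uplinks_per_chassis * 10,
    }

if __name__ == "__main__":
    for uplinks in (1, 2, 4):
        print(chassis_capacity("6120XP", uplinks))
```

With a single uplink per chassis the 6120XP tops out at twenty chassis, which is where the "up to twenty" figure above comes from; dedicating more uplinks per chassis trades chassis count for bandwidth.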
5108 Chassis (8 blade slots per chassis):
As you can see from the diagram, there are two 2104XP Fabric Extenders per chassis. Each 2104XP has four 10Gb ports, for a total of up to 80Gbps of throughput per chassis. So there is plenty of bandwidth, with the added benefit of fewer cables and, consequently, easier cable management. The only cables that will ever be needed at the back of the 5108 chassis are up to eight for connectivity and up to four for power.
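Put as simple arithmetic (again just a sketch, assuming all four ports on each 2104XP are cabled and all four power supplies are installed):

```python
# Sketch of the per-chassis cabling and throughput math described above.
FEX_PER_CHASSIS = 2          # two 2104XP fabric extenders per 5108
PORTS_PER_FEX = 4            # four 10Gb uplinks on each 2104XP
GBPS_PER_PORT = 10
MAX_POWER_CABLES = 4         # one per installed power supply

data_cables = FEX_PER_CHASSIS * PORTS_PER_FEX          # up to 8
chassis_throughput_gbps = data_cables * GBPS_PER_PORT  # up to 80 Gbps
total_cables = data_cables + MAX_POWER_CABLES          # up to 12 per chassis

print(data_cables, chassis_throughput_gbps, total_cables)  # 8 80 12
```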
Since the bandwidth needed for the external network is calculated at the Fabric Interconnect level, all that is left at this point is to calculate the computing needs of the workload (CPU and RAM). This is where the blades themselves come in; a rough sizing sketch follows the blade options below.
Blade Options:
- Full-Width Blades (B250 M1, B440 M1) take up two chassis slots each
- Half-Width Blades (B200 M2, B230 M1) take up one chassis slot each
- You can mix blade types within the same chassis, up to the eight available chassis slots
- Processor configurations vary by blade:
- The B2xx blades have 4-, 6-, or 8-core processors in a dual-socket configuration
- The B440 M1 can hold up to (4) sockets of Intel Xeon 7500 series 8-core processors
- The B250 M1 holds up to 384GB of RAM in a full-width form factor
- The B200 M2 holds up to 96GB of RAM in a half-width form factor
- The B230 M1 holds up to 256GB of RAM in a half-width form factor
- The full-width servers can hold up to (2) mezzanine cards for connectivity; each card has (2) 10Gb ports, for 20Gbps per card
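To make the "calculate the computing needs" step concrete, here is the rough sizing sketch mentioned earlier. The workload numbers are made up purely for illustration, and the per-blade core counts are my own assumptions based on the largest processors each blade supports (actual counts depend on the CPUs you order); a real design would also account for N+1 capacity, DIMM pricing, and hypervisor overhead:

```python
import math

# RAM maximums come from the list above; core counts are assumed maximums
# (2 x 6-core for the B200 M2 and B250 M1, 2 x 8-core for the B230 M1).
BLADES = {
    "B200 M2": {"slots": 1, "cores": 12, "ram_gb": 96},
    "B230 M1": {"slots": 1, "cores": 16, "ram_gb": 256},
    "B250 M1": {"slots": 2, "cores": 12, "ram_gb": 384},
}

def blades_needed(model: str, workload_cores: int, workload_ram_gb: int) -> dict:
    """How many blades of one model cover a workload, and the chassis slots used."""
    spec = BLADES[model]
    count = max(math.ceil(workload_cores / spec["cores"]),
                math.ceil(workload_ram_gb / spec["ram_gb"]))
    return {"model": model, "blades": count, "chassis_slots": count * spec["slots"]}

# Hypothetical workload: 120 cores and 2.5 TB of RAM across the cluster.
for model in BLADES:
    print(blades_needed(model, workload_cores=120, workload_ram_gb=2560))
```

For a RAM-heavy workload like this one, the memory ceiling, not the core count, ends up driving the blade count, which is exactly why the form factor and memory capacity options above matter.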
M81KR Virtual Interface Card:
The M81KR Virtual Interface Card deserves a special mention. This mezzanine card can divide its (2) 10Gbps ports into a combination of up to 56 virtual Ethernet and Fibre Channel interfaces. This way you can manage port isolation and QoS for your blades much as you may be used to doing in a traditional rackmount virtual infrastructure. As this series continues, we will explore why this functionality may be needed for the virtual infrastructure when using the Converged Blade Infrastructure model.
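As a purely hypothetical illustration of what that carving-up might look like for a virtualization host, here is one possible layout. The interface names, counts, and QoS weights below are my own example, not a Cisco default or a real service profile:

```python
# Hypothetical vNIC/vHBA layout on one M81KR (example only, not a Cisco default).
# Each entry carves a virtual interface out of the card's two 10Gb ports and
# pins it to fabric A or B, with a relative QoS weight for times of contention.
virtual_interfaces = [
    {"name": "vnic-mgmt",     "type": "eth", "fabric": "A", "qos_weight": 1},
    {"name": "vnic-vmotion",  "type": "eth", "fabric": "B", "qos_weight": 2},
    {"name": "vnic-vm-data1", "type": "eth", "fabric": "A", "qos_weight": 4},
    {"name": "vnic-vm-data2", "type": "eth", "fabric": "B", "qos_weight": 4},
    {"name": "vhba-fc0",      "type": "fc",  "fabric": "A", "qos_weight": 3},
    {"name": "vhba-fc1",      "type": "fc",  "fabric": "B", "qos_weight": 3},
]

# The card presents each of these to the hypervisor as if it were a separate
# physical adapter, well under the 56-interface ceiling mentioned above.
assert len(virtual_interfaces) <= 56
for vif in virtual_interfaces:
    print(f'{vif["name"]:<14} {vif["type"]:<3} fabric {vif["fabric"]} weight {vif["qos_weight"]}')
```

The point is that the isolation you would normally get from six or more physical NICs and HBAs in a rackmount host can be reproduced logically over two 10Gb ports.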
This post explored some of the Compute Platform hardware components. The next post in this series will look at the software and management components that make the UCS compute platform ideal for a structured virtual infrastructure that can scale incrementally and be managed easily.