What are the characteristics of smb/cifs? (select 2 answers)

Storage network protocols enable applications, servers and other systems to interface with storage across a network. They also make it possible for users to share files and for organizations to support greater storage capacities than can be easily achieved with direct-attached storage.

A storage network protocol provides a standard set of rules that define how data is transmitted between devices. Systems such as network attached storage (NAS) and storage area networks (SANs) rely on storage protocols to facilitate data communications. Cloud storage platforms also use protocols to provide access to their data repositories.

Here are seven of the most common protocols used to support networked storage.

iSCSI is a transport layer protocol that provides block-level access to storage devices over a TCP/IP network. The protocol works on top of TCP and describes how to transmit SCSI packets across LANs, WANs or the internet. iSCSI enables IT to set up a shared storage network such as a SAN.

Organizations often turn to iSCSI because it uses standard Ethernet technologies, making it cheaper and easier to adopt than Fibre Channel (FC). iSCSI can deliver high speeds across long distances, taking advantage of multipathing, jumbo framing, data center bridging (DCB) and other technologies. SAN implementations based on iSCSI now support data rates as high as 25 Gigabit Ethernet, with 50 GbE and 100 GbE not far behind.

Major storage network protocols include iSCSI, FC, FCoE, NFS, SMB/CIFS, HTTP and NVMe-oF.

Fibre Channel is a high-speed networking technology that delivers lossless, in-order, raw block data. The technology defines multiple communication layers for transporting SCSI commands and information units using the Fibre Channel Protocol (FCP). In addition to SCSI, Fibre Channel can also interoperate with IP and other protocols. It offers point-to-point, switched and loop interfaces and can deliver data rates up to 128 Gbps.

Fibre Channel was created to support SANs and address the shortcomings in SCSI and High-Performance Parallel Interface (HIPPI). It offers a reliable and scalable protocol and interface with high throughput and low latency, making it well suited for shared network storage. When used with optical fiber, Fibre Channel can support devices as far as 10 km apart. However, FC networks can be complex and require specialized equipment such as switches, adapters and ports.

The FCoE protocol enables Fibre Channel communications to run directly over Ethernet. The protocol encapsulates the FC frames in Ethernet frames, using a lossless Ethernet fabric and its own frame format. FCoE makes it possible for LAN and SAN traffic to share the same physical network but remain isolated from each other. It works with standard Ethernet cards, switches and cables, along with FCoE-enabled components. FCoE can support the same data rates as high-speed Ethernet.

With FCoE, an organization can use a single cabling method throughout the data center, helping to simplify management and reduce costs compared to regular Fibre Channel. FCoE also retains some of the latency and traffic management benefits of regular Fibre Channel, and it can use DCB to eliminate loss during queue overflow. However, like Fibre Channel, FCoE will not work across routed networks.

NFS is both a distributed file system and network protocol for accessing and sharing files between devices on the same LAN. The system and its protocol are commonly used to support NAS. NFS is a low-cost option for network file sharing that makes it possible for users and applications to access, store and update files on a remote computer, much like they would with DAS.

NFS uses the Remote Procedure Call (RPC) protocol to route requests between clients and servers. Although participating devices must support NFS, they don't need to understand the network's details. However, RPCs can be insecure, so NFS should be deployed only on trusted networks behind firewalls. The protocol is used primarily in Linux environments, although it is supported by Windows.

SMB is a client-server communication protocol that enables users and applications to access storage and other network resources on a remote server. Because it's a response-request protocol, it transmits multiple messages between the client and server to establish a connection. SMB operates at the application layer and can run on TCP/IP networks. Like NFS, the protocol is commonly used for NAS.

Since SMB was first released, numerous dialects (implementations) of the protocol have appeared. One of the earliest was CIFS. Introduced by Microsoft, it was known as a chatty protocol that was buggy and prone to latency issues. Even so, it was embraced by OSes such as Windows, Linux and Unix. Subsequent SMB dialects have made CIFS all but obsolete. Even so, the terms SMB and CIFS are often used interchangeably or referred to as SMB/CIFS, although CIFS is only a single SMB implementation.

HTTP isn't typically thought of as a storage protocol, but it supports access to cloud storage services such as Amazon S3, Google Cloud Storage and Microsoft Azure, usually through RESTful APIs and standard HTTP/HTTPS requests. Amazon S3 has become the de facto standard for cloud object storage and is now supported by on-premises storage systems, including NAS, cementing HTTP's role as a storage protocol.

HTTP is a World Wide Web application protocol that runs on top of the TCP/IP. It provides a set of rules for transferring data between HTTP endpoints, which send requests and receive responses. The protocol is based on a client-server model and is widely supported and implemented. Most programming languages include HTTP-request capabilities, which makes it possible for almost any application to access storage using standards-based technologies.
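As a minimal illustration of HTTP-based storage access, the sketch below builds an S3-style GET request with Python's standard library. The bucket name and object key are hypothetical, and a real request would also need authentication headers (for example, an AWS Signature Version 4 signature); the request is only constructed here, not sent.

```python
from urllib.request import Request

# Hypothetical endpoint and object key for illustration only.
endpoint = "https://example-bucket.s3.amazonaws.com"
object_key = "backups/2024/archive.tar.gz"

# An object read in an S3-style API is just an HTTP GET on the object's URL.
req = Request(f"{endpoint}/{object_key}", method="GET")
req.add_header("Accept", "application/octet-stream")

# urlopen(req) would return the object's bytes; it's omitted to keep
# the sketch self-contained and offline.
print(req.get_method(), req.full_url)
```

Because the request is plain HTTP, almost any language or tool with an HTTP client can act as a storage client, which is the point the paragraph above makes.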

Built on the NVMe specification, NVMe-oF is a high-speed storage protocol for accessing solid-state storage across network fabrics such as Ethernet, Fibre Channel and InfiniBand. NVMe-oF defines a common architecture for interfacing with storage systems using NVMe message-based commands. The protocol can support many NVMe devices, while extending the distances between NVMe devices and their subsystems.

According to NVM Express Inc., 90% of the NVMe-oF protocol is the same as basic NVMe, which was designed for SSDs that connect directly to a computer through a Peripheral Component Interconnect Express bus. Like NVMe, NVMe-oF can take better advantage of a flash drive's inherent speeds, which are often limited by more traditional protocols and interfaces. Storage vendors offering all-flash arrays are quickly adopting NVMe-oF to support data-intensive workloads and high-performance computing. Many believe that NVMe-oF will eventually become the de facto protocol for enterprise storage.



SAN zoning gains importance once your SAN includes more than a couple dozen devices. During the early days of SAN, there was debate about whether zoning was needed as a Fibre Channel SAN standard. Even now, zoning standards and implementation are still evolving. So, let's take a closer look.

SAN zoning is a fabric-based service for grouping the devices in a SAN into logical segments to control communications between those devices. When zoning is configured, only devices in the same zone can communicate with each other; cross-zone communication isn't permitted. However, devices can be members of multiple zones, offering greater flexibility in setting up SAN communications.

A SAN can include switches, storage arrays and LAN and WAN servers. A SAN makes it possible for an enterprise to share its storage devices, while isolating each device in its own subnetwork. The devices communicate directly with each other via high-speed storage media. By default, every server in the SAN can access every storage device in that SAN.

SAN storage is divided into logical units that are assigned LUNs. A unit can be based on anything from a drive's partition to an entire RAID set. Each port on a storage node is mapped to multiple LUNs. When zoning isn't enabled, every server in a SAN can mount all the LUNs it sees in the fabric. Zoning prevents this widespread mounting by controlling which devices can communicate with each other. It isolates the devices into logical groups or zones, while permitting each device to participate in multiple zones.

Zoning promotes efficient management, fabric stability and security. Small SAN environments can function without zoning, but leaving every device free to interact with every other device can affect performance even at that scale.

When events in the fabric occur, the name server issues registered state change notifications (RSCNs). For example, the name server sends an RSCN when a new device logs in or an existing device leaves the fabric. When an RSCN is sent to every device for every event, the resulting traffic can affect performance, destabilize the fabric and lead to the loss of in-flight data.

If all devices are grouped in one large zone, each one is disrupted whenever a change occurs, such as adding or removing devices from the fabric. For this reason, organizations are discouraged from implementing a SAN without zoning.

Zones can be organized into zone sets. A zone set is a named group of one or more zones in the same SAN. A zone can be a member of multiple zone sets. Zone sets help enforce security across fabric-connected devices. They can also help when performing backups, maintenance or device testing without disrupting the operations of other devices. A SAN can support multiple zone sets, but only one can be active at a time.
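The zone set rules above can be modeled in a few lines. This is an illustrative sketch, not a vendor API; the zone and zone set names are made up. The key behavior is that activating one zone set implicitly replaces whichever set was active before.

```python
# Minimal model of zones and zone sets: a zone set is a named group of
# zones, and only one zone set can be active in the fabric at a time.
class Fabric:
    def __init__(self):
        self.zone_sets = {}        # zone set name -> set of zone names
        self.active_zone_set = None

    def define_zone_set(self, name, zones):
        self.zone_sets[name] = set(zones)

    def activate(self, name):
        # Activating a zone set deactivates the previously active one.
        self.active_zone_set = name

fabric = Fabric()
fabric.define_zone_set("production", ["db_zone", "app_zone"])
fabric.define_zone_set("maintenance", ["backup_zone"])

fabric.activate("production")
fabric.activate("maintenance")   # replaces "production" as the active set
print(fabric.active_zone_set)    # maintenance
```

Switching zone sets this way is what lets an administrator run backups or device tests under one configuration and then flip back, without editing individual zones.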

SAN fabric with three overlapping zones

When a node attempts to connect to a fabric, it sends a logon request. When responding, the fabric assigns a 24-bit Fibre Channel Identifier (FCID) to the connecting port on the node. The FCID -- which comprises the domain ID, area ID and port ID -- is used to route frames through an FC network. The port also comes with a unique World Wide Name (WWN) identifier that was hard-coded into the device by the manufacturer. The manufacturer also assigned a unique node WWN to the device itself.
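The 24-bit FCID described above packs three 8-bit fields. A short sketch shows how the identifier can be composed and decomposed with bit operations (the sample values are arbitrary):

```python
def pack_fcid(domain_id, area_id, port_id):
    """Compose a 24-bit FCID from its three 8-bit fields."""
    for field in (domain_id, area_id, port_id):
        assert 0 <= field <= 0xFF, "each FCID field is 8 bits"
    return (domain_id << 16) | (area_id << 8) | port_id

def unpack_fcid(fcid):
    """Split a 24-bit FCID back into (domain_id, area_id, port_id)."""
    return (fcid >> 16) & 0xFF, (fcid >> 8) & 0xFF, fcid & 0xFF

fcid = pack_fcid(0x01, 0x0A, 0x2C)
print(hex(fcid))          # 0x10a2c
print(unpack_fcid(fcid))  # (1, 10, 44)
```

Because the domain ID occupies the high-order byte, switches can route a frame toward the right domain by inspecting only that byte of the destination FCID.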

The next step occurs when a device logs on to register with the SAN's name server. The server hosts a database that maps the identifiers for each port. The database tracks the port's FCID, node WWN, port WWN and other information, including whether the device is an FC Protocol (FCP) device that uses SCSI commands.

After logging into the name server, a device also requests a list of other SAN devices that it can communicate with. This is where zoning kicks in. The name server returns only those devices that appear in the same zone -- in other words, only the devices that the first device is authorized to see.
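The name-server lookup can be sketched as a simple filter: the server returns only devices that share at least one zone with the requester. The WWN-style names below are invented for the example, and real name servers track far more attributes per entry.

```python
# Illustrative zoned name-server query. Each zone is a set of member
# device names; a device "sees" only co-members of its own zones.
zones = {
    "zone_a": {"wwn_server1", "wwn_array1"},
    "zone_b": {"wwn_server2", "wwn_array1", "wwn_array2"},
}

def name_server_query(requester):
    visible = set()
    for members in zones.values():
        if requester in members:
            visible |= members
    visible.discard(requester)   # a device isn't listed to itself
    return sorted(visible)

print(name_server_query("wwn_server1"))  # ['wwn_array1']
print(name_server_query("wwn_server2"))  # ['wwn_array1', 'wwn_array2']
```

Note that wwn_array1 appears in both zones, illustrating how one device can participate in multiple zones while the two servers remain invisible to each other.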

From this list, the device will then typically log on to each of the listed devices to determine the type of FCP/SCSI device it is. This is similar to normal SCSI where the SCSI controller/server scans the bus and queries each device on the bus for its properties.

Once a device is added to a zone, it receives only those RSCNs related to that zone. In this way, the device is spared the type of RSCN storm that can affect a SAN when zoning is not used, and the network avoids the performance hit that comes when all RSCNs are transmitted to all devices.
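The scoped delivery of RSCNs follows the same membership logic: an event about one device is delivered only to that device's zone peers. A hedged sketch, with made-up device names:

```python
# Sketch of zone-scoped RSCN delivery: when a device joins or leaves,
# only members of its zone(s) are notified, not the whole fabric.
zones = {
    "zone_a": {"server1", "array1"},
    "zone_b": {"server2", "array2"},
}

def rscn_recipients(changed_device):
    recipients = set()
    for members in zones.values():
        if changed_device in members:
            recipients |= members
    recipients.discard(changed_device)  # the device that changed isn't notified
    return sorted(recipients)

# A change to array1 disturbs only zone_a; server2 and array2 never see it.
print(rscn_recipients("array1"))  # ['server1']
```

Without zoning, every device would appear in one implicit zone and every change would fan out to the entire fabric, which is exactly the RSCN storm the paragraph above describes.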

Confusion sometimes surrounds zoning terminology, particularly with such concepts as hard zoning, soft zoning, port zoning and WWN zoning. Part of the confusion comes from the way in which zone members are identified and what that identification means in terms of how zoning is enforced.

Zone members are identified by their WWNs or by their port numbers. When port numbers are used, an identifier might be the FCID or a combination of the domain ID and port ID. If it's the latter, one of two formats is typically used: X/Y or X,Y, with X representing the domain ID and Y denoting the port number. Zoning that uses port numbers for device identification is referred to as port zoning. Zoning that uses WWNs for device identification is referred to as WWN zoning.

Hard zoning is zoning that's enforced in the SAN hardware, where it blocks access by devices outside the zone. Soft zoning occurs when software invokes the filtering capabilities inherent in FC switching, thus preventing devices outside the zone from accessing the protected ports.

On the surface, hard zoning and soft zoning appear to achieve the same results, but consider the following example: Pat might not know Kelly's telephone number, but if Pat guesses the number correctly, Kelly's phone will ring. In the same way, soft zoning does nothing to prevent an unauthorized device from sending packets to a protected port in a zone, which can represent a serious security threat.

By comparison, hard zoning is similar to Kelly being able to block Pat's call altogether. Even if Pat guesses the phone number correctly, Kelly's phone will not ring, leaving no way for Pat to get through. Hard zoning works much the same way. It ensures that unauthorized devices can't connect to a protected port, providing more solid security.

Hard zoning and soft zoning can use either port number identifiers or WWN identifiers. At one time, SAN switches primarily used hard zoning with port numbers and soft zoning with WWNs. This led to the misconception that port zoning is synonymous with hard zoning and WWN zoning is synonymous with soft zoning, but they remain distinctly separate concepts. Hard and soft zoning refer to how zoning is enforced, and port and WWN zoning refer to how zone members are identified.

Most of today's SAN switches can use hard zoning with either port numbers or WWNs. In fact, it's often recommended to use hard zoning with WWNs.

WWN zoning uses WWNs to identify members in a zone, rather than using port numbers. A WWN is a globally unique identifier that is hardcoded into the device. In the past, WWN zoning was typically implemented through soft zoning, but now it's more often implemented through hard zoning. Most of today's SANs can support zones based on both node WWNs and port WWNs, although it's generally recommended to use port WWNs.
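The practical difference between the two identification schemes shows up when a device is recabled. In this illustrative sketch (all names invented), a WWN-based zone still matches the device after it moves to a new port, while a port-based zone does not:

```python
# The fabric maps physical ports to whatever WWN is plugged into them.
port_map = {("domain1", "port3"): "wwn_hba_x"}

wwn_zone = {"wwn_hba_x"}             # member identified by its WWN
port_zone = {("domain1", "port3")}   # member identified by domain/port

def in_wwn_zone(wwn):
    return wwn in wwn_zone

def in_port_zone(location):
    return location in port_zone

# Recable the HBA from port 3 to port 7:
del port_map[("domain1", "port3")]
port_map[("domain1", "port7")] = "wwn_hba_x"

print(in_wwn_zone("wwn_hba_x"))            # True  -- the WWN follows the device
print(in_port_zone(("domain1", "port7")))  # False -- the port zone now misses it
```

The flip side, covered below, is that replacing a failed HBA changes the WWN and breaks the WWN-based zone, whereas the port-based zone would keep working unchanged.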

The advantage of WWN zoning is that the WWN follows the device, providing more flexibility than port zoning. If there's a change in the SAN topology, a switch's domain ID or where a device is plugged in, the zone is still good. For example, the SAN can be recabled without needing to reconfigure the zoning information. Even if the port numbers change, the zone members aren't affected.

Because WWNs are burnt into member devices, WWN zoning is generally more flexible than port zoning. However, these hardcoded identifiers can also present a challenge. If a host bus adapter (HBA) or storage interface fails, the zoning configuration must be adjusted to accommodate the new device and its WWNs.

Port zoning uses FCIDs or domain ID/port ID combinations to identify members in a zone. In this configuration, the zones are tied to the physical ports. Any devices connected to those ports can communicate with each other. Unlike WWNs, port-based identifiers aren't globally unique because different SAN fabrics can use the same domain and port IDs. Port zoning is typically implemented through hard zoning, although it can also be implemented through soft zoning.

The advantages and disadvantages of port zoning are essentially the opposite of WWN zoning. For example, if an HBA or storage interface fails, the zoning configuration doesn't need to be adjusted to accommodate the new device because the replacement occupies the same port identifier. However, if the SAN cabling or topology changes, all the zones must be reconfigured.

There are no hard-and-fast rules when it comes to zoning. Most, if not all, FC switches support some form of zoning to control which devices on which ports can access other devices or ports. In general, IT teams should use WWN zoning for identifying members and hard zoning for enforcing zoning, but ultimately zoning should be configured based on business requirements.

Zoning can also be used in conjunction with other technologies to better control communication and security. For example, IT teams might take steps to control which devices an application can see on a server and whether the application is permitted to talk with other devices, or they might take advantage of an HBA's masking capabilities to control whether a server can interact with other devices.

IT teams might also use the server's OS to control which devices the server tries to mount as storage volumes, or they might add a software layer for clustering, volume management or file system sharing to control device access by applications. For storage, teams might use selective presentation to control which servers can access which LUNs and on which ports. In this way, access requests from unlisted devices are ignored or rejected.
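Selective presentation (often called LUN masking) can be sketched as a per-server allowlist maintained on the storage array. The server WWNs and LUN numbers below are illustrative:

```python
# Each server WWN is mapped to the set of LUNs it may access; access
# requests from unlisted devices are rejected.
lun_masks = {
    "wwn_app_server": {0, 1},
    "wwn_db_server": {2},
}

def can_access(server_wwn, lun):
    return lun in lun_masks.get(server_wwn, set())

print(can_access("wwn_app_server", 1))   # True
print(can_access("wwn_db_server", 1))    # False -- LUN 1 isn't presented to it
print(can_access("wwn_rogue_host", 0))   # False -- unlisted device rejected
```

Masking complements zoning rather than replacing it: zoning controls which devices can reach the array's port at all, while masking controls which LUNs each permitted server is shown.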

The best advice is to use a blended approach. IT teams can control which devices and LUNs are mounted on the server by using the OS or other software capabilities, thus avoiding a mount-all approach. They can also use selective presentation on the storage array, along with zoning in the fabric. In addition, teams can use access control lists to control file-level access, as well as firewalls, security gateways and packet filtering. Each of these elements does a complementary and slightly different job in protecting data.

A virtual storage area network (VSAN) is a logical partition that enables traffic to be isolated in specific portions of a SAN. If a problem occurs in one VSAN, it can be handled with minimal disruption to the rest of the network. The VSAN enables devices in a SAN to communicate with each other regardless of where they're physically located in the fabric. Each VSAN has its own fabric services and management capabilities, just like a physical SAN.

In fact, a VSAN is equivalent to a physical SAN in most ways, whereas a zone is merely a logical grouping of devices:

  • A VSAN provides an independent environment that supports such features as isolation, routing, naming and zoning capabilities, similar to a physical SAN. An individual zone doesn't provide this level of independence, nor does it support these types of features. All zones within a fabric rely on the same fabric services and management capabilities; none of these are zone-specific.
  • A VSAN can contain zones, but a zone can't contain a VSAN. A zone is always limited to a single VSAN; it can't span multiple VSANs.
  • HBAs and storage devices can belong to only one VSAN, but they can participate in multiple zones.
  • VSAN members are identified by their VSAN or port IDs. Zone members use port IDs or WWNs.
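The containment rules in the list above can be captured in a small model. This is a sketch with invented names: each zone lives inside exactly one VSAN, while a device attaches to one VSAN but may appear in several of that VSAN's zones.

```python
# VSANs contain zones; zones contain device names. A zone can't span
# VSANs, and a device belongs to a single VSAN.
vsans = {
    "vsan10": {
        "zone_a": {"hba1", "array1"},
        "zone_b": {"hba1", "array2"},   # hba1 participates in two zones
    },
    "vsan20": {
        "zone_c": {"hba2", "array3"},
    },
}

def zones_for_device(vsan, device):
    """List the zones within one VSAN that a device participates in."""
    return sorted(z for z, members in vsans[vsan].items() if device in members)

print(zones_for_device("vsan10", "hba1"))  # ['zone_a', 'zone_b']
```

Because each top-level VSAN entry carries its own zone table, the model mirrors the point that every VSAN has independent fabric services, including its own zoning database.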

Unlike zoning, VSANs make it possible to set up redundant environments to provide a backup in case problems arise in the primary environment. Redundant environments are needed because services such as the name server run as a single distributed service in a fabric. A badly behaved device could disrupt the name service to the extent that all devices on the fabric, not just those in the same zone, are affected.

The idea behind a VSAN is to afford a higher-level construct with a totally separate name server database rather than one common to all zones. It might even run as a totally separate service in the switch to minimize the risk of cross-contamination and to ensure that problems are more highly localized. Of course, a device connected to two separate VSANs can misbehave and bring down both environments, but VSAN redundancy still provides higher assurance against failure.

Organizations can set up zoning in their SAN fabrics in a variety of configurations, but many IT teams base their designs on the following zoning schemas.

Common host. Small and midsize environments tend to use a common host scheme. This method allocates one zone per OS, server manufacturer, HBA brand or similar configuration. This offers a fairly simple approach for environments using the same branded IT gear. It creates a zone consisting of all the common servers, plus the storage devices they must access.

Single target, multiple initiators. Organizations often start with the common host approach, but then want better granularity in their zoning, so they move to a zoning model in which each zone consists of one port on one storage array, along with all the devices permitted to access that port. This form of zoning also makes it easier for a SAN administrator to track whether the array's OS support guidelines are being followed.

Single initiator, multiple targets. Increasingly common in heterogeneous SANs, this approach comes from a simple premise: SCSI initiators (servers) don't need to talk to other SCSI initiators. In this scenario, each zone consists of one server or HBA and all the storage devices the host is authorized to communicate with. This method avoids one server interfering with other servers.

Single initiator, single target. This is the most restrictive approach: zones are kept to their absolute minimum usable size, providing maximum security. This approach has been used successfully in a few cases but isn't as common as other approaches. Without good software, this configuration is difficult to set up and manage.
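The two single-initiator schemas differ only in how zones are generated from the same inventory. A hedged sketch, with hypothetical HBA and array port names, makes the difference in zone count concrete:

```python
initiators = ["hba1", "hba2"]            # server HBAs (SCSI initiators)
targets = ["array1_p0", "array2_p0"]     # storage array ports (targets)

def single_initiator_multi_target(initiators, targets):
    """One zone per initiator, containing that initiator plus all targets."""
    return {f"z_{i}": {i, *targets} for i in initiators}

def single_initiator_single_target(initiators, targets):
    """One zone per initiator/target pair -- the minimum usable zone size."""
    return {f"z_{i}_{t}": {i, t} for i in initiators for t in targets}

print(len(single_initiator_multi_target(initiators, targets)))   # 2 zones
print(len(single_initiator_single_target(initiators, targets)))  # 4 zones
```

The zone count in the pairwise scheme grows with the product of initiators and targets, which is why the article notes it is hard to manage without good tooling.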

As with all things, the approach that organizations take depends as much on the available technology as on how they operate. One thing is for sure: They should choose a zoning strategy and then use it fully and effectively. SAN zoning isn't the answer to all storage problems, but it's a vital part of storage provisioning, even if it seems like overkill in small SANs. Once organizations get going in the right direction, it'll be easier to continue with an effective and reliable approach.

IT teams should approach zoning on a case-by-case basis, considering the supported workloads and type of equipment. Use WWNs for member identification unless specific circumstances require port-based identification. The same goes for using hard zoning rather than soft zoning. Also, use port WWNs rather than node WWNs to provide more granular control over member communications and to avoid multipathing issues.

IT teams should implement zoning even if they're using LUN masking, and they should consider zoning even for small SANs. The teams should also ensure that they cover the basics, such as denying access to the default zone, keeping each zone to a manageable size and deleting unused elements -- such as zones, members or aliases -- when the zone configuration changes. In general, organizations should use single-initiator zones rather than multi-initiator zones. When setting up zoning, admins should refer to the vendor's documentation for specific recommendations about best practices.