NFV - Storage Concepts

NFV consists of compute, storage, and networking functions. In this post, we will discuss some definitions and concepts related to the storage function of NFV.

  • Block storage: Block storage is a type of data storage typically used in SAN environments where data is stored in volumes, also referred to as blocks. Each block acts as an individual hard drive and can be used to store files or work as storage for special applications such as databases or file systems. The most common examples of block storage are SANs (IP and FC SANs), iSCSI disks, and local disks.
  • Object Storage: Object storage is a computer data storage architecture that manages data as objects, as opposed to file systems that manage data as a file hierarchy. Each object has a globally unique identifier and the storage system can find the object based on this identifier.
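To make the identifier-based lookup concrete, here is a minimal, hypothetical Python sketch of an object store that addresses data by a globally unique ID rather than by a directory path (the class and method names are illustrative and not part of any real object-storage API):

```python
import uuid

class TinyObjectStore:
    """Toy object store: data is addressed by a flat, globally unique ID,
    not by a hierarchical file path."""

    def __init__(self):
        self._objects = {}  # object_id -> (data, metadata)

    def put(self, data: bytes, **metadata) -> str:
        object_id = str(uuid.uuid4())       # globally unique identifier
        self._objects[object_id] = (data, metadata)
        return object_id                    # caller keeps the ID to retrieve the object later

    def get(self, object_id: str) -> bytes:
        data, _ = self._objects[object_id]  # lookup purely by ID, no path traversal
        return data

store = TinyObjectStore()
oid = store.put(b"VNF image payload", content_type="application/octet-stream")
print(oid, store.get(oid))
```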
  • File Storage: File storage stores data in a hierarchical structure. For end users, file storage is just like a shared folder where they can save data. Shared Windows folders use the file storage architecture.
  • Shared storage: Shared storage is a medium accessible by all hosts or VMs in a SAN environment. Users do not need to duplicate files to their own computers. Generally, block storage provides space for shared files. File storage can also provide the space if it has sufficient resources.
  • NAS Storage: Network-attached storage (NAS) is a type of dedicated file storage device that provides local-area network (LAN) nodes with file-based shared storage through a standard Ethernet connection. NAS uses TCP/IP, ATM, and FDDI to connect storage devices, switches, and clients, and all these components form a private storage network.
  • Disk Domain: A disk domain can include the same or different types of disks. Disk domains are isolated from each other. Therefore, services carried by different disk domains do not affect each other in terms of performance or faults (if any). A disk domain must be created before you create a storage pool. Plan separate disk domains for management and service VNFs. If a DMZ is planned, disk domains for service NEs can be further divided into trusted disk domains and DMZ disk domains. The following is an example of disk domains planned for a DC:
    • ManagerDomain01: used to deploy VMs for management VNFs
    • ServiceDomain01: used to deploy VMs for service VNFs in the trusted zone
  • Storage Media: Common storage media include solid state drives (SSDs), serial attached SCSI (SAS) disks, and nearline SAS (NL-SAS) disks. Each type of disk has its own advantages and disadvantages, mainly in performance and cost. Storage tiers are determined by disk performance and cost, so you must understand each disk type before configuring storage services.
    • SSD: SSDs have no rotational latency and deliver higher IOPS than hard disk drives (HDDs), giving them a clear advantage in applications that are latency-sensitive or I/O-intensive. SSDs also improve the performance of bandwidth-intensive applications. SSD performance is mainly determined by the NAND flash memory used. There are three primary types of NAND memory:
      • single level cell (SLC),
      • multi-level cell (MLC), and
      • enterprise-grade MLC (eMLC).
        • SLC has a performance advantage over eMLC, and eMLC provides better performance than MLC. In tiered storage, SSDs are introduced to the high-performance tier.
    • SAS Disks: SAS disks offer average performance, capacity, and reliability but are more cost-effective than SSDs. SAS data is stored on magnetic disks that generally rotate at 10K or 15K RPM. In tiered storage, SAS disks are introduced to the performance tier.
    • NL-SAS Disks: Hard disk drive performance is mainly determined by rotational speed; a higher rotational speed means better performance. NL-SAS disks generally rotate at lower speeds (about 7.2K RPM) than SAS disks. Despite their lower performance, NL-SAS disks provide the largest capacity among SAS disks and SSDs and are therefore introduced to the capacity tier. NL-SAS disks are also energy efficient: compared with SAS disks, they use about 96% less power per TB of storage. Statistics show that 60% to 80% of data from most applications is rarely accessed, and this data can be stored on NL-SAS disks.
  • Storage Tier: A storage tier is a collection of storage media with the same performance. Disks are segregated into the following three storage tiers based on their performance levels (a small selection sketch follows the list):
      • High-performance tier: storage media - SSD; holds the most frequently accessed data
      • Performance tier: storage media - SAS; holds moderately accessed data
      • Capacity tier: storage media - NL-SAS; holds rarely accessed data
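As a small illustration of how data could be steered to a tier based on access frequency, the following hedged Python sketch encodes the three tiers above; the thresholds are made up for illustration, and a real disk array decides placement internally:

```python
# Illustrative only: maps the three storage tiers to their media and
# picks a tier from how often a data block is accessed per day.
TIERS = {
    "high-performance": {"media": "SSD",    "holds": "most frequently accessed data"},
    "performance":      {"media": "SAS",    "holds": "moderately accessed data"},
    "capacity":         {"media": "NL-SAS", "holds": "rarely accessed data"},
}

def pick_tier(accesses_per_day: int) -> str:
    # Thresholds are arbitrary examples, not vendor defaults.
    if accesses_per_day > 1000:
        return "high-performance"
    if accesses_per_day > 10:
        return "performance"
    return "capacity"

for freq in (5000, 50, 2):
    tier = pick_tier(freq)
    print(freq, "accesses/day ->", tier, "(", TIERS[tier]["media"], ")")
```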
  • Storage Pool/Storage Resource Pool: A storage pool is a container of storage resources, in which multiple file systems can be created. Storage resources required by application servers all come from storage pools. One storage pool corresponds to one disk domain, for example, ServiceDomain01 corresponds to ServicePool01.
  • Backend Storage: Backend storage, also known as disk array storage, provides storage resources for OpenStack. An administrator needs to configure backend storage and associate it with a storage device after a storage pool is created. If multiple storage pools use the same storage device and map to the same backend storage, separate the storage pool names using commas when you specify Storage Pool Name on the OpenStack web client. If the storage pools map to different backend storage, use number signs (#) to separate storage pool names and backend storage names. For example, if backend storage ipsan1 uses poolA and poolB, ipsan2 uses poolC, and ipsan3 uses poolD, set Storage Pool Name to poolA,poolB#poolC#poolD and Backend Name to ipsan1#ipsan2#ipsan3. The sketch below illustrates this separator convention.
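To show how the comma and number-sign separators pair up, here is a hedged Python sketch that parses the example Storage Pool Name and Backend Name strings into a backend-to-pools mapping; it only illustrates the naming convention and is not code used by OpenStack itself:

```python
def parse_backend_pools(pool_names: str, backend_names: str) -> dict:
    """Split '#'-separated backends and ','-separated pools into a mapping."""
    pool_groups = pool_names.split("#")   # one group of pools per backend
    backends = backend_names.split("#")
    if len(pool_groups) != len(backends):
        raise ValueError("each backend needs exactly one pool group")
    return {backend: group.split(",") for backend, group in zip(backends, pool_groups)}

# Example from the text: ipsan1 uses poolA and poolB, ipsan2 uses poolC, ipsan3 uses poolD.
mapping = parse_backend_pools("poolA,poolB#poolC#poolD", "ipsan1#ipsan2#ipsan3")
print(mapping)   # {'ipsan1': ['poolA', 'poolB'], 'ipsan2': ['poolC'], 'ipsan3': ['poolD']}
```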
  • Disk Types/Volume Types: Disk Types is a parameter that can be configured on the OpenStack OM web client. It specifies the types of disks provided by backend storage. An administrator can create suitable disks for a VM by specifying the Disk Types the VM can use. The parameter must be the same as storagePoolRef, which is defined in the VNFD file. Volume Type and Disk Types are almost the same; the only difference is that Volume Type is a parameter for OpenStack, while Disk Types is a parameter for FusionSphere OpenStack OM. Both indicate the types of disks provided by backend storage and have the same mapping relationship with backend storage.
  • Multiple Disk Arrays: Multiple disk arrays can be deployed if customers require that active/standby or load-sharing VMs use different disk arrays. This deployment requires the configuration of two volume types to map different disk arrays.
  • SAN Storage: A storage-area network (SAN) is used to move data between servers and the various storage devices those servers use. In SAN, storage devices are connected to application servers over FC switches. SAN is used as backend storage to provide storage resources for EMS, VNFM, and VNF VMs.
  • LUN: A logical unit number (LUN) identifies a logical unit, which is a device addressed by the Small Computer Systems Interface (SCSI) protocol or by SAN protocols that encapsulate SCSI, such as Fibre Channel (FC) or Internet Small Computer Systems Interface (iSCSI). LUNs for VMs are defined in the VNFD file. The LUN creation process is as follows (a hedged sketch follows the list):
      • An administrator creates a volume and maps it to a backend storage name when creating virtual disks for a VM.
      • Cinder on OpenStack uses the backend storage name to find the corresponding storage pool on the disk array and creates a LUN on the storage pool.
      • After the LUN is created, Cinder asks Nova to attach this LUN to the VM.
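The three steps above could look roughly like the following with the openstacksdk cloud layer; the cloud name, server name, and volume type are hypothetical placeholders, and the Cinder-side pool selection and LUN creation on the array happen behind the create_volume call:

```python
import openstack

# Connect using a cloud entry from clouds.yaml (the name is a placeholder).
conn = openstack.connect(cloud="my-nfv-cloud")

# Step 1: the administrator creates a volume; the volume type maps to a
# backend storage name (and therefore to a storage pool on the disk array).
volume = conn.create_volume(size=20, name="vnf-data-disk",
                            volume_type="ipsan1_type", wait=True)

# Step 2 happens inside Cinder: the backend name is used to locate the
# storage pool on the disk array and a LUN is created in that pool.

# Step 3: Cinder asks Nova to attach the LUN-backed volume to the VM.
server = conn.get_server("vnf-vm-01")   # hypothetical VM name
conn.attach_volume(server, volume)
print("attached", volume.id, "to", server.id)
```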
  • Role: One or multiple projects form a service, and multiple services that provide the same or similar functions are deployed together to form a role. For example, the role measure, which provides measuring and monitoring and is usually deployed on a controller node, is composed of the Ceilometer and Tangram services.
  • MongoDB: MongoDB is a cross-platform, document-oriented database, classified as a NoSQL database. OpenStack uses MongoDB as the storage backend of the metering service Ceilometer because MongoDB provides larger data storage, higher I/O, and looser relational constraints than SQL (relational) databases. A small pymongo sketch follows the notes below.
    • By default, MongoDB uses the local hard disks on blade servers to store data.
    • If the amount of data is large enough, it will greatly consume IOPS resources, which may affect other FusionSphere OpenStack functions. As a result, it is recommended that you deploy MongoDB on a disk array to reduce the IOPS consumption of local disks.
    • Hard disks used by MongoDB must belong to the same disk array and be allocated to a dedicated disk domain.
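For a feel of how a document-oriented store differs from a relational schema, the sketch below writes a Ceilometer-style metering sample into MongoDB with pymongo; the connection string, database, and collection names are illustrative and not Ceilometer's actual internal schema:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
db = client["metering_demo"]                        # illustrative database name

# Each sample is a self-contained document; no fixed table schema is required.
sample = {
    "counter_name": "cpu_util",
    "resource_id": "vm-0001",
    "counter_volume": 42.5,
    "timestamp": datetime.now(timezone.utc),
}
db.samples.insert_one(sample)

# Simple query: samples for one resource.
for doc in db.samples.find({"resource_id": "vm-0001"}).limit(5):
    print(doc["counter_name"], doc["counter_volume"])
```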
  • RAID: Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical disk drive components into a single logical unit for the purposes of data redundancy, performance improvement, or both. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word RAID followed by a number, for example RAID 0 or RAID 1. Each schema, or RAID level, provides a different balance among reliability, availability, performance, and capacity.
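To make the idea of data distribution layouts concrete, here is a toy Python sketch of how RAID 0 stripes chunks across drives while RAID 1 mirrors them; it is purely illustrative, since real RAID operates at the block and controller level:

```python
def raid0_layout(chunks, num_drives):
    """RAID 0: stripe chunks round-robin across drives (performance, no redundancy)."""
    drives = [[] for _ in range(num_drives)]
    for i, chunk in enumerate(chunks):
        drives[i % num_drives].append(chunk)
    return drives

def raid1_layout(chunks, num_drives):
    """RAID 1: mirror every chunk onto every drive (redundancy, reduced usable capacity)."""
    return [list(chunks) for _ in range(num_drives)]

data = ["A", "B", "C", "D"]
print("RAID 0:", raid0_layout(data, 2))   # [['A', 'C'], ['B', 'D']]
print("RAID 1:", raid1_layout(data, 2))   # [['A', 'B', 'C', 'D'], ['A', 'B', 'C', 'D']]
```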
  • LVM and VG: Logical volume management (LVM) is a technology that manages logical volumes for the Linux kernel. OpenStack Cinder also uses this technology. To understand LVM, you must have the following concepts in mind:
      • Physical volumes (PVs): correspond to hard disks, hard disk partitions, or LUNs of an external storage device.
      • Volume group (VG): is a collection of physical volumes. Users can add or remove volumes based on their needs. On FusionSphere OpenStack, the volume group cpsVG is created, which functions just like the local disk (C:) on Windows. After FusionSphere OpenStack is installed, hard disks on blade servers will be added to cpsVG.
      • Logical volume (LV): The equivalent of a disk partition in a non-LVM system. The LV is visible as a standard block device; as such the LV can contain a file system.
      • Physical extents (PEs): are a sequence of chunks that compose a PV.
    • To put it simply, LVM works by chunking PVs into PEs. One or more PVs are grouped into a VG, and LVs are then allocated from that VG; each logical extent (LE) of an LV is mapped onto a PE of a PV. The LVs act as virtual disk partitions and can be managed as such by using LVM.
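The following hedged Python sketch models the PV/PE/VG/LV/LE relationships described above in miniature; the extent size and names are arbitrary, and real LVM does this work in the kernel's device-mapper:

```python
PE_SIZE_MB = 4  # a common default extent size; arbitrary here

class PhysicalVolume:
    def __init__(self, name, size_mb):
        self.name = name
        # Chunk the PV into physical extents (PEs).
        self.pes = [f"{name}:PE{i}" for i in range(size_mb // PE_SIZE_MB)]

class VolumeGroup:
    def __init__(self, name, pvs):
        self.name = name
        self.free_pes = [pe for pv in pvs for pe in pv.pes]  # pool of PEs from all member PVs

    def create_lv(self, name, size_mb):
        needed = size_mb // PE_SIZE_MB
        pes = [self.free_pes.pop(0) for _ in range(needed)]
        # Each logical extent (LE) of the LV maps onto one PE.
        return {"lv": name, "le_to_pe": dict(enumerate(pes))}

vg = VolumeGroup("cpsVG", [PhysicalVolume("sdb", 64), PhysicalVolume("sdc", 64)])
lv = vg.create_lv("cinder-volume-01", 24)
print(lv["le_to_pe"])
```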
  • Storage Multipathing: Multipathing is the establishment of multiple physical paths between a server and the storage device that supports it. In storage networking, the physical path between a server and its storage device can fail. If there is only one physical path between the two devices, that path is a single point of failure (SPOF). To avoid a SPOF, multipathing is used to improve SAN reliability and availability and to strengthen system performance.
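As a toy illustration of why multiple paths remove the single point of failure, the sketch below tries each configured path in turn and fails over when one is down; the path names and read function are hypothetical:

```python
class PathDownError(Exception):
    pass

def read_block(path, lba):
    """Hypothetical I/O over one physical path; raises if the path has failed."""
    if path["healthy"]:
        return f"data@{lba} via {path['name']}"
    raise PathDownError(path["name"])

def multipath_read(paths, lba):
    # Try every path; only if all of them fail does the I/O fail (SPOF avoided).
    for path in paths:
        try:
            return read_block(path, lba)
        except PathDownError:
            continue
    raise IOError("all paths to the storage device are down")

paths = [{"name": "fc-hba0", "healthy": False},   # first path has failed
         {"name": "fc-hba1", "healthy": True}]    # I/O fails over to the second path
print(multipath_read(paths, lba=128))
```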
