Introduction
- MicroCeph offers a straightforward way to deploy and manage a Ceph cluster. Ceph itself is a highly scalable, open-source distributed storage system that provides object, block, and file-level storage.
- While Ceph can be complex to deploy and manage, MicroCeph alleviates these challenges by providing a lightweight, more user-friendly experience.
- MicroCeph simplifies key distribution, service placement, and disk administration, making deployment and operations quick and effortless. This is beneficial for clusters in private clouds, edge clouds, home labs, and even single workstations.
- Focused on delivering a modern deployment and management experience, MicroCeph caters to Ceph administrators and storage software developers.
- The microcephd daemon leverages Dqlite to maintain a distributed SQLite store that tracks cluster nodes, disks used as OSDs (Object Storage Daemons), and configurations, including the placement of services like MONs (Monitors), MGRs (Managers), RGWs (RADOS Gateways), and MDSs (Metadata Servers).
- MicroCeph supports all native Ceph protocols, including RBD (RADOS Block Device), CephFS (Ceph File System), and RGW, ensuring comprehensive compatibility and functionality. Additionally, it offers advanced features such as at-rest encryption for the disks used as OSDs, enhancing security.
- Designed to minimize setup and maintenance overhead, MicroCeph is delivered as a Snap package by Canonical, the company behind Ubuntu.
Use Cases for MicroCeph
- Development and Testing: Ideal for developers who need a reliable storage solution for testing purposes without the overhead of a full Ceph setup.
- Small Enterprises: Small businesses that require robust storage without significant investment in infrastructure.
- Edge Computing: Suitable for edge deployments where resources are limited, but reliable storage is essential.
Installation Steps
- Prepare Your Nodes: Ensure all nodes have the necessary dependencies installed and are networked together.
- Install MicroCeph: Install the latest snap from the Snap Store, as shown in the tasks below.
- Configure Your Cluster: Use simplified configuration tools provided by MicroCeph to set up your cluster.
- Deploy and Verify: Deploy the configuration and verify that the cluster is functioning correctly.
Installation on MicroCeph Server
Task 1: Install MicroCeph
Install the stable release of MicroCeph:
sudo snap install microceph
Next, prevent the software from being auto-updated:
sudo snap refresh --hold microceph
If you do not set a channel at install time, the snap defaults to the most recent stable release.
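If you want to pin a specific Ceph release instead, first list the channels the snap publishes and install from one of them (reef/stable below is only an example; use a channel shown in the snap info output):
snap info microceph
sudo snap install microceph --channel=reef/stable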
Task 2: Check the details
After installing MicroCeph, you can use its built-in commands to check the status and version:
sudo microceph status
sudo microceph version
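You can also confirm which daemons the snap is running with the standard snapd services command:
snap services microceph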
Task 3: Initialize the cluster
sudo microceph cluster --help
This displays help information for managing MicroCeph clusters.
Begin by initializing the cluster with the cluster bootstrap command.
sudo microceph cluster bootstrap
View the cluster information
sudo microceph cluster list
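This walkthrough uses a single node, but clusters are typically grown by enrolling more machines. As a sketch (node2 is a placeholder hostname), generate a join token on the first node:
sudo microceph cluster add node2
Then, on node2, join the cluster using the token printed by the previous command:
sudo microceph cluster join <token>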
Then, look at the status of the cluster with the status command:
sudo microceph status
To verify, list information about block devices (like hard drives) in the system.
lsblk
Add physical disks (/dev/nvmeXn1 in this example) to the Ceph storage cluster as OSDs.
sudo microceph disk add /dev/nvme1n1
sudo microceph disk add /dev/nvme2n1
sudo microceph disk add /dev/nvme3n1
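If the disks carry leftover partitions or filesystems, the adds can also be expressed as a loop with the --wipe flag, which erases each device before enrolling it (only use this on disks you intend to dedicate to Ceph):
for disk in /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do sudo microceph disk add "$disk" --wipe; done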
List the disks
sudo microceph disk list
View the overall status of the Ceph cluster
sudo ceph status
Check the status of the cluster again now that the disks have been added:
sudo microceph status
View detailed information about the health of the Ceph cluster.
sudo ceph health detail
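To see how the new OSDs are arranged across hosts, you can also inspect the CRUSH hierarchy:
sudo ceph osd tree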
Task 4: Working with OSD pool
View the current usage of the Ceph cluster’s storage
sudo ceph df
List all the pools in the Ceph cluster.
sudo ceph osd pool ls
We will create a new pool named rbd_pool with 16 placement groups (pg_num) and 16 placement groups for placement (pgp_num). The two 16s in the command below are PG counts, not the replication factor, which is a separate pool setting that defaults to 3. Placement groups (PGs) are a fundamental concept in Ceph for data distribution and replication.
sudo ceph osd pool create rbd_pool 16 16
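Because the replication factor is a separate pool property, you can confirm or adjust it explicitly (on a single-node lab cluster you may need a smaller size for the pool to become healthy):
sudo ceph osd pool get rbd_pool size
sudo ceph osd pool set rbd_pool size 3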
We can view statistics for a specific pool in the Ceph cluster.
sudo ceph osd pool stats rbd_pool
To view detailed information about all pools:
sudo ceph osd pool ls detail
Let’s enable the rbd application for the rbd_pool pool. In Ceph, an “application” tags a pool with the type of data it will store; here, rbd stands for RADOS Block Device, which provides block storage to clients.
sudo ceph osd pool application enable rbd_pool rbd
The following command initializes the rbd_pool pool for use with RBD (RADOS Block Device), setting up the necessary configuration and structures within the pool.
sudo rbd pool init rbd_pool
Create an RBD image named rbd_volume with a size of 4GB in the rbd_pool pool.
sudo rbd create --size 4G rbd_pool/rbd_volume
sudo rbd ls rbd_pool
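To inspect the new image’s size, object layout, and enabled features:
sudo rbd info rbd_pool/rbd_volume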
Installation on the MicroCeph Client
Task 1: Mapping RBD volume
Now, we will load the RBD (RADOS Block Device) kernel module, which is necessary for interacting with RBD devices.
sudo modprobe rbd
List the loaded kernel modules and filter the output to display only the ones related to RBD, confirming that the RBD module has been successfully loaded.
lsmod | grep rbd
The command below updates the package lists for apt and installs the ceph-common package, which contains utilities and libraries commonly used to interact with Ceph clusters.
sudo apt-get update && sudo apt-get install ceph-common
Check the status of the rbdmap.service, which is responsible for mapping RBD images to block devices on the system.
sudo systemctl status rbdmap.service
List the contents of the /etc/ceph/ directory, where Ceph configuration files are typically stored.
ls /etc/ceph/
Copy the admin keyring from the MicroCeph server; it contains the authentication credentials for the Ceph admin user, which are necessary for administrative tasks. Replace <server-ip> with your server’s address, and adjust the snap revision directory (975 here) to match your install.
sudo scp ubuntu@<server-ip>:/var/snap/microceph/975/conf/ceph.client.admin.keyring /etc/ceph/
Set the permissions of the ceph.client.admin.keyring file to 644, giving the owner read and write access and everyone else read-only access.
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
Copy the Ceph configuration file the same way; it contains the settings for the Ceph cluster.
sudo scp ubuntu@<server-ip>:/var/snap/microceph/975/conf/ceph.conf /etc/ceph/
Set the permissions of the ceph.conf file to 644 as well, giving the owner read and write access and everyone else read-only access.
sudo chmod 644 /etc/ceph/ceph.conf
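A note on the paths above: snaps also expose a current symlink that always points at the active revision, so the copies can avoid hard-coding the revision number (again, <server-ip> is a placeholder):
sudo scp ubuntu@<server-ip>:/var/snap/microceph/current/conf/ceph.conf /etc/ceph/
sudo scp ubuntu@<server-ip>:/var/snap/microceph/current/conf/ceph.client.admin.keyring /etc/ceph/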
Map the RBD (RADOS Block Device) volume named rbd_volume from the rbd_pool pool to a block device on the system. After executing this command, the RBD volume will be accessible as a block device (typically /dev/rbd0).
sudo rbd map rbd_pool/rbd_volume
View information about all block devices on the system, including disks and partitions.
lsblk
Launch the fdisk utility for managing disk partitions on the block device /dev/rbd0.
sudo fdisk /dev/rbd0
Inside fdisk, create a primary partition with the default values: press n for a new partition, accept the defaults, then press w to write the partition table and exit. Afterwards, view the block devices again; a new partition /dev/rbd0p1 should appear.
lsblk
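If you prefer a non-interactive step, here is a sketch of the same partitioning with parted, assuming the whole device should become a single partition:
sudo parted --script /dev/rbd0 mklabel gpt mkpart primary xfs 0% 100%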
Format the first partition (/dev/rbd0p1) on the RBD block device with the XFS file system.
sudo mkfs.xfs /dev/rbd0p1
Create a directory named rbd_mount in the /tmp/ directory. This directory will be used as the mount point for mounting the RBD block device.
sudo mkdir /tmp/rbd_mount
Mount the first partition (/dev/rbd0p1) of the RBD block device to the /tmp/rbd_mount/ directory. Once mounted, the RBD volume will be accessible as a file system under /tmp/rbd_mount/.
sudo mount /dev/rbd0p1 /tmp/rbd_mount/
Finally, view the block devices once more; /dev/rbd0p1 should now show /tmp/rbd_mount as its mount point.
lsblk
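You can confirm the mount and its capacity with df:
df -h /tmp/rbd_mount
Note that the mapping and mount above do not persist across reboots. As a sketch of making them persistent with the rbdmap service mentioned earlier (verify the udev-created device path under /dev/rbd/ on your system before relying on it):
echo "rbd_pool/rbd_volume id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" | sudo tee -a /etc/ceph/rbdmap
echo "/dev/rbd/rbd_pool/rbd_volume-part1 /tmp/rbd_mount xfs noauto 0 0" | sudo tee -a /etc/fstab
sudo systemctl enable rbdmap.service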

WRITTEN BY Sirin Kausar Isak Ali