Category
Storage
1. Introduction
Oracle Cloud File Storage (often referred to in Oracle Cloud Infrastructure (OCI) documentation as the File Storage service) is a managed, shared file system designed for applications and users that need POSIX-like file semantics and network file access. It provides a centralized place to store files that can be mounted concurrently by multiple compute instances and services over a private network.
In simple terms: File Storage gives you an NFS-accessible shared folder in Oracle Cloud. You create a file system, attach it to your virtual network through a mount target, export a path, and then mount it from Linux hosts (and other supported clients) as a regular directory.
Technically, File Storage is a cloud-managed network file system. You provision a file system and expose it through a mount target that lives in your VCN subnet. Clients in your VCN (or connected networks) mount the export path using NFS. File Storage handles durability and availability of the storage backend, while you control access through IAM policies, export rules, and network security controls.
The problem it solves: many workloads need shared, hierarchical storage—for example, web server fleets sharing static content, CI/CD runners sharing build artifacts, analytics tools reading common datasets, and lift-and-shift enterprise apps expecting shared NFS. File Storage provides this without you operating NFS servers, RAID, patching, or capacity planning at the instance level.
Naming status: As of the latest generally available OCI terminology, “File Storage” / “File Storage service” is current and active. If you see older references to “File Storage Service (FSS)”, that is typically the same service. Always verify the latest feature set and limits in the official documentation for your region.
2. What is File Storage?
Official purpose (scope): Oracle Cloud File Storage is a managed service for creating and accessing shared file systems over a network from resources in OCI. It is designed for workloads that require file-level access (directories, permissions, file locking semantics as supported by NFS), rather than object-level access (Object Storage) or block-level access (Block Volumes).
Core capabilities
- Create a file system in OCI.
- Create a mount target in a VCN subnet to provide a private endpoint.
- Create an export (export path) associated with the mount target.
- Mount the exported file system from clients over NFS.
- Manage access using a combination of:
- OCI IAM policies (who can create/modify file storage resources)
- Network controls (VCN security lists / NSGs)
- Export rules / export options (which client source(s) can mount and with what permissions)
Major components
- File system: The managed storage resource that holds directories and files.
- Mount target: A network endpoint (private IP in your subnet) that clients use to reach the file system.
- Export set: A collection associated with a mount target that contains exports (paths).
- Export: The exported path (for example /shared) mapped to a file system, with export options/rules.
- Snapshots / clones / replication (if available in your tenancy/region): File-system data management features may exist depending on current OCI capabilities—verify in official docs for what is supported in your region and tenancy.
Service type
- Managed Storage (file), accessed over the network (NFS-based).
- You manage access and usage; OCI manages the storage infrastructure.
Scope: regional vs. availability domain
OCI services vary between regional and availability-domain (AD) scoped resources. File Storage resources are commonly modeled as availability-domain–scoped (for example, mount targets are created in a subnet in an AD). Exact scoping and HA behavior can differ by region and current implementation details—verify in official docs:
- Which resources are AD-specific
- How availability is handled within a region
- Whether and how you should deploy mount targets across multiple ADs for resilience
How it fits into the Oracle Cloud ecosystem
File Storage is part of OCI Storage and is frequently used with:
- Compute (VMs/BMs) that mount the file system
- VCN networking (subnets, routing, security lists/NSGs)
- Bastion for secure administrative access without exposing SSH
- IAM for least-privilege control over who can create/update/delete storage resources
- Monitoring and Audit for operations and governance
- Optional connectivity services: FastConnect, Site-to-Site VPN, and Remote Peering for hybrid and multi-region access patterns
3. Why use File Storage?
Business reasons
- Faster delivery: Teams get shared storage without building and operating NFS clusters.
- Lower operational overhead: OCI manages storage durability and service-level operations.
- Supports legacy and enterprise apps: Many enterprise applications expect a shared filesystem.
Technical reasons
- Shared POSIX-style storage: A common directory tree across multiple clients.
- Multi-host access: Multiple compute instances can mount the same export concurrently.
- Fits lift-and-shift: Helps migrate on-prem apps that depend on NFS shares.
Operational reasons
- Simplified administration: Provision via Console, CLI, SDK, or Terraform.
- Centralized storage management: One place for shared app data, configs, and artifacts.
- Integrates with OCI governance: compartments, tags, audit trails, and policies.
Security/compliance reasons
- Private network access: Mount targets are typically private IPs within your VCN.
- Layered access controls:
- Network controls (NSGs/security lists)
- Export rules restricting client sources
- IAM policies controlling administrative actions
- Auditability: OCI Audit can record API operations on file storage resources.
Scalability/performance reasons
- Elastic storage: You typically pay for what you store rather than pre-provisioning a volume size (verify exact model in pricing docs).
- Designed for concurrent access: Suitable for shared-content and shared-workspace patterns.
When teams should choose it
Choose File Storage when you need:
- NFS-like shared storage
- Concurrent access by multiple instances
- Directory/file semantics and shared paths
- Lift-and-shift of NFS-based applications
When teams should not choose it
Avoid File Storage when you need:
- Object APIs (use Object Storage for S3-like semantics, lifecycle tiers, and massive scale for unstructured data)
- Ultra-low-latency local IO (use local NVMe where appropriate)
- Single-host block storage with database-grade tuning (use Block Volumes)
- Built-in cross-region active-active global namespace (verify if/when replication exists; do not assume it replaces application-level DR design)
4. Where is File Storage used?
Industries
- Media and entertainment (shared assets, render outputs)
- Healthcare and life sciences (shared datasets with access controls)
- Financial services (enterprise apps with shared config and batch I/O)
- SaaS providers (shared configuration, artifacts, multi-VM web tiers)
- Education/research (shared lab data, home directories)
Team types
- Platform engineering (shared runtime assets)
- DevOps/SRE (shared build artifacts, deployment assets)
- Data engineering (shared staging areas)
- Security/Compliance (controlled shared repositories)
- App teams modernizing legacy systems
Workloads
- Web farms sharing static content
- CI/CD runners storing build/test artifacts
- Shared home directories for Linux users
- Content management systems
- Batch processing pipelines with shared staging folders
- Lift-and-shift enterprise apps expecting NFS
Architectures
- Multi-tier apps where application servers share a common filesystem
- Compute clusters (VM/BM) mounting shared exports
- Hybrid setups where on-prem clients access shared cloud storage via VPN/FastConnect (with careful latency and security planning)
Production vs dev/test usage
- Dev/test: quick shared workspace for builds, artifacts, integration testing.
- Production: shared app content, shared configs, data staging, and controlled collaboration areas—designed with resilient networking, strict export rules, and monitoring.
5. Top Use Cases and Scenarios
Below are realistic scenarios where Oracle Cloud File Storage is a strong fit.
1) Web farm shared static content
- Problem: Multiple web servers must serve identical static assets.
- Why File Storage fits: One shared directory mounted by all web servers.
- Example: A pool of OCI Compute instances behind a load balancer mounts /www-assets and serves the same images/CSS/JS.
2) CI/CD artifact repository (short-lived)
- Problem: Build agents need a shared location for intermediate artifacts.
- Why it fits: Simple POSIX paths; easy cleanup; concurrency.
- Example: Jenkins agents mount /ci-artifacts to store test reports and build outputs before publishing to Object Storage.
3) Lift-and-shift NFS-dependent enterprise app
- Problem: An on-prem app expects an NFS share for configs/uploads.
- Why it fits: NFS-style mount paths with familiar semantics.
- Example: Migrate the app VMs to OCI and mount /app-share instead of running an NFS server VM.
4) Shared home directories for Linux users
- Problem: Engineers need consistent home directories across multiple hosts.
- Why it fits: Centralized filesystem with permissions.
- Example: Bastion/admin hosts mount /home from File Storage so user profiles are consistent.
5) Data science “working set” staging area
- Problem: Analysts need a shared folder for notebooks, small datasets, and outputs.
- Why it fits: Easy collaboration and shared access patterns.
- Example: A small team mounts /ds-shared on compute instances running training jobs.
6) Media workflow scratch space (shared)
- Problem: Render nodes need shared access to input assets and output frames.
- Why it fits: Shared files with directory structure, accessed by many clients.
- Example: A render farm mounts /projectA and reads textures while writing outputs to per-shot directories.
7) Container workloads needing shared persistent storage
- Problem: Some containerized apps need ReadWriteMany-like storage semantics.
- Why it fits: NFS mounts can provide shared access for multiple nodes (integration specifics depend on your orchestration stack; verify your CSI/driver approach).
- Example: A Kubernetes deployment mounts a shared export for shared uploads.
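As a sketch of the Kubernetes pattern above, a generic NFS-backed PersistentVolume can expose the export with ReadWriteMany semantics. The server IP and path below are placeholders, and production clusters should verify whether an OCI-provided CSI driver or provisioner is the recommended integration instead of hand-written PVs:

```yaml
# Hypothetical NFS PV/PVC pair pointing at a File Storage export.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fss-shared-pv
spec:
  capacity:
    storage: 50Gi          # advisory for NFS; real capacity is the file system's
  accessModes:
    - ReadWriteMany        # the shared-access semantics this scenario needs
  nfs:
    server: 10.0.2.10      # mount target private IP (example value)
    path: /shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fss-shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  volumeName: fss-shared-pv
  storageClassName: ""
```

A Deployment can then mount fss-shared-pvc as a volume on every replica.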
8) Shared configuration and certificates distribution (carefully)
- Problem: A fleet needs consistent config bundles.
- Why it fits: Centralized files accessible to multiple nodes.
- Example: A controlled /config export for non-secret configs (secrets should use a secrets manager; avoid storing secrets in plain text).
9) Batch processing pipeline staging area
- Problem: ETL jobs exchange files between stages.
- Why it fits: Shared directory structure and atomic rename/move patterns.
- Example: Stage 1 writes to /incoming, Stage 2 processes and writes to /processed, and Stage 3 archives to Object Storage.
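The atomic rename/move pattern above can be sketched as follows. The staging directory defaults to a local path so the sketch runs anywhere; in the scenario it would be the mounted export. Because mv within a single filesystem is a rename, downstream stages never observe half-written files:

```shell
#!/bin/sh
# Publish a file atomically: write to a hidden temp name on the SAME
# filesystem, then mv it into the watched directory.
STAGE=${STAGE:-/tmp/fss-demo}   # in the scenario, this would be the NFS mount
mkdir -p "$STAGE/incoming"

stage_file() {
  src=$1; name=$2
  tmp="$STAGE/incoming/.${name}.tmp"
  cp "$src" "$tmp"                       # slow, partial write happens here
  mv "$tmp" "$STAGE/incoming/$name"      # atomic publish (rename)
}

printf 'payload\n' > /tmp/payload.txt
stage_file /tmp/payload.txt batch-001.csv
ls "$STAGE/incoming"
```

Consumers that only pick up non-hidden names in incoming/ therefore never read a partially copied file.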
10) Application uploads shared across app servers
- Problem: User-uploaded images must be visible to all app instances.
- Why it fits: Shared filesystem path simplifies application code.
- Example: /uploads is mounted on each app server; the application writes an upload once and all nodes can serve it.
11) Legacy tools requiring filesystem paths
- Problem: Some tools can’t use object APIs.
- Why it fits: Presents as a normal filesystem path.
- Example: A reporting tool reads /reports/input/*.csv nightly.
12) Hybrid shared storage extension (with network planning)
- Problem: On-prem and cloud hosts must share data during migration.
- Why it fits: With VPN/FastConnect and strict export rules, you can provide controlled shared access (latency-sensitive).
- Example: On-prem batch server mounts OCI File Storage during a phased migration.
6. Core Features
Note: OCI services evolve. Confirm the current feature availability for your region/tenancy in official docs.
Managed NFS-accessible shared file systems
- What it does: Provides a shared filesystem accessible over the network.
- Why it matters: Many workloads need shared POSIX-like file access.
- Practical benefit: No need to operate NFS servers, disks, or clustering.
- Caveat: NFS protocol version(s) supported can vary; verify supported NFS versions and client OS support in OCI docs.
Mount targets in your VCN (private endpoints)
- What it does: Exposes the file system via a mount target with private IP(s) in your subnet.
- Why it matters: Keeps file access within your private network boundaries.
- Practical benefit: Works with standard VCN routing, security lists, and NSGs.
- Caveat: Mount target placement affects connectivity; ensure route tables and security rules allow NFS traffic.
Export paths and export options (access rules)
- What it does: Lets you define exported paths (for example /shared) and specify which clients can mount and with what permissions.
- Why it matters: Shared storage must be carefully access-controlled.
- Practical benefit: Restrict mounts to specific CIDRs/subnets and enforce read-only where needed.
- Caveat: Misconfigured export options are a common cause of “permission denied” or unexpected access.
Integration with OCI IAM (administrative control plane)
- What it does: Uses OCI IAM policies to control who can create/update/delete file systems, mount targets, and exports.
- Why it matters: Separation of duties and least privilege.
- Practical benefit: Storage administrators can be compartment-scoped; app teams can be constrained.
- Caveat: IAM controls API actions, not runtime NFS file permissions—use POSIX permissions and export rules as well.
Encryption at rest (service-managed)
- What it does: Encrypts stored data at rest using OCI’s standard encryption mechanisms.
- Why it matters: Reduces risk if underlying media is compromised.
- Practical benefit: You typically get encryption without changing application code.
- Caveat: If customer-managed keys (CMEK) are supported for File Storage in your region, that will be documented—verify.
Snapshots / clones (data management)
- What it does: Provides point-in-time copies (snapshots) and/or rapid duplication (clones), depending on service capabilities.
- Why it matters: Safer upgrades, quick rollback, test environments from production-like data.
- Practical benefit: Restore quickly from logical mistakes.
- Caveat: Snapshots and clones may contribute to billed storage; confirm billing and retention behavior.
Metrics and monitoring integration
- What it does: Exposes performance and utilization metrics to OCI Monitoring.
- Why it matters: Shared storage issues can impact many workloads simultaneously.
- Practical benefit: Alert on throughput/latency/utilization symptoms (exact metrics vary; verify metric names).
- Caveat: Monitoring helps detect symptoms but you still need workload-level tuning and access controls.
High availability design (service-side) and multi-endpoint patterns (customer-side)
- What it does: The managed service is built for durability and availability; you can design client connectivity with multiple mount targets and redundancy patterns.
- Why it matters: File shares often become critical dependencies.
- Practical benefit: Reduced operational burden compared to self-managed NFS HA.
- Caveat: The recommended HA approach (single vs multiple mount targets, multi-AD, etc.) is implementation-specific—verify OCI’s current best practices.
7. Architecture and How It Works
High-level architecture
- You create a file system in a compartment.
- You create a mount target in a VCN subnet (private IP).
- You create an export mapping an export path (like /shared) to your file system, with export options.
- Client instances in the VCN mount the share via NFS using the mount target IP and export path.
- Access is governed by:
  - IAM for control plane operations
  - VCN networking security for transport-level reachability
  - Export rules and OS-level file permissions for runtime access
Data flow vs control flow
- Control plane (API): Console/CLI/SDK calls create and manage file systems, mount targets, and exports. These actions are governed by OCI IAM and recorded in OCI Audit.
- Data plane (NFS traffic): NFS read/write operations flow from client instances to the mount target private IP over your VCN (and connected networks). Network security rules and export options determine allowed sources.
Integrations with related services
- OCI Compute: primary client for mounting NFS shares.
- VCN: required for mount target placement and connectivity.
- Bastion: secure administrative access to instances mounting the filesystem.
- FastConnect / IPSec VPN: for hybrid access patterns (latency-sensitive).
- Monitoring: for metrics and alarms.
- Audit: governance for API activity.
- Terraform / Resource Manager: infrastructure as code for repeatability.
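As a sketch of the infrastructure-as-code approach, the three core resources map to the OCI Terraform provider roughly as follows. Variable names are placeholders; verify the current argument list in the provider documentation:

```hcl
# Hypothetical minimal File Storage stack: file system, mount target, export.
resource "oci_file_storage_file_system" "lab" {
  availability_domain = var.ad
  compartment_id      = var.compartment_ocid
  display_name        = "fss-lab-fs1"
}

resource "oci_file_storage_mount_target" "lab" {
  availability_domain = var.ad
  compartment_id      = var.compartment_ocid
  subnet_id           = var.private_subnet_ocid
  display_name        = "fss-lab-mt1"
}

resource "oci_file_storage_export" "shared" {
  export_set_id  = oci_file_storage_mount_target.lab.export_set_id
  file_system_id = oci_file_storage_file_system.lab.id
  path           = "/shared"
}
```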
Dependency services
- VCN, subnet(s), route tables, and security lists/NSGs
- IAM policies for the compartment
- Compute instances (or other clients) that mount the filesystem
Security/authentication model
- API authentication: OCI IAM (users, groups, dynamic groups, instance principals).
- NFS access control: Export rules/options + network security + POSIX permissions on the mounted filesystem.
- Network exposure: Mount targets are typically private; avoid exposing NFS to the public internet.
Networking model
- Mount target resides in a subnet (private IP).
- Clients must have network routes to reach that subnet.
- Security rules must permit the required NFS traffic between client and mount target.
- The exact ports depend on NFS protocol/version and OCI implementation—verify in docs. Many deployments primarily require TCP 2049, but do not assume without confirmation.
Monitoring/logging/governance
- Use Monitoring for service metrics and Alarms for thresholds.
- Use Audit to track create/update/delete operations.
- Use Tags to map cost and ownership.
- Use compartment structure to isolate environments (dev/test/prod).
Simple architecture diagram
flowchart LR
U[Admin: Console/CLI] -->|Create| FS[(File System)]
U --> MT[Mount Target<br/>Private IP in Subnet]
U --> EX[Export<br/>Path /shared]
C1[Compute Instance 1] -->|NFS mount| MT
C2[Compute Instance 2] -->|NFS mount| MT
MT --> EX --> FS
Production-style architecture diagram
flowchart TB
subgraph OCI[Oracle Cloud - Region]
subgraph Net[VCN]
subgraph AD1[Availability Domain 1]
APP1[App VM Pool A] --> MT1[Mount Target A<br/>Subnet A]
end
subgraph AD2[Availability Domain 2]
APP2[App VM Pool B] --> MT2[Mount Target B<br/>Subnet B]
end
MT1 --> FS[(File Storage File System)]
MT2 --> FS
end
MON[OCI Monitoring/Alarms]
AUD[OCI Audit]
end
MON --- FS
AUD --- FS
The multi-mount-target pattern is a common production consideration for resilience and locality, but the exact best practice for File Storage across availability domains should be validated in OCI documentation for your region.
8. Prerequisites
Before you start, ensure you have the following.
Tenancy/account requirements
- An active Oracle Cloud tenancy with permissions to use OCI Storage and Networking.
- A compartment to hold resources (recommended: separate compartments for dev/test/prod).
Permissions / IAM policies
You need IAM permission to manage:
- File Storage resources (file systems, mount targets, exports)
- VCN resources (subnets, security lists/NSGs, route tables)
- Compute instances (to mount and validate)
Example policy patterns (conceptual—adapt to your compartment structure and security model):
- Allow a group to manage file storage resources in a compartment.
- Verify exact policy verbs/resource types in official docs (OCI policy syntax is strict, and service families have specific names).
In OCI, the family is commonly referred to as file-family for File Storage-related permissions, but verify the correct policy resource types here:
– https://docs.oracle.com/en-us/iaas/Content/Identity/Reference/policyreference.htm
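A conceptual policy statement for the pattern above might look like the following, assuming a hypothetical group named FileStorageAdmins and the lab compartment; confirm the resource-type names against the policy reference before use:

```
Allow group FileStorageAdmins to manage file-family in compartment lab-storage-file-storage
Allow group AppTeam to use mount-targets in compartment lab-storage-file-storage
```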
Billing requirements
- File Storage is a paid service with usage-based billing.
- You should have a payment method set up (or credits) and understand the pricing dimensions (see Pricing section).
Tools
Choose one workflow:
- OCI Console (browser-based; simplest for beginners)
- OCI CLI (for scripting)
- Terraform (recommended for production/IaC)

For this tutorial, Console + SSH to a Linux compute instance is enough.
Region availability
- File Storage is available in many OCI regions, but not necessarily all features in all regions.
- Confirm region support and limits in official docs and your tenancy’s Service Limits page.
Quotas/limits
- Limits exist for number of file systems, mount targets, exports, and possibly throughput or other constraints.
- Check:
- OCI Console → Governance/Administration → Limits, Quotas and Usage (exact navigation may vary)
- Official “Service Limits” docs for File Storage (verify in official docs)
Prerequisite services
- VCN with at least one subnet for the mount target
- A Linux compute instance in a subnet that can reach the mount target
- Appropriate security rules allowing NFS traffic
9. Pricing / Cost
Oracle Cloud File Storage pricing is usage-based. Exact prices vary by region/currency and can change; do not hardcode numbers in internal plans without checking the official pricing pages.
Official pricing references
- OCI pricing landing page: https://www.oracle.com/cloud/price-list/
- OCI cost tools (pricing calculator): https://www.oracle.com/cloud/costestimator.html
- File Storage-specific pricing page (verify current URL and region selector):
https://www.oracle.com/cloud/storage/file-storage/pricing/ (Verify in official Oracle pricing pages)
Pricing dimensions (typical model)
Common billing drivers for managed file storage services include:
- Storage capacity (GB-month): the primary driver—how much data is stored on the file system.
- Snapshots/clones storage: if snapshots/clones exist, retained snapshot data may count toward billed usage (verify exact accounting).
- Data transfer (network egress):
  - Intra-VCN traffic is often not billed as internet egress, but cross-region, internet egress, and some interconnect patterns may incur charges.
  - Verify OCI data transfer pricing and whether traffic to/from mount targets affects billable egress for your architecture.
OCI’s exact billing metrics for File Storage (including any performance tiers or operation-based charges) must be confirmed in current pricing docs.
Free tier
OCI has an Always Free tier for some services. File Storage may or may not be included in Always Free in your region/tenancy. Verify Always Free eligibility here: https://www.oracle.com/cloud/free/
Cost drivers (what usually increases spend)
- Storing large datasets in File Storage instead of Object Storage.
- Keeping many snapshots/clones long-term (if billed).
- Cross-region replication/DR copies (if used; verify feature availability and cost).
- Data egress to the internet or cross-region transfers.
- Self-managed NFS server instances: running dedicated compute solely to share files is a cost that File Storage can eliminate.
Hidden/indirect costs
- Compute: instances used to process or serve files; autoscaling can multiply read/write load.
- Backup/DR: additional copies in other regions or services.
- Connectivity: FastConnect/VPN costs if hybrid access is required.
- Operations: monitoring, logging retention, and security tooling.
Network/data transfer implications
- Prefer private access over the VCN and avoid public exposure.
- Keep client instances and mount targets in the same region; cross-region access can introduce latency and possible egress charges.
How to optimize cost
- Use File Storage for active/shared file workloads; move cold archives to Object Storage with lifecycle policies.
- Remove obsolete data and implement retention policies.
- Review snapshot/clone retention (if used).
- Avoid using File Storage as a “data lake” when object storage is more cost-efficient.
- Use tagging for cost allocation: env, app, owner, cost-center.
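A retention sweep like the one suggested above can be sketched with find. The path and the 30-day window are example values, and reviewing the -print output before running the -delete form is the safer workflow:

```shell
#!/bin/sh
# Retention sketch: report, then remove, files older than DAYS under a
# shared directory. TARGET would be the NFS mount in real use; a local
# demo directory is used here so the sketch runs anywhere.
TARGET=${TARGET:-/tmp/fss-retention-demo}
DAYS=${DAYS:-30}
mkdir -p "$TARGET"

# Demo data: one fresh file, one backdated past the retention window.
touch "$TARGET/fresh.log"
touch -d '45 days ago' "$TARGET/stale.log"

# Dry run: list what falls outside the retention window.
find "$TARGET" -type f -mtime +"$DAYS" -print

# After reviewing the dry run, the -delete form removes those files.
find "$TARGET" -type f -mtime +"$DAYS" -delete
```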
Example low-cost starter estimate (conceptual)
A small lab might include:
- 1 small file system with a few GB to tens of GB of data
- 1 mount target
- 1 compute VM (the VM will likely dominate the cost if the file system remains small)

Because prices vary, compute a range using the cost estimator:
- Estimate storage as (average stored GB) × (GB-month rate).
- Add compute for the VM hours.
- Assume minimal egress if everything stays inside the VCN.
Example production cost considerations
For production, plan for:
- Growth of stored data (GB-month over time)
- Snapshot/clone retention (if applicable)
- DR strategy (additional storage copies)
- Hybrid network connectivity costs
- Monitoring/observability retention
10. Step-by-Step Hands-On Tutorial
This lab creates a real Oracle Cloud File Storage setup and mounts it from a Linux compute instance over a private network.
Objective
Provision Oracle Cloud File Storage and mount it on an OCI Compute Linux VM using NFS, then validate read/write access and clean up resources.
Lab Overview
You will:
1. Create (or reuse) a VCN and subnets.
2. Launch a Linux compute instance and connect via SSH.
3. Create a File Storage file system.
4. Create a mount target in a subnet.
5. Create an export path.
6. Mount the export on the VM and validate I/O.
7. Clean up all resources.
Expected time: 45–90 minutes
Cost: Low, but not free (compute + storage). Keep the file system small and delete resources after the lab.
Step 1: Create or choose a compartment
- In the OCI Console, create a compartment such as lab-storage-file-storage.
- Record the compartment OCID (optional) for CLI/Terraform later.
Expected outcome: A dedicated compartment to isolate and clean up lab resources easily.
Step 2: Create networking (VCN + subnets)
You need:
- A subnet for the compute instance
- A subnet for the mount target (can be the same subnet in small labs, but separating them is common in production)
Console (typical approach):
1. Go to Networking → Virtual Cloud Networks
2. Click Create VCN
3. Choose VCN with Internet Connectivity for a beginner lab (creates VCN, subnets, IGW, route tables).
4. Name it vcn-fss-lab.
Important security note: The mount target should typically be in a private subnet in production. For a lab, you can still use a private subnet and SSH through a bastion, but that adds steps. This tutorial keeps it simple while still recommending private-by-default for the mount target.
Recommended lab layout:
- Public subnet: compute instance (SSH allowed from your IP)
- Private subnet: mount target
Expected outcome: A VCN with at least two subnets and routing in place.
Step 3: Configure security rules (NFS + SSH)
You must allow:
- SSH to the compute VM (from your public IP)
- NFS traffic from the compute subnet to the mount target subnet
Option A (recommended): use Network Security Groups (NSGs)
– Create NSG for compute and NSG for mount target, then allow rules between them.
Option B: use Security Lists (simpler for labs)
Update the mount target subnet’s security list to allow NFS from the compute subnet CIDR.
Because exact NFS ports can depend on the protocol version and implementation, use OCI docs to confirm the required ports. Commonly this includes TCP 2049 (NFS).
If you aren’t sure, check the official File Storage “Mounting File Systems” documentation for required ports:
– File Storage docs landing page (navigate to mounting):
https://docs.oracle.com/en-us/iaas/Content/File/home.htm
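For Option B, the ingress rule on the mount target subnet's security list might look like the JSON below when applied via the CLI or Terraform. The source CIDR 10.0.1.0/24 is a hypothetical compute-subnet range, only TCP 2049 is shown, and NFSv3 deployments may require additional ports per the OCI docs:

```json
[
  {
    "protocol": "6",
    "source": "10.0.1.0/24",
    "tcpOptions": { "destinationPortRange": { "min": 2049, "max": 2049 } }
  }
]
```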
Expected outcome: Network path exists for compute → mount target over NFS.
Step 4: Launch a Linux compute instance
- Go to Compute → Instances → Create instance
- Name: vm-fss-client-1
- Image: Oracle Linux (or another supported Linux)
- Shape: choose a small/low-cost shape suitable for labs
- Networking:
  - Put it in the public subnet for easy SSH (lab only)
  - Ensure it has a public IP
- Add your SSH public key.
Expected outcome: A running VM reachable via SSH.
SSH into the instance:
ssh -i /path/to/private_key opc@<PUBLIC_IP>
Step 5: Create a File Storage file system
- Go to Storage → File Storage
- Click Create file system
- Name: fss-lab-fs1
- Select the same compartment as your lab resources.
- Select the appropriate availability domain (if prompted).
Expected outcome: A file system resource exists.
Step 6: Create a mount target in the private subnet
- In Storage → File Storage, go to Mount targets
- Click Create mount target
- Name: fss-lab-mt1
- VCN: vcn-fss-lab
- Subnet: choose the private subnet
- (Optional) Assign an NSG if you’re using NSGs.
Record the mount target’s private IP address.
Expected outcome: A mount target exists with a private IP reachable from your compute instance subnet.
Step 7: Create an export (export path)
- Navigate to your mount target and find its Export set.
- Create an export:
  - File system: fss-lab-fs1
  - Export path: /shared
  - Export options / rules:
    - Allow source: your compute subnet CIDR (or the specific VM IP for tighter scope)
    - Access: Read/Write for the lab
    - Identity squashing options: choose defaults unless you have a specific reason—verify meanings in docs.
Expected outcome: An export exists and is associated with the mount target.
Step 8: Mount the file system on the compute instance
On the VM, install NFS utilities (package name varies by distro).
Oracle Linux / RHEL-like:
sudo dnf -y install nfs-utils
Create a mount directory:
sudo mkdir -p /mnt/fss
Mount using the mount target private IP and export path.
sudo mount -t nfs <MOUNT_TARGET_PRIVATE_IP>:/shared /mnt/fss
If your environment requires specifying an NFS version or options, use the syntax recommended in OCI docs for File Storage mounting. For example, you might need options like vers=3 or vers=4 depending on supported versions—verify in official docs before enforcing a specific version:
# Example only - verify required options in OCI docs
sudo mount -t nfs -o vers=3 <MOUNT_TARGET_PRIVATE_IP>:/shared /mnt/fss
Expected outcome: The filesystem is mounted at /mnt/fss.
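To make the mount persist across reboots, an /etc/fstab entry along these lines is a common pattern; _netdev delays mounting until networking is up, and nofail keeps the instance booting if the mount target is temporarily unreachable (verify the options OCI recommends before production use):

```
<MOUNT_TARGET_PRIVATE_IP>:/shared  /mnt/fss  nfs  defaults,_netdev,nofail  0 0
```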
Step 9: Create files and validate read/write
Check mount:
mount | grep fss
df -h | grep /mnt/fss
Write a test file:
echo "hello from OCI File Storage" | sudo tee /mnt/fss/hello.txt
sudo ls -l /mnt/fss
sudo cat /mnt/fss/hello.txt
If you have a second instance, mount the same export and confirm the file is visible from both clients.
Expected outcome: You can read and write files, and changes persist across clients.
Validation
Use the following checks:
- Connectivity: from the compute VM, confirm the mount target is reachable (ICMP may be blocked; a TCP test is better):

  # If nc is available:
  nc -vz <MOUNT_TARGET_PRIVATE_IP> 2049

  If nc is not installed:

  sudo dnf -y install nmap-ncat
  nc -vz <MOUNT_TARGET_PRIVATE_IP> 2049

- Mount status:

  mount | grep /mnt/fss

- File operations:

  (cd /mnt/fss && sudo touch validation-$(date +%s).txt && ls -l)
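The file-operation checks can be wrapped in a small script. DIR defaults to a local directory here so the sketch runs anywhere; on the lab VM you would point it at /mnt/fss:

```shell
#!/bin/sh
# Write a file, read it back, and report. On the lab VM, run with
# DIR=/mnt/fss; it defaults to a local demo path here.
DIR=${DIR:-/tmp/fss-validate-demo}
mkdir -p "$DIR"

f="$DIR/validation-test.txt"
echo "validation $(date -u +%FT%TZ)" > "$f"                      # write test
grep -q '^validation ' "$f" || { echo "read-back failed" >&2; exit 1; }
echo "read/write OK on $DIR"
```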
Troubleshooting
Common issues and realistic fixes:

- mount.nfs: Connection timed out
  - Cause: routing or security rules block NFS traffic.
  - Fix:
    - Confirm the compute subnet's route table can reach the mount target subnet (same VCN should work by default).
    - Confirm the NSG/security list allows the required NFS ports from compute to mount target.
    - Confirm the mount target is in the correct subnet and has the correct private IP.
- mount.nfs: access denied by server or Permission denied
  - Cause: export options/rules do not allow your client IP/CIDR.
  - Fix:
    - Update export options to include the client subnet or specific client IP.
    - Confirm you are mounting the correct export path.
- You can mount but cannot write
  - Cause: the export is read-only or POSIX permissions prevent writing.
  - Fix:
    - Ensure export options allow read/write.
    - Check directory ownership and permissions:

```bash
ls -ld /mnt/fss
sudo ls -ld /mnt/fss
```

    - Create a directory and set appropriate ownership for your app user.
- Performance seems slow
  - Cause: workload pattern, instance sizing, network path, or service limits.
  - Fix:
    - Confirm instance shape and network throughput.
    - Check Monitoring metrics for the file system (exact metrics: verify).
    - Avoid tiny synchronous IO patterns; batch writes when possible.
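The "create a directory and set appropriate ownership" fix can be sketched as follows. The sketch runs against a temporary directory so it is safe to try anywhere; on a real share, replace the stand-in path with your mount point (for example /mnt/fss), add sudo, and chown the directory to your actual application user (the appdata name here is hypothetical).

```shell
# Give the application its own writable directory instead of loosening
# permissions on the export root. SHARE is a temporary stand-in for the
# real mount point (e.g. /mnt/fss).
SHARE=$(mktemp -d)
mkdir -p "$SHARE/appdata"
chmod 750 "$SHARE/appdata"     # owner rwx, group rx, no access for others
stat -c '%a %n' "$SHARE/appdata"
```

On a real deployment you would also run something like `sudo chown appuser:appgroup /mnt/fss/appdata` so write access does not depend on root.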
Cleanup
To avoid ongoing costs, delete resources in reverse order:

- On the compute VM:

```bash
sudo umount /mnt/fss
```

- In the OCI Console:
  - Delete the export
  - Delete the mount target
  - Delete the file system
  - Terminate the compute instance
  - Delete the VCN (if it was created only for this lab)
Expected outcome: No remaining billable File Storage resources.
11. Best Practices
Architecture best practices
- Design for shared dependency: treat File Storage as a tier that can affect many systems. Document dependencies and blast radius.
- Separate subnets: put mount targets in private subnets; restrict client access.
- Multiple mount targets (if recommended): consider deploying mount targets per AD or per subnet segment for resilience and locality—verify OCI guidance.
- Use Object Storage for cold data: store long-term archives in Object Storage and keep File Storage for active/shared file workloads.
IAM/security best practices
- Use compartments to separate environments.
- Apply least privilege:
- Storage admins manage file storage resources.
- App teams get only what they need.
- Prefer dynamic groups + instance principals for automation over long-lived user API keys (where appropriate).
- Use tags for ownership and lifecycle automation.
Cost best practices
- Implement retention: delete obsolete artifacts and user uploads as appropriate.
- Control snapshot/clone growth (if used).
- Track spend with tags and budgets.
- Avoid cross-region data movement unless necessary for DR.
Performance best practices
- Use appropriate client mount options recommended by OCI docs.
- Avoid chatty metadata operations when possible (workload dependent).
- Keep clients close (same region/VCN). Hybrid mounts can be latency sensitive.
- Test with representative workload patterns before production cutover.
Reliability best practices
- Plan for instance failures: mount in boot scripts or systemd, but handle mount delays gracefully.
- Consider multi-AD patterns if required.
- Use application-level resilience: retries, backoff, and proper error handling.
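The "mount in boot scripts or systemd, but handle mount delays gracefully" advice often comes down to an /etc/fstab entry. A minimal sketch, assuming a hypothetical mount target IP of 10.0.1.10 and export path /fss (verify the NFS mount options OCI currently recommends before using this):

```
# /etc/fstab
# _netdev: wait for networking before attempting the mount
# nofail:  boot continues even if the mount target is unreachable
10.0.1.10:/fss   /mnt/fss   nfs   defaults,_netdev,nofail   0 0
```

With nofail, applications must still tolerate a missing mount (for example, check `mountpoint -q /mnt/fss` before writing), since the host will boot even when the share is unavailable.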
Operations best practices
- Monitor:
- File system metrics (throughput, latency, etc.—verify available metrics)
- Client-side metrics (CPU iowait, mount errors)
- Alert on:
- sudden latency spikes
- mount failures
- capacity growth anomalies
- Runbooks:
- how to add/remove client access
- how to rotate subnets/NSGs
- how to restore data from snapshots/backups (based on your chosen method)
Governance/tagging/naming best practices
- Naming:
  - fss-<env>-<app>-fs-<id>
  - fss-<env>-<app>-mt-<id>
- Tagging:
  - env=dev|test|prod
  - owner=<team>
  - cost-center=<id>
  - data-classification=public|internal|confidential|restricted
12. Security Considerations
Identity and access model
Security is layered:
- OCI IAM controls who can:
  - Create/delete file systems
  - Create/delete mount targets and exports
  - Modify export rules
- Network security controls which clients can reach the mount target:
  - NSGs/security lists
  - Route tables and segmentation
- Export options restrict which client sources can mount and what permissions they have.
- POSIX permissions and ownership inside the filesystem control file-level access.
Encryption
- At rest: File Storage is typically encrypted at rest by default using OCI-managed encryption.
- In transit: NFS traffic is not inherently encrypted like HTTPS. To protect data in transit:
- Keep traffic on private networks
- Consider private connectivity (VPN/FastConnect) for hybrid
- Consider host-based encryption approaches if required by policy (application-level encryption, OS-level encryption, or secure tunnels). Validate what is supported and appropriate for your environment.
Network exposure
- Do not expose NFS to the public internet.
- Place mount targets in private subnets.
- Restrict inbound rules to only client subnets/NSGs that require access.
- Prefer “deny by default” patterns.
Secrets handling
- Avoid storing secrets in File Storage as plaintext.
- Use OCI secrets management (OCI Vault / Secrets) for credentials and keys.
- If you must store sensitive files, enforce strict permissions and consider encryption at the application layer.
Audit/logging
- Use OCI Audit to track administrative actions on File Storage resources.
- Use Logging/Monitoring for operational signals (exact integrations vary—verify).
- Maintain change control on export rules and subnet/NSG changes (these are security-sensitive).
Compliance considerations
- Classify data (PII/PHI/etc.) and apply:
- least privilege access
- restricted export rules
- private connectivity
- retention and deletion policies
- If you have regulatory obligations, confirm OCI compliance documentation and service eligibility for your program (HIPAA, PCI, etc.)—this is organization-specific and region-specific.
Common security mistakes
- Allowing 0.0.0.0/0 access to mount target NFS ports.
- Export rules that allow broad CIDR ranges unnecessarily.
- Using one shared export for multiple environments (dev/test/prod) without isolation.
- Not tracking changes to export rules and security lists.
- Over-permissive POSIX permissions (chmod 777) in shared areas.
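A quick client-side check for the chmod 777 mistake listed above. The sketch uses a throwaway directory so it can be run anywhere; in practice, point find at your real mount point:

```shell
# List world-writable entries under the share root.
SHARE=$(mktemp -d)            # stand-in for the real mount point
mkdir "$SHARE/ok" "$SHARE/loose"
chmod 755 "$SHARE/ok"
chmod 777 "$SHARE/loose"      # the kind of directory we want to catch
find "$SHARE" -mindepth 1 -perm -0002
```

Only the world-writable path is printed, which makes this easy to wire into a periodic cron or monitoring check.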
Secure deployment recommendations
- Put mount targets in private subnets.
- Use NSGs for tighter, instance-level control.
- Export only what you need; keep paths minimal and purpose-built.
- Use separate file systems/exports for different data classifications.
- Monitor for unexpected access patterns (client-side logs + OCI governance).
13. Limitations and Gotchas
Because limits and behaviors can change, treat these as planning checkpoints and verify specifics.
Known limitation patterns (verify exact details)
- NFS protocol constraints: Supported NFS versions and features (locking, ACL support, etc.) may differ; validate for your client OS and workload.
- Linux-first mounting: Many cloud NFS services are primarily targeted at Linux/UNIX clients; other OS support depends on client tooling and protocol compatibility.
- Mount target networking: Mount target is reachable only via networks that can route to its subnet; cross-VCN requires peering/DRG and security alignment.
- Hybrid latency: On-prem mounts over VPN/FastConnect can be sensitive to latency and jitter; test before committing.
- Permissions model complexity: Export rules + POSIX permissions + application user IDs must align. UID/GID mismatches across hosts are a classic gotcha.
- Snapshot/clone billing growth: Retained snapshots/clones can increase billed usage; implement retention governance.
- Service limits: Number of resources per compartment/AD, and any throughput or concurrency limits—check Service Limits.
Regional constraints
- Not all OCI regions may support identical features.
- Feature rollout can be phased; always confirm in the docs for your target region.
Pricing surprises
- Data egress charges for internet or cross-region transfers.
- Unbounded growth of stored data (especially shared upload folders).
- Snapshot/clone retention if billed as stored data.
Compatibility issues
- Some applications assume specific filesystem features; validate:
- file locking behavior
- rename atomicity expectations
- fsync patterns and performance sensitivity
- Container/Kubernetes integration requires a supported driver approach; verify current OCI recommendations.
Migration challenges
- Migrating from on-prem NFS requires careful planning:
- UID/GID mapping
- preserving permissions and timestamps
- handling open files and cutover windows
- Use tools like rsync for file-level migration, but test for correctness and performance.
Vendor-specific nuances
- OCI’s model uses mount targets and export sets; this is different from AWS EFS, Azure Files, and Google Filestore naming and configuration patterns.
- Don’t assume the same mount options and semantics apply across providers.
14. Comparison with Alternatives
File Storage is one option in OCI Storage. Choose based on access pattern (file vs block vs object), sharing requirements, and protocol needs.
Comparison table
| Option | Best For | Strengths | Weaknesses | When to Choose |
|---|---|---|---|---|
| Oracle Cloud File Storage | Shared POSIX-like file access, NFS-style workloads | Managed service, shared mounts, integrates with VCN/IAM | NFS constraints; needs careful network/export rules; not object-native | Multiple instances need shared filesystem paths |
| OCI Block Volumes | Single-instance block storage (databases, low-latency block IO) | Predictable block semantics, attach/detach, good for databases | Typically not shared read/write across many instances without clustering | You need block devices for a VM/BM or clustered FS |
| OCI Object Storage | Unstructured data, archival, data lakes, static content via HTTP | Massive scale, lifecycle tiers, API-first, cost-effective for cold data | Not POSIX filesystem; app changes needed | You can use object APIs and want tiering/lifecycle management |
| Local NVMe (instance storage) | Ultra-fast temporary scratch | Very low latency/high IOPS | Data not durable across instance lifecycle; not shared | High-performance scratch space with ephemeral data |
| Self-managed NFS on Compute | Full control, custom configs | Customizable, can use special NFS features | You manage HA, patching, scaling, failures | You have strict requirements not met by managed service |
| AWS EFS (other cloud) | NFS shared storage in AWS | Mature managed NFS ecosystem | Different cloud; migration complexity | You’re standardizing on AWS |
| Azure Files (other cloud) | SMB/NFS shares in Azure | Strong Windows/SMB integration (depending on tier) | Different semantics/options; different cost model | Windows-heavy or Azure-first environments |
| Google Filestore (other cloud) | Managed NFS in GCP | Good integration with GCP | Different service tiers and limits | GCP-first environments |
15. Real-World Example
Enterprise example: Shared application content for a multi-tier enterprise app
- Problem: A legacy enterprise application runs on multiple application servers and requires a shared directory for uploads, generated reports, and shared configuration. On-prem it used an NFS cluster.
- Proposed architecture:
- OCI Compute instance pool for app servers across fault domains (and possibly ADs)
- Oracle Cloud File Storage for /appdata
- Mount targets in private subnets; NSGs restrict NFS to app tier only
- OCI Load Balancer in front of app servers
- Monitoring alarms on file storage metrics and client-side iowait
- DR plan: scheduled file sync to Object Storage or File Storage replication if supported (verify), plus infrastructure as code
- Why File Storage was chosen:
- Minimizes app changes (still mounts a shared filesystem)
- Removes need to operate NFS servers and HA clusters
- Integrates with OCI networking and IAM governance
- Expected outcomes:
- Reduced ops burden (no NFS server patching)
- Consistent shared app state across servers
- Better auditability of storage administration changes
Startup/small-team example: Shared build artifacts and web assets
- Problem: A small team runs a few VMs for a web app and wants a shared directory for build artifacts, static assets, and logs during troubleshooting.
- Proposed architecture:
- 2–3 small OCI Compute instances
- One File Storage file system mounted on all instances at /srv/shared
- Export rules allow only the compute subnet
- Basic alarms for sudden growth
- Nightly rsync of critical artifacts to Object Storage
- Why File Storage was chosen:
- Simple to mount and use like a normal filesystem
- Avoids running an extra “storage VM”
- Expected outcomes:
- Faster deployments (shared assets path)
- Fewer moving parts
- Controlled costs by keeping stored data small and archiving to Object Storage
16. FAQ
1) What is Oracle Cloud File Storage?
A managed OCI Storage service that provides shared file systems accessible over the network (typically via NFS) from instances and other clients in your OCI network.
2) Is File Storage the same as Object Storage?
No. File Storage is a filesystem (directories/files, NFS mounts). Object Storage is an object store accessed via API/HTTP with buckets and objects.
3) Can multiple compute instances mount the same File Storage export?
Yes—shared access is a primary use case. Ensure export rules and POSIX permissions are designed for multi-client access.
4) Does File Storage support Windows/SMB?
File Storage is generally positioned as NFS-based. If you need SMB, evaluate other approaches (such as Windows file services on compute or other OCI options). Verify current protocol support in OCI docs.
5) Is File Storage encrypted at rest?
Typically yes, using OCI’s standard encryption-at-rest mechanisms. Verify any customer-managed key options in the current documentation.
6) Is File Storage encrypted in transit?
NFS traffic is not inherently encrypted like HTTPS. Protect it using private networking and, if required, additional encryption methods (tunnels/app-level). Verify best practices for your compliance needs.
7) Do I need a public IP to use File Storage?
No. Mount targets are typically private IPs in a subnet; clients access them through the VCN (or connected private networks).
8) What are mount targets and why do I need them?
A mount target is the network endpoint (private IP) used by clients to mount your exported file system. It ties File Storage into your VCN networking model.
9) How do export rules work?
Export rules/options define which client sources can mount an export and what access (read-only/read-write) they have. Exact options vary—verify in docs.
10) How do I restrict access to only one application subnet?
Use NSGs/security lists to allow NFS only from that subnet/NSG, and configure export rules to match only that subnet CIDR.
11) Can I access File Storage from on-premises?
Yes, via private connectivity (IPSec VPN or FastConnect) if routing and security rules permit it. Expect latency sensitivity and test thoroughly.
12) How do snapshots affect cost?
If snapshots are supported and billed as stored capacity, keeping many snapshots can increase your bill. Confirm snapshot billing behavior on the pricing page.
13) How do I back up File Storage?
Options include snapshots (if supported), file-level tools (rsync), or copying important data to Object Storage. Choose based on RPO/RTO and verify OCI’s current recommended approach.
14) What’s the difference between File Storage and Block Volumes?
Block Volumes provide block devices attached to instances (like disks). File Storage provides shared filesystem access over the network.
15) What are the most common causes of mount failures?
- Missing NFS port rules in NSGs/security lists
- Export rules not allowing the client subnet/IP
- Routing misconfiguration (peering/DRG)
- Wrong export path or mount target IP
- Missing NFS client utilities on the VM
16) Can I use File Storage with Kubernetes?
Often yes, via an NFS-based approach. Implementation depends on your Kubernetes environment and supported CSI/drivers. Verify current OCI guidance for OKE and persistent shared storage.
17) How do I estimate performance?
Review OCI’s current File Storage performance documentation, understand workload IO patterns, and test with representative load. Monitor both server-side metrics and client-side iowait.
17. Top Online Resources to Learn File Storage
| Resource Type | Name | Why It Is Useful |
|---|---|---|
| Official documentation | OCI File Storage documentation home: https://docs.oracle.com/en-us/iaas/Content/File/home.htm | Primary reference for concepts, limits, and step-by-step procedures |
| Official documentation | OCI File Storage overview (navigate within docs): https://docs.oracle.com/en-us/iaas/Content/File/Concepts/filestorageoverview.htm | Explains components (file systems, mount targets, exports) |
| Official documentation | OCI IAM policy reference: https://docs.oracle.com/en-us/iaas/Content/Identity/Reference/policyreference.htm | Required to write correct least-privilege policies |
| Official docs/tools | OCI CLI documentation: https://docs.oracle.com/en-us/iaas/tools/oci-cli/latest/ | Automate provisioning and operations |
| Official pricing | Oracle Cloud price list: https://www.oracle.com/cloud/price-list/ | Authoritative pricing references by service and region |
| Official pricing | OCI Cost Estimator: https://www.oracle.com/cloud/costestimator.html | Build a region-specific estimate without guessing |
| Official free tier | Oracle Cloud Free Tier: https://www.oracle.com/cloud/free/ | Check whether any File Storage usage qualifies (often not) |
| Official architecture | Oracle Cloud Architecture Center: https://www.oracle.com/cloud/architecture-center/ | Reference architectures and patterns (validate File Storage-specific examples) |
| Official governance | OCI Audit documentation (navigate from OCI docs): https://docs.oracle.com/en-us/iaas/Content/Audit/home.htm | Understand auditing of API actions on storage resources |
| Community (high-level) | Oracle Cloud community and blogs (verify accuracy vs docs): https://blogs.oracle.com/cloud-infrastructure/ | Practical articles and updates; confirm details in official docs |
18. Training and Certification Providers
| Institute | Suitable Audience | Likely Learning Focus | Mode | Website URL |
|---|---|---|---|---|
| DevOpsSchool.com | Beginners to working professionals | DevOps, cloud operations, automation fundamentals that apply to OCI storage deployments | Check website | https://www.devopsschool.com/ |
| ScmGalaxy.com | Students and engineers | SCM, DevOps, CI/CD practices that pair with shared storage use cases | Check website | https://www.scmgalaxy.com/ |
| CLoudOpsNow.in | Cloud engineers and ops teams | Cloud operations practices (monitoring, governance, cost awareness) | Check website | https://www.cloudopsnow.in/ |
| SreSchool.com | SREs and reliability engineers | Reliability engineering, incident response, monitoring practices relevant to shared storage dependencies | Check website | https://www.sreschool.com/ |
| AiOpsSchool.com | Ops teams and platform engineers | AIOps concepts (observability, automation) applicable to storage monitoring and anomaly detection | Check website | https://www.aiopsschool.com/ |
19. Top Trainers
| Platform/Site | Likely Specialization | Suitable Audience | Website URL |
|---|---|---|---|
| RajeshKumar.xyz | DevOps/cloud training content (verify current offerings) | Engineers seeking practical training resources | https://rajeshkumar.xyz/ |
| devopstrainer.in | DevOps training and mentoring (verify specific OCI coverage) | Beginners to intermediate DevOps learners | https://www.devopstrainer.in/ |
| devopsfreelancer.com | Freelance DevOps help/training resources (treat as a platform; verify offerings) | Teams needing hands-on assistance | https://www.devopsfreelancer.com/ |
| devopssupport.in | DevOps support/training services (verify scope) | Operations teams and engineers | https://www.devopssupport.in/ |
20. Top Consulting Companies
| Company Name | Likely Service Area | Where They May Help | Consulting Use Case Examples | Website URL |
|---|---|---|---|---|
| cotocus.com | Cloud/DevOps consulting (verify service catalog) | Architecture review, migration planning, operations setup | Designing secure mount target networking; defining IAM policies; migration runbooks | https://cotocus.com/ |
| DevOpsSchool.com | DevOps and cloud consulting/training | Implementation guidance, best practices, enablement | Setting up IaC for File Storage + VCN; operational monitoring; team enablement workshops | https://www.devopsschool.com/ |
| DEVOPSCONSULTING.IN | DevOps consulting (verify service catalog) | CI/CD, automation, cloud operations | Automating provisioning and cleanup; integrating shared storage into deployment workflows | https://www.devopsconsulting.in/ |
21. Career and Learning Roadmap
What to learn before File Storage
- OCI fundamentals:
- Compartments, IAM users/groups, policies
- Regions and availability domains
- Networking:
- VCN, subnets, route tables
- Security lists and NSGs
- Private connectivity basics (VPN/FastConnect) for hybrid patterns
- Linux basics:
- Filesystem permissions (UID/GID, chmod/chown)
- NFS client tools and mounting
What to learn after File Storage
- Infrastructure as Code:
- Terraform for OCI (network + storage + compute)
- CI pipelines for IaC validation and drift detection
- Observability:
- OCI Monitoring alarms
- Central logging patterns and client-side telemetry
- Backup/DR:
- Snapshot strategies (if supported)
- File-level backups to Object Storage
- Cross-region DR patterns
- Security:
- Zero-trust network segmentation
- Continuous compliance checks (Cloud Guard and policies—verify applicability)
Job roles that use it
- Cloud Engineer / Platform Engineer
- DevOps Engineer
- Site Reliability Engineer (SRE)
- Solutions Architect
- Systems Administrator (Linux)
- Security Engineer (for access control reviews)
Certification path (if available)
Oracle certification programs change over time. Look for current OCI certifications and training paths on Oracle University and the official Oracle certification pages. Start with OCI foundations, then progress to architect or operations tracks, and verify current certification options in official Oracle training resources.
Project ideas for practice
- Build a “shared uploads” tier for a two-instance web app.
- Automate File Storage provisioning with Terraform and enforce tags + naming policy.
- Create a migration plan: rsync on-prem NFS → OCI File Storage with a cutover window and validation checklist.
- Implement monitoring: alarms for unusual growth and client mount failures.
- Design a DR workflow: periodic sync to Object Storage and a restore runbook.
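For the Terraform practice project above, a starting sketch of the core resources might look like the following. This is a sketch, not a verified configuration: resource and attribute names are from the OCI Terraform provider and should be checked against the current provider documentation, and every var.* value is a placeholder.

```hcl
resource "oci_file_storage_file_system" "fs" {
  availability_domain = var.availability_domain
  compartment_id      = var.compartment_ocid
  display_name        = "fss-dev-app-fs-01"     # follows the naming convention in Best Practices
}

resource "oci_file_storage_mount_target" "mt" {
  availability_domain = var.availability_domain
  compartment_id      = var.compartment_ocid
  subnet_id           = var.private_subnet_ocid # keep mount targets in private subnets
  display_name        = "fss-dev-app-mt-01"
}

resource "oci_file_storage_export" "exp" {
  export_set_id  = oci_file_storage_mount_target.mt.export_set_id
  file_system_id = oci_file_storage_file_system.fs.id
  path           = "/shared"
}
```

Adding export_options blocks to restrict client CIDRs, plus defined tags for env/owner/cost-center, would bring the sketch in line with the governance practices described earlier.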
22. Glossary
- AD (Availability Domain): A physically isolated data center within an OCI region (in regions that use ADs).
- Compartment: A logical container in OCI IAM for organizing and isolating resources.
- Export: A path (like /shared) that maps to a file system and is made available via a mount target.
- Export options / rules: Settings that control which clients can mount an export and their access rights.
- File system: The managed storage resource holding directories and files.
- Mount target: The private endpoint in your subnet used to access File Storage.
- NFS: Network File System protocol used to mount shared file systems over a network.
- NSG (Network Security Group): Virtual firewall rules applied to VNICs/resources for fine-grained security.
- POSIX permissions: Unix-like permissions model (user/group/other with read/write/execute).
- Private subnet: A subnet without direct public internet exposure (instances typically have no public IP).
- Security list: Subnet-level virtual firewall rules in OCI.
- VCN (Virtual Cloud Network): Your private network in OCI.
- DR (Disaster Recovery): Processes and architecture enabling recovery from region/zone failures.
23. Summary
Oracle Cloud File Storage is OCI’s managed shared file system service in the Storage category, designed for workloads that need NFS-style mounts and POSIX-like file semantics. It fits best when multiple compute instances (or connected private networks) must read and write to the same directory tree without running and maintaining NFS servers.
Cost is primarily driven by stored capacity (and potentially snapshots/clones if used), plus any data transfer charges for cross-region or internet egress. Security hinges on layered controls: IAM for administration, VCN network rules for reachability, export rules for allowed clients, and POSIX permissions for file-level access.
Use File Storage when you need shared filesystem semantics; choose Block Volumes for block-level needs and Object Storage for API-native, massively scalable unstructured data. Next step: implement the same lab using Terraform and add monitoring/alarms plus a backup/restore runbook aligned to your organization’s RPO/RTO requirements.