AWS Getting Started - Instance Size and Storage

When creating a SoftNAS instance (or instances), there are two primary, closely intertwined considerations: instance size and storage. The instance size selected determines factors such as network speed, processing power, memory, and performance caching. However, a large instance size will not boost throughput and performance (or at least not significantly) if you pair it with a cold HDD storage option designed for archiving seldom-accessed files. When determining the performance level you need for your instance, factor in both the characteristics of the selected instance size and the type of storage being leveraged.
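
As a point of reference, the characteristics that vary with instance size (vCPUs, memory, network performance, and EBS throughput) can be read directly from the EC2 API. The following is a minimal sketch using Python and boto3, assuming AWS credentials are already configured; the instance type queried is only an example.

    import boto3

    ec2 = boto3.client("ec2")

    # Look up the published characteristics of a candidate instance type.
    resp = ec2.describe_instance_types(InstanceTypes=["r5.2xlarge"])
    info = resp["InstanceTypes"][0]

    print("vCPUs:            ", info["VCpuInfo"]["DefaultVCpus"])
    print("Memory (MiB):     ", info["MemoryInfo"]["SizeInMiB"])
    print("Network:          ", info["NetworkInfo"]["NetworkPerformance"])
    print("EBS baseline MB/s:", info["EbsInfo"]["EbsOptimizedInfo"]["BaselineThroughputInMBps"])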

Instance Size

Buurst recommends the r5.2xlarge as the minimum default AWS instance size, the r5.4xlarge for medium workloads, and the r5.12xlarge or r5.24xlarge for heavier workloads. Specific use cases may require a more tailored approach: SoftNAS's flexibility and the number of available instance sizes mean that we can match your specific workload's performance, price, and storage demands. Contact Buurst Support if you require assistance in determining the right size for your workload.
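
Once a size is chosen, launching it through the EC2 API is straightforward. The sketch below, using Python and boto3, starts the recommended minimum default size; the AMI ID, key pair, and subnet ID are placeholders you would replace with your own SoftNAS AMI and network settings.

    import boto3

    ec2 = boto3.client("ec2")

    # Launch the recommended default size for a SoftNAS instance.
    # ImageId, KeyName, and SubnetId are placeholders, not real values.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # your SoftNAS AMI
        InstanceType="r5.2xlarge",            # recommended minimum default size
        MinCount=1,
        MaxCount=1,
        EbsOptimized=True,                    # keep EBS traffic on dedicated bandwidth
        KeyName="my-keypair",                 # placeholder key pair name
        SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
    )
    print(response["Instances"][0]["InstanceId"])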


For additional information on sizing recommendations, see the following:

AWS EC2 System Requirements

General Instance Type Recommendations

For extremely heavy workloads, increase cache memory by choosing a high-memory instance, and/or use EBS-optimized instances with Provisioned IOPS volumes to gain better control over available IOPS.
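
For the Provisioned IOPS part of that recommendation, the volume type and IOPS figure are set when the EBS volume is created. A minimal sketch with Python and boto3 follows; the Availability Zone, size, and IOPS values are illustrative assumptions, not sizing guidance.

    import boto3

    ec2 = boto3.client("ec2")

    # Create a Provisioned IOPS SSD volume to attach to the SoftNAS instance.
    # The zone, size, and IOPS below are example values only.
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",   # must match the instance's zone
        Size=1000,                       # GiB
        VolumeType="io2",                # Provisioned IOPS SSD
        Iops=10000,                      # provisioned IOPS for this volume
        TagSpecifications=[{
            "ResourceType": "volume",
            "Tags": [{"Key": "Name", "Value": "softnas-data"}],
        }],
    )
    print(volume["VolumeId"])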

Buurst always recommends further analysis and testing of the selected instance until the workload's characteristics are fully understood. Customers can then refine their instance size selection to strike the right balance of performance and cost.
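
One way to carry out that analysis is to watch the instance's CloudWatch metrics over a representative period. The sketch below, using Python and boto3, pulls hourly average CPU utilization for the last 24 hours; the instance ID is a placeholder, and the same pattern applies to NetworkIn/NetworkOut and EBS metrics.

    import boto3
    from datetime import datetime, timedelta, timezone

    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=24)

    # Hourly average CPU utilization for the SoftNAS instance (placeholder ID).
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=start,
        EndTime=end,
        Period=3600,
        Statistics=["Average"],
    )

    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1), "%")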


SoftNAS product performance and throughput are governed by:

  • Available Memory: SoftNAS uses around 1 GB of RAM for the kernel and system operation. Memory beyond 1 GB is available as cache memory, which greatly improves overall system performance and response time; more memory means better performance, up to a point. If application workloads involve a high number of small, random I/O requests, cache memory provides the best performance increase by reducing random disk I/O to a minimum. If running a SQL database application, cache memory greatly improves query performance by keeping tables in memory. At a minimum, 2 GB of RAM will yield around 1 GB for cache. For best results on production workloads, start with 16 GB or more of RAM. With deduplication, add 1 GB of RAM per terabyte of deduplicated data to keep deduplication look-up tables in RAM (see the sizing sketch after this list).

  • CPU: SoftNAS needs a minimum of 4 vCPUs for normal operation. To maintain peak performance when using the Compression feature, add vCPUs (e.g., move to 8 vCPUs) if CPU usage averages 60% or greater.

  • Network: In EC2, SoftNAS uses Elastic Block Store (EBS) volumes, which are disks accessed across the network in a SAN (storage area network) configuration. This means all disk I/O travels across the shared network connecting the EC2 compute instance to the SAN, making network I/O an important factor in SoftNAS® environment performance.

  • Multiple Performance & Scale Options: EC2 offers Fixed Performance Instances (e.g., m5, c5, and r5) as well as Burstable Performance Instances for occasional heavy use above baseline. EC2 also offers many instance sizes and configurations; consider all potential networking requirements when choosing an instance type. Purchasing models include On-Demand, Reserved, and Spot Instances.
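
The memory guidance above reduces to simple arithmetic: RAM minus roughly 1 GB for the system is available as cache, and deduplication needs about 1 GB of RAM per terabyte of deduplicated data. The sketch below is a small Python helper based only on those rules; the function name and the example figures are illustrative.

    def estimate_ram_gb(dedup_data_tb=0.0, target_cache_gb=15.0):
        """Rough RAM estimate from the guidance above (illustrative only).

        About 1 GB is reserved for the kernel and system operation, the rest
        serves as cache; deduplication needs ~1 GB of RAM per TB of
        deduplicated data to keep look-up tables in memory.
        """
        system_gb = 1.0
        dedup_gb = 1.0 * dedup_data_tb
        return system_gb + target_cache_gb + dedup_gb

    # Example: a production workload wanting ~15 GB of cache with 10 TB of
    # deduplicated data needs roughly 26 GB of RAM, pointing to a larger size.
    print(estimate_ram_gb(dedup_data_tb=10, target_cache_gb=15))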


AWS EC2 Best Practices

To get the best performance out of SoftNAS® in an AWS environment, consult the following best practices: