Own Servers vs Cloud Rental

When considering the infrastructure for a cloud gaming platform, we have two main options:
1. Deploying Own Servers (On-Premises)
2. Renting Virtual Machines in the Cloud

Option 1: Deploying Own Servers

Example Setup:
1.1 Hardware: NVIDIA RTX Blade Server
Specifications:
Blades/Nodes: 10 Twin Node Blades.
Form Factor:
Size: 8U rack unit (17.6" × 31.3" × 10.82")
Weight: 300 lbs.
Power Consumption:
Max System Power: 6.5 kW.
Average Power Usage: 5 kW.
Networking:
Dual 10 GbE per node.
4x 40 GbE connections for the entire system.
Node Specifications (per Twin Node)
CPU: 1x Intel Core i9 (8 cores, 16 threads)
Memory: 64GB DDR4 RAM
Storage: 1TB M.2 NVMe SSD
GPU: 2x NVIDIA Turing GPUs
Node Power Consumption: 450W
GPU Specification:
Model: NVIDIA RTX GPU (2000 series)
vGPU Software: GRID vGaming for virtualization.
CUDA Cores: 3,584 per GPU
Memory: 16GB GDDR6
TDP: 150 Watts per GPU
Power And Network Requirements:
Power:
Total Power Consumption:
Average: 5 kW.
Max: 6.5 kW.
Power Per User (Average):
5,000 W / 160 users = 31.25 W per user.
Networking:
Dual 10 GbE connections per node provide sufficient bandwidth for low-latency streaming.
Each user requires ~15-25 Mbps for 1080p gaming, so the setup can comfortably cover the aggregate bandwidth needs of 160 users (see the sketch below).
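A quick sanity check of the per-user power and aggregate bandwidth figures (a minimal Python sketch; the 15-25 Mbps range is the streaming estimate above, not a measured value):

```python
# Back-of-the-envelope power and bandwidth check for the blade setup.
AVG_POWER_W = 5_000   # average system draw from the spec above
USERS = 160           # target concurrent users

print(f"Power per user: {AVG_POWER_W / USERS:.2f} W")  # 31.25 W

# Streaming bandwidth at 1080p, using the ~15-25 Mbps/user estimate:
for mbps in (15, 25):
    print(f"At {mbps} Mbps/user: {USERS * mbps / 1_000:.1f} Gbps aggregate")
# -> 2.4-4.0 Gbps, well within the system's 4x 40 GbE uplinks.
```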
Performance Analysis
Concurrent User Capability
Per Node:
Each node has 2 NVIDIA GPUs.
Assuming 8 concurrent users per GPU (average for GRID vGaming with AAA games), each node can handle 16 concurrent users.
For 10 Nodes:
Total GPUs = 20.
20 GPUs × 8 users/GPU = 160 concurrent users (see the sketch below).
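The same capacity arithmetic as a short sketch, keeping the 8 users/GPU density as an explicit assumption (it is the GRID vGaming average quoted above, not a guaranteed figure):

```python
# Concurrent-user capacity of the 10-node blade system.
NODES = 10
GPUS_PER_NODE = 2
USERS_PER_GPU = 8  # assumed GRID vGaming density for AAA titles

total_gpus = NODES * GPUS_PER_NODE              # 20 GPUs
users_per_node = GPUS_PER_NODE * USERS_PER_GPU  # 16 users/node
total_users = total_gpus * USERS_PER_GPU        # 160 concurrent users
print(total_gpus, users_per_node, total_users)
```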
Scalability
This configuration is scalable: adding more nodes or blades increases concurrent user capacity.
Can integrate into existing data center infrastructure or edge locations.
Cost Estimate
Hardware Costs
Per Blade: ~$20,000-$25,000.
Total: ~$200,000-$250,000 for 10 blades.
Operational Costs (Annual)
Power Cost
Average Power: 5 kW × 24 hrs/day × 365 days/year = 43,800 kWh/year.
At $0.12/kWh, annual cost = ~$5,256.
Cooling Costs:
Assume cooling is 30% of power costs: ~$1,577/year.
Maintenance: ~$10,000/year for hardware repairs and software licensing (GRID vGaming).
Total Operating Cost: $5,256 + $1,577 + $10,000 ≈ $17,000/year (see the sketch below).
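The operating-cost arithmetic, reproduced as a sketch with the $0.12/kWh rate and the 30% cooling factor as stated assumptions:

```python
# Annual operating-cost estimate for the owned blade setup.
AVG_POWER_KW = 5.0
RATE_PER_KWH = 0.12    # assumed electricity rate
COOLING_FACTOR = 0.30  # cooling assumed at 30% of power cost
MAINTENANCE = 10_000   # repairs + GRID vGaming licensing (estimate)

energy_kwh = AVG_POWER_KW * 24 * 365        # 43,800 kWh/year
power_cost = energy_kwh * RATE_PER_KWH      # ~$5,256
cooling_cost = power_cost * COOLING_FACTOR  # ~$1,577
total = power_cost + cooling_cost + MAINTENANCE
print(f"${total:,.0f}/year")                # ~$16,833, i.e. ~$17,000
```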
1.2 Hardware: GFN Setup
Configuration for 16 Users Per Server
CPU: AMD Ryzen 16-core CPU.
Allocates 4 physical cores per user.
Requires one high-core-count CPU or dual-CPU setup.
RAM: 224 GB.
14 GB/user × 16 users = 224 GB.
GPUs: 16 NVIDIA RTX 3060 GPUs.
1 GPU per user.
Storage:
M.2 NVMe SSDs for low-latency game loading.
At least 2 TB per server for game installations.
Power Requirements:
RTX 3060 TDP: 170 W/GPU.
CPU and other components: ~300 W/server.
Total: 16 × 170 W + 300 W = 3,020 W (~3 kW/server).
Single User Hardware Configuration
CPU:
AMD Ryzen 16-core CPU (likely a Threadripper- or EPYC-class model).
Each user gets 4 physical cores and 8 logical cores via simultaneous multithreading (SMT).
RAM:
14 GB RAM allocated per user.
GPU:
NVIDIA RTX 3060 (12 GB VRAM).
Assumes 1 GPU per user without virtualization or splitting.
Scaling to 160 Users
For 160 users:
Hardware Totals:
CPU: 10 Ryzen 16-core CPUs.
RAM: 2,240 GB.
GPU: 160 NVIDIA RTX 3060 GPUs.
Storage: 20 TB NVMe SSDs.
Power Consumption:
10 × 3,020 W = 30,200 W (30.2 kW); see the sketch below.
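The per-server and fleet totals follow directly from the per-user figures; a sketch, assuming one dedicated RTX 3060 per user as described above:

```python
# GFN-style sizing: dedicated GPU per user, 14 GB RAM/user,
# 170 W per GPU, ~300 W base load per server.
GPU_TDP_W = 170
BASE_W = 300
RAM_PER_USER_GB = 14
USERS_PER_SERVER = 16

def server_specs(users: int) -> dict:
    """Per-server totals for a given number of dedicated-GPU users."""
    return {
        "gpus": users,
        "ram_gb": users * RAM_PER_USER_GB,
        "power_w": users * GPU_TDP_W + BASE_W,
    }

per_server = server_specs(USERS_PER_SERVER)  # 16 GPUs, 224 GB, 3,020 W
servers = 160 // USERS_PER_SERVER            # 10 servers for 160 users
fleet_kw = servers * per_server["power_w"] / 1_000
print(per_server, servers, f"{fleet_kw} kW")  # 30.2 kW total
```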


Comparing GFN Setup with Blade Server
Feature | GFN Setup (RTX 3060) | NVIDIA Blade Server (vGPU)
--- | --- | ---
GPU Usage | Dedicated (1 GPU/user) | Shared (1 GPU for ~8 users)
User Density | ~16 users/server | ~160 users/system (10 nodes)
Performance | Superior (dedicated resources) | Moderate (shared resources)
Cost Efficiency | High cost per user | Lower cost per user
Scalability | Moderate (more hardware needed) | High (add more nodes/blades)
Pros
Cost Efficient at Scale
Lower long-term costs when serving a large, consistent user base.
Fixed expenses after the initial investment, with no variable billing based on usage.
Full Control
Complete customization of hardware and software.
Easier to meet specific requirements (e.g., GPU types, networking).
Consistent Performance
Dedicated resources ensure consistent performance without noisy neighbors.
Data Privacy and Security
Greater control over sensitive user data and compliance.
No reliance on third-party infrastructure for data storage.
Predictable Costs
Operating expenses (power, cooling, maintenance) are stable and easy to budget year over year.
Cons
High Upfront Cost
Significant initial investment in hardware, networking, and setup.
Example: ~$200,000-$250,000 for a high-performance NVIDIA Blade Server setup (see Cost Estimate above).
Scaling Limitations
Adding capacity requires buying more hardware and time for procurement and installation.
Difficult to scale quickly for unexpected demand.
Maintenance Burden
Requires in-house staff or contracts for hardware repairs, software updates, and general maintenance.
Unexpected downtime can be challenging to address quickly.
Geographic Constraints
Limited to the locations where servers are physically installed.
Additional costs to deploy in multiple regions for low latency.
Ongoing Operational Costs
High power and cooling requirements, especially for GPU-heavy setups.

Option 2: Renting Virtual Machines in the Cloud

Example Setup:
Provider: AWS, Google Cloud, or Microsoft Azure.
Specifications (example instance):
AWS EC2 G4 Instance (g4dn.2xlarge):
GPU: 1x NVIDIA T4 Tensor Core GPU
CPU: 8 vCPUs (Intel Xeon Cascade Lake)
RAM: 32 GB
Storage: EBS (Elastic Block Store).
Estimated Costs:
Hourly Rate: ~$0.752/hour for a single g4dn.2xlarge instance (on-demand).
Monthly Cost (24/7): ~$537 per instance.
Scaling: 100 instances (~$53,700/month) for ~5,000 users (see the sketch below).
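A sketch of the monthly-cost arithmetic, using the on-demand rate above; actual AWS bills also include storage, data transfer, and regional price differences:

```python
# Cloud-rental cost sketch for g4dn.2xlarge instances.
HOURLY_RATE = 0.752  # $/hour, on-demand estimate from above
HOURS = 24 * 30      # ~one month of 24/7 operation
INSTANCES = 100

monthly_per_instance = HOURLY_RATE * HOURS        # ~$541/month
monthly_total = monthly_per_instance * INSTANCES  # ~$54,100/month
# (The ~$537 figure above assumes slightly fewer billed hours.)
print(f"${monthly_per_instance:,.0f}/instance, ${monthly_total:,.0f} total")
```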
Pros:
1. Flexibility:
Pay-as-you-go pricing: start small and scale as needed.
Easy to deploy in multiple regions for low latency.
2. Speed to Market
No need to wait for hardware procurement or setup.
3. Reduced Maintenance:
Cloud providers handle hardware upgrades, repairs, and availability.
Cons:
1. Higher Costs at Scale:
Long-term costs can exceed owning servers due to per-hour billing.
2. Performance Variability:
Shared infrastructure can introduce variability in performance.
3. Potential Vendor Lock-In
Difficult to migrate to another provider due to proprietary services and APIs.
Risk of price hikes or unfavorable changes in terms.


Comparison Summary
Feature | Deploy Own Servers | Rent Cloud VMs
--- | --- | ---
Upfront Cost | High | None
Recurring Costs | Low (e.g., maintenance) | High (usage-based fees)
Scaling | Slower (requires new hardware) | Instant
Control | Full (customization, data privacy) | Limited
Setup Time | Months (procurement, installation) | Minutes
Performance | Consistent (dedicated resources) | May vary (shared resources)
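To tie the two options together, a hedged break-even sketch. It compares the blade setup's capex and opex against a hypothetical cloud bill for the same 160 users; the 20-instance figure assumes ~8 users per T4 instance, mirroring the vGPU density above, and is an assumption, not an AWS quote:

```python
# Own-vs-rent break-even template. Plug in costs for equivalent
# capacity before drawing real conclusions.
def months_to_break_even(capex: float, opex_monthly: float,
                         cloud_monthly: float) -> float:
    """Months after which cumulative cloud spend exceeds owning."""
    if cloud_monthly <= opex_monthly:
        return float("inf")  # renting never overtakes owning
    return capex / (cloud_monthly - opex_monthly)

capex = 225_000     # midpoint of the $200k-$250k hardware estimate
opex = 17_000 / 12  # ~$1,417/month operating cost
cloud = 20 * 537    # assumed: 20 instances x ~$537/month for 160 users
print(f"{months_to_break_even(capex, opex, cloud):.1f} months")  # ~24
```

Under these assumptions, owning breaks even after roughly two years of continuous operation, which is consistent with the "cost efficient at scale" argument above.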

 