Fully Customizable Server Solutions

With over 1,500 base systems waiting to be customized, Novarion offers a huge variety of server systems, ranging from small-business needs to high-performance computing in areas such as artificial intelligence and quantum computing.

Create your own server using our configurator and receive an offer shortly.

*Due to the ongoing chip shortage, prices and availability of components may change. For more information and technical support, please contact our system architects via the contact form at the bottom of the page or by phone.

Quanton Overview

AMD Based Servers

High-Performance Computing with AMD EPYC
  • CPU: Dual AMD EPYC, up to 64 cores per CPU
  • RAM: Up to 8TB DDR4-3200
  • Size: Up to 8 units (twin server)

Built on the x86 architecture innovations of the record-setting EPYC 7002 processors, AMD EPYC™ 7003 series processors are the new standard for the modern data center. With high frequencies, high core counts, high memory bandwidth and capacity, and up to 32MB of L3 cache per core, AMD EPYC 7003 processors enable exceptional HPC performance.

Intel Based Servers

High-Performance Computing with the Xeon Scalable Family
  • CPU: Dual Intel Xeon Platinum, up to 56 cores per CPU
  • RAM: Up to 4TB DDR4-3200 for dual-CPU systems
  • Size: Up to 8 units (twin server)

Scalable Intel® Xeon® processors deliver industry-leading, workload-optimized performance features with built-in AI acceleration. They provide a perfectly tuned performance foundation to accelerate the transformative impact of data from the network edge to the cloud.

Optimized Chassis Solution with Supermicro

• System Level Optimized Solutions
• Full Range of Rackmount, Workstation & Tower Chassis – mini-1U, 1U, 2U, 3U & 4U
• Support for SAS3 12Gb/s & NVMe technology
• Support for the latest X12 generation motherboards and 3rd Generation Intel® Xeon® Scalable processors
• Optimized thermal design: better cooling performance, less power consumption
• Up to Titanium Level (96%) High-Efficiency Power Supplies
• High Availability, Adaptability, Scalability and Reliability
• Outstanding Price/Performance Ratio
• Highest Quality Components, Fans & Power Supplies
• Hot-Swappable Redundant Power Supplies, Fans and Drive Trays
• Maximum Storage Capacity

High Performance GPU Based Systems

Novarion's high-performance GPU servers let you accelerate your most demanding HPC and hyperscale workloads in the data center. Data scientists and researchers can now parse petabyte-scale data sets much faster than with traditional CPUs.
Applications range from energy research to deep learning.

NVIDIA GPUs deliver tremendous performance to run more extensive simulations faster than ever. In addition, NVIDIA GPUs deliver the highest performance for virtual desktops, applications and workstations while supporting high user density.


NVIDIA NVLink

Source: https://www.nvidia.com/de-de/data-center/nvlink/

NVIDIA® NVLink® is a direct high-speed connection between GPUs. NVIDIA NVSwitch™ takes interconnectivity to the next level by integrating multiple NVLinks to enable all-to-all GPU communication at full NVLink speed within a single node such as NVIDIA HGX™ A100.

With the combination of NVLink and NVSwitch, NVIDIA was able to efficiently scale AI performance across multiple GPUs and win MLPerf 0.6, the first industry-wide AI benchmark.

NVIDIA NVSwitch

Source: https://www.nvidia.com/de-de/data-center/nvlink/

With the rapid spread of deep learning, the need for faster and more scalable networking has also increased. This is because PCIe bandwidth often proves to be a bottleneck for multi-GPU systems. Scaling deep learning workloads requires significantly higher bandwidth and lower latency.

NVIDIA NVSwitch builds on NVLink's advanced communication capability to solve this problem. For even higher deep learning performance, a GPU fabric supports more GPUs on a single server, networked together through full-bandwidth connections. Each GPU has 12 NVLinks to the NVSwitch, enabling high-speed all-to-all communication.
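As a back-of-the-envelope illustration of why NVLink relieves the PCIe bottleneck mentioned above, the sketch below compares aggregate per-GPU NVLink bandwidth against a PCIe 4.0 x16 slot. The figures assume third-generation NVLink as used with the A100 (12 links per GPU at 50 GB/s bidirectional each); exact numbers vary by generation.

```python
# Third-generation NVLink (A100): 12 links per GPU, 50 GB/s
# bidirectional per link. These are assumed reference figures.
LINKS_PER_GPU = 12
GB_PER_LINK = 50  # GB/s, bidirectional, per link

nvlink_total = LINKS_PER_GPU * GB_PER_LINK
print(f"Aggregate NVLink bandwidth per GPU: {nvlink_total} GB/s")

# Compare against a PCIe 4.0 x16 slot (~32 GB/s per direction),
# the bottleneck the text refers to for multi-GPU systems:
pcie4_x16 = 32
print(f"Ratio vs. PCIe 4.0 x16: {nvlink_total / pcie4_x16:.2f}x")
```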

Source: https://www.nvidia.com/de-de/data-center/a100/

High-Performance Computing - Nvidia A100

To unlock the next generation of discovery, scientists are turning to simulations to better understand the world around us.

NVIDIA A100 introduces double-precision Tensor Cores, representing the biggest leap in HPC performance since the introduction of GPUs. Combined with 80GB of the fastest graphics memory, researchers can reduce a previously 10-hour double-precision simulation to less than four hours on A100. HPC applications can also leverage TF32, achieving up to 11 times higher throughput on dense single-precision matrix multiplication tasks.

For HPC applications with large data sets, the additional memory of the A100 80GB provides up to a 2x increase in throughput. Its massive memory capacity and unmatched memory bandwidth make the A100 80GB the ideal platform for next-generation workloads.

About AMD Epyc

Built on the x86 architecture innovations of the record-setting EPYC 7002 processors, AMD EPYC™ 7003 series processors are the new standard for the modern data center. With high frequencies, high core counts, high memory bandwidth and capacity, and up to 32MB of L3 cache per core, AMD EPYC 7003 processors enable exceptional HPC performance.
Along with the high memory bandwidth achieved through support for 8 channels of DDR4-3200 memory, EPYC 7003 processors also synchronize the data fabric clock to match the memory clock speed, further improving both memory bandwidth and latency. Support for up to 4TB of memory per socket enhances the ability to handle very large datasets. The extra-large L3 cache, reaching up to 256MB per CPU and up to 32MB per core, helps to efficiently utilize up to 64 cores per CPU.
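The bandwidth advantage of 8 channels of DDR4-3200 can be sanity-checked with the standard channels × transfer rate × bus width formula. The sketch below computes the theoretical peak per socket; real-world throughput will be lower.

```python
# Theoretical peak memory bandwidth for one EPYC 7003 socket:
# 8 channels of DDR4-3200, each with a 64-bit (8-byte) data bus.
channels = 8                 # DDR4 channels per socket
transfers_per_sec = 3200e6   # DDR4-3200: 3.2 GT/s per channel
bytes_per_transfer = 8       # 64-bit channel width

peak_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"Peak memory bandwidth per socket: {peak_gb_s:.1f} GB/s")
```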

About Intel

Platform innovations and hardware-enhanced virtualization of compute, networking and mass storage infrastructure, together with support for a new class of innovative storage, enable cost-effective, flexible and scalable edge-to-cloud environments that deliver consistently high-quality experiences in business-to-business and business-to-consumer settings.
Scalable Intel® Xeon® processors deliver industry-leading, workload-optimized performance features with built-in AI acceleration. They provide a perfectly tuned performance foundation to accelerate the transformative impact of data from the network edge to the cloud.

*The systems shown serve as a basis and should be regarded as example configurations.
Depending on the desired configuration, we offer customized systems that match your requirements exactly.
If you have any questions regarding the systems or have more specific requirements, please use the contact form below.


Overview

– Hardware-level root of trust support
– 2U – 4 nodes front access server system
– Dual AMD EPYC™ 7003 series processor family
– 8 x LGA 4094 sockets
– 8-channel RDIMM/LRDIMM DDR4 per processor, 128 x DIMMs
– 8 x 1Gb/s LAN ports (Intel® I350-AM2)
– 4 x Dedicated management ports
– 2 x CMC global management port
– 8 x 2.5″ Gen4 NVMe hot-swappable SSD bays
– 8 x M.2 with PCIe Gen4 x4 interface
– 4 x Low profile PCIe Gen4 x16 expansion slots
– 4 x OCP 3.0 Gen4 x16 mezzanine slots
– Dual 3200W (240V) 80 PLUS Platinum redundant PSU

Intended use of HPC Cluster

High-performance computing (HPC) clusters are used to process computing tasks that are divided among several nodes.

Either a single task is split into parts that are executed in parallel on several nodes, or whole computing tasks (called jobs) are distributed to the individual nodes.

The distribution of the jobs is usually handled by a job management system. HPC clusters are often found in the scientific field. As a rule, the individual elements of a cluster are connected to each other via a fast network. So-called render farms also fall into this category.
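As a minimal illustration of the job-distribution step described above, the sketch below assigns jobs to nodes round-robin. The names and the policy are our own, deliberately trivial stand-ins for what a real job management system does.

```python
from collections import defaultdict
from itertools import cycle

def distribute(jobs, nodes):
    """Assign jobs to nodes round-robin -- a toy stand-in for a
    real job management system such as LSF or Slurm."""
    assignment = defaultdict(list)
    for job, node in zip(jobs, cycle(nodes)):
        assignment[node].append(job)
    return dict(assignment)

plan = distribute(["job1", "job2", "job3", "job4", "job5"],
                  ["node-a", "node-b"])
# node-a receives job1, job3, job5; node-b receives job2, job4
```

A production scheduler would additionally account for node load, job priorities and resource requirements; the round-robin policy only shows the basic fan-out.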

Technical side of HPC Cluster

In HPC clusters, the task to be done, the “job”, is often broken down into smaller parts by means of a decomposition program and then distributed to the nodes.
Communication between job parts running on different nodes is usually done using Message Passing Interface (MPI), since fast communication between individual processes is desired. To do this, one couples the nodes with a fast network such as InfiniBand.
A common method for distributing jobs across an HPC cluster is a job scheduler such as Load Sharing Facility (LSF) or Network Queueing System (NQS), which can distribute jobs according to different policies.
More than 90% of the TOP500 supercomputers are Linux clusters, not least because inexpensive commodity off-the-shelf (COTS) hardware can be used for demanding computing tasks.
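The decompose-distribute-gather pattern described above can be sketched without a real cluster. The snippet below splits a job (summing a range of numbers) into chunks and processes them in parallel, mirroring what an MPI scatter/reduce does across nodes; Python's `multiprocessing` stands in for the interconnect, and all names are illustrative.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each "node" computes its share of the job.
    return sum(chunk)

def decompose(data, parts):
    # Split the job into roughly equal pieces (the decomposition step).
    size = (len(data) + parts - 1) // parts
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    job = list(range(100_000))
    chunks = decompose(job, parts=4)
    with Pool(4) as pool:
        results = pool.map(partial_sum, chunks)  # "scatter" + compute
    total = sum(results)                         # "gather"/reduce
    assert total == sum(job)
```

On a real cluster the chunks would travel over InfiniBand via MPI rather than between local processes, but the structure of the computation is the same.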

Architecture for Modern Data Centers

– Up to 64 Cores
– 8 Channels of DDR4-3200
– Up to 4TB Memory Capacity
– 128 lanes PCIe 4.0
– 2-way SMT & Turbo Boost
– 4-, 6- and 8-channel memory interleave
– Synchronized Fabric and Memory Clock Speeds
– Secure Memory Encryption
– Secure Encrypted Virtualization

High Performance

OCP 3.0 Ready

Novarion offers servers that feature an onboard OCP 3.0 slot for the next generation of add-on cards.

Advantages of this new form factor include:

Easy serviceability – simply slide the card in or out, without opening the server or using tools
Improved thermal design – the horizontal position and optimized heat sink design allow air cooling to dissipate heat efficiently

Power Efficiency

  • Automatic Fan Speed Control

    Novarion servers are enabled with Automatic Fan Speed Control to achieve the best cooling and power efficiency. Individual fan speeds will be automatically adjusted according to temperature sensors strategically placed in the servers.
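The control loop behind such a feature can be sketched as a temperature-to-duty-cycle mapping. The thresholds and the linear ramp below are illustrative assumptions, not Novarion's actual fan curve.

```python
def fan_duty(temp_c, min_duty=20, max_duty=100,
             low_temp=30.0, high_temp=75.0):
    """Map a temperature sensor reading (deg C) to a fan duty
    cycle (%) with a linear ramp between two thresholds.
    All thresholds here are made up for illustration."""
    if temp_c <= low_temp:
        return min_duty
    if temp_c >= high_temp:
        return max_duty
    frac = (temp_c - low_temp) / (high_temp - low_temp)
    return round(min_duty + frac * (max_duty - min_duty))

# Each sensor reading drives the fans in its zone:
for sensor, temp in {"cpu0": 62.5, "inlet": 28.0}.items():
    print(sensor, fan_duty(temp), "%")
```

Real BMC firmware typically uses PID control and per-zone fan tables rather than a single linear ramp, but the principle of tying fan speed to strategically placed sensors is the same.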

Immediately Available Systems Overview

On the following page you will find server systems whose components are immediately available and can be ordered on request. Of course, it is possible to customize these systems if needed. If you have special wishes and want to make changes to these systems, please contact us using the contact form below. Our system architects will be happy to help you.

Rackmount Chassis 2U

Chipset: Intel® C621A (Ice Lake) chipset with fully functional integrated IPMI remote management

CPU: Intel® Xeon® Silver 4310 • 2.10(3.30)GHz • S4189 • 18MB • 12C/24T • 2666MHz • 2xUPI • 120W • max. 6TB

Main Memory: DIMM 16GB DDR4 • 3200MHz • ECC • Registered • SRx4 • 1.2V (Premium tested A-bin package chips for optimized performance)

Controller: Broadcom 9460-8i PCIe 3.1 (x8) SAS-3/NVMe controller • 2GB • 8-port (8i) • RAID 0/1/5/6/10/50/60 • LP

Hot-Swap SATA SSDs: 480GB SSD • 2.5” • 3D TLC • 7mm • SATA-3 • 1.3DWPD/3Y

Rackmount Chassis 4U

Chipset: Intel® C621A (Ice Lake) chipset with fully functional integrated IPMI remote management

CPU: Intel® Xeon® Silver 4310 • 2.10(3.30)GHz • S4189 • 18MB • 12C/24T • 2666MHz • 2xUPI • 120W • max. 6TB

Main Memory: DIMM 16GB DDR4 • 3200MHz • ECC • Registered • SRx4 • 1.2V (Premium tested A-bin package chips for optimized performance)

Controller: Broadcom 9460-8i PCIe 3.1 (x8) SAS-3/NVMe controller • 2GB • 8-port (8i) • RAID 0/1/5/6/10/50/60 • LP

Hot-Swap SATA Drives: 18TB hard drive • SATA-3 • 7,200rpm • RE • 512e/4kn • helium-filled

Rackmount Tower System 4U

Chipset: Intel® C621A (Ice Lake) chipset with fully functional integrated IPMI remote management

CPU: Intel® Xeon® Silver 4310 • 2.10(3.30)GHz • S4189 • 18MB • 12C/24T • 2666MHz • 2xUPI • 120W • max. 6TB

Main Memory: DIMM 16GB DDR4 • 3200MHz • ECC • Registered • SRx4 • 1.2V (Premium tested A-bin package chips for optimized performance)

Controller: Broadcom 9460-8i PCIe 3.1 (x8) SAS-3/NVMe controller • 2GB • 8-port (8i) • RAID 0/1/5/6/10/50/60 • LP

Hot-Swap SATA SSDs: 480GB SSD • 2.5” • 3D TLC • 7mm • SATA-3 • 1.3DWPD/3Y

Interested? Contact us!

+43 1 5441159-8004

sales@novarion.com

    Your Name*

    Your Email*

    Which Products are you interested in?

    Your Message*