Instances

Conveyor supports the following instance types for all jobs:

| Instance type | CPU | Total Memory (AWS) | Total Memory (Azure) |
| --- | --- | --- | --- |
| mx.nano | 1* | 0.438 GB | 0.375 GB |
| mx.micro | 1* | 0.875 GB | 0.75 GB |
| mx.small | 1* | 1.75 GB | 1.5 GB |
| mx.medium | 1 | 3.5 GB | 3 GB |
| mx.large | 2 | 7 GB | 6 GB |
| mx.xlarge | 4 | 14 GB | 12 GB |
| mx.2xlarge | 8 | 29 GB | 26 GB |
| mx.4xlarge | 16 | 59 GB | 55 GB |
| cx.nano | 1* | 0.219 GB | Not supported |
| cx.micro | 1* | 0.438 GB | Not supported |
| cx.small | 1* | 0.875 GB | Not supported |
| cx.medium | 1 | 1.75 GB | Not supported |
| cx.large | 2 | 3.5 GB | Not supported |
| cx.xlarge | 4 | 7 GB | Not supported |
| cx.2xlarge | 8 | 14 GB | Not supported |
| cx.4xlarge | 16 | 29 GB | Not supported |
| rx.xlarge | 4 | 28 GB | Not supported |
| rx.2xlarge | 8 | 59 GB | Not supported |
| rx.4xlarge | 16 | 120 GB | Not supported |
info

(*) These instance types are not guaranteed a full CPU, only a slice of one, but they are allowed to burst up to a full CPU if the cluster allows it.

The numbers for AWS and Azure differ because the nodes on each cloud run different DaemonSets and have different reservation requirements set by the provider. We aim to minimize node overhead as much as possible while still meeting the minimum requirements of each cloud provider.
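
For illustration, here is a minimal sketch of how one of these T-shirt sizes could be requested for an Airflow task using the ContainerOperator described later on this page. The import path and the instance_type parameter name are assumptions used only to illustrate the idea; check the operator reference for the exact signature.

```python
# Minimal sketch: request a specific T-shirt size for a containerized job.
# The import path and the instance_type parameter name are assumptions.
from conveyor.operators import ContainerOperator  # assumed module path

ingest = ContainerOperator(
    task_id="ingest_raw_data",
    instance_type="mx.medium",  # 1 CPU, 3.5 GB (AWS) / 3 GB (Azure) in the table above
)
```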

Spark resources

When running Spark/PySpark applications, only part of the container's total memory is available to Spark itself. The details are shown in the following tables:

AWS

| Instance type | CPU | Total Memory (AWS) | Spark memory (AWS) | PySpark memory (AWS) |
| --- | --- | --- | --- | --- |
| mx.micro | 1* | 0.875 GB | 0.8 GB | 0.6 GB |
| mx.small | 1* | 1.75 GB | 1.6 GB | 1.25 GB |
| mx.medium | 1 | 3.5 GB | 3.2 GB | 2.5 GB |
| mx.large | 2 | 7 GB | 6.4 GB | 5 GB |
| mx.xlarge | 4 | 14 GB | 12.7 GB | 10 GB |
| mx.2xlarge | 8 | 29 GB | 26.7 GB | 21 GB |
| mx.4xlarge | 16 | 59 GB | 54 GB | 42.4 GB |
| cx.medium | 1 | 1.75 GB | 1.6 GB | 1.25 GB |
| cx.large | 2 | 3.5 GB | 3.2 GB | 2.5 GB |
| cx.xlarge | 4 | 7 GB | 6.4 GB | 5 GB |
| cx.2xlarge | 8 | 14 GB | 12.7 GB | 10 GB |
| cx.4xlarge | 16 | 29 GB | 26.7 GB | 21 GB |
| rx.xlarge | 4 | 28 GB | 26 GB | 21 GB |
| rx.2xlarge | 8 | 59 GB | 54 GB | 43 GB |
| rx.4xlarge | 16 | 120 GB | 112 GB | 88 GB |
info

(*) These instance types are not guaranteed a full CPU, only a slice of one, but they are allowed to burst up to a full CPU if the cluster allows it.

Azure

| Instance type | CPU | Total Memory (Azure) | Spark memory (Azure) | PySpark memory (Azure) |
| --- | --- | --- | --- | --- |
| mx.micro | 1* | 0.75 GB | 0.69 GB | 0.55 GB |
| mx.small | 1* | 1.5 GB | 1.38 GB | 1.1 GB |
| mx.medium | 1 | 3 GB | 2.75 GB | 2.15 GB |
| mx.large | 2 | 6 GB | 5.5 GB | 4.3 GB |
| mx.xlarge | 4 | 12 GB | 11 GB | 8.6 GB |
| mx.2xlarge | 8 | 26 GB | 23.6 GB | 18.6 GB |
| mx.4xlarge | 16 | 55 GB | 50 GB | 35.7 GB |
info

(*) These instance types are not guaranteed a full CPU, only a slice of one, but they are allowed to burst up to a full CPU if the cluster allows it.

As the tables show, the available executor memory differs depending on whether you run regular (Scala) Spark or PySpark. The reason is the spark.kubernetes.memoryOverheadFactor setting in the Spark configuration: it is set to 0.1 for JVM jobs (Scala and Java Spark) and to 0.4 for non-JVM jobs (PySpark, SparkR). This portion of the memory is set aside for non-JVM usage such as off-heap memory allocations, system processes, Python, and R. Without it, your job would commonly fail with the error "Memory Overhead Exceeded".
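
As a rough illustration (not an official formula), the Spark and PySpark columns above are approximately the total container memory divided by one plus the overhead factor; the published values are rounded, so small differences remain.

```python
# Rough sketch: approximate the Spark/PySpark memory columns from the total
# container memory and spark.kubernetes.memoryOverheadFactor.
# The table values are rounded, so treat this as an approximation.

JVM_OVERHEAD_FACTOR = 0.1      # Scala/Java Spark jobs
NON_JVM_OVERHEAD_FACTOR = 0.4  # PySpark/SparkR jobs

def usable_spark_memory(total_memory_gb: float, overhead_factor: float) -> float:
    """Memory left for the Spark process after reserving the memory overhead."""
    return total_memory_gb / (1 + overhead_factor)

# Example: mx.medium on AWS has 3.5 GB of total container memory.
print(round(usable_spark_memory(3.5, JVM_OVERHEAD_FACTOR), 1))      # ~3.2 GB for (Scala) Spark
print(round(usable_spark_memory(3.5, NON_JVM_OVERHEAD_FACTOR), 1))  # 2.5 GB for PySpark
```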

Disk space allocation

When an application saves data to disk, it will by default consume disk space from the host it is running on. Note that this disk space is shared across all jobs running on the same physical machine. Applications cannot read each other's files, but a particularly storage-hungry application might consume all available disk space, potentially causing issues for other jobs on the same host.

Applications requesting a T-shirt size of mx.xlarge or greater get the "full" instance assigned. This means that no other applications are deployed on that instance, so they do not suffer from the "noisy neighbor" problem. Applications running on smaller instance sizes receive a slice of a physical machine and share the available disk space (about 50 GB of allocatable space).

To avoid this issue, you can provision application-specific storage by specifying the disk_size (and optionally disk_mount_path) when using the ContainerOperator.
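
As a minimal sketch, a task with dedicated storage could look like the snippet below. The disk_size and disk_mount_path parameters are the ones mentioned above; the import path, the unit of disk_size, and the other values are assumptions, so verify them against the operator reference.

```python
# Minimal sketch: provision application-specific storage for a containerized job.
# Import path, disk_size unit, and mount path are assumptions for illustration.
from conveyor.operators import ContainerOperator  # assumed module path

clean_files = ContainerOperator(
    task_id="clean_files",
    instance_type="mx.small",    # shares a node, so the default ~50 GB host disk is shared too
    disk_size=20,                # assumed unit: GB of dedicated storage for this task
    disk_mount_path="/scratch",  # optional: where the dedicated volume is mounted
)
```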

Spark applications can make use of the equivalent executor_disk_size when using the SparkSubmitOperator. This setting will provision additional storage for each executor, which will then be automatically used by Spark.
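
For Spark jobs, a comparable sketch is shown below; executor_disk_size comes from this page, while the import path and the remaining parameters are assumed names used purely for illustration.

```python
# Minimal sketch: provision extra disk for each Spark executor.
# Import path and parameters other than executor_disk_size are assumptions.
from conveyor.operators import SparkSubmitOperator  # assumed module path

transform = SparkSubmitOperator(
    task_id="transform_sales",
    application="local:///opt/app/transform.py",  # hypothetical application location
    executor_disk_size=100,                       # assumed unit: GB of storage per executor
)
```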