What’s New With Amazon Redshift WLM?

On Monday, AWS announced significant improvements to Amazon Redshift's automatic WLM (workload management), designed to help optimize performance for the most demanding analytical workloads.

Amazon Redshift has supported very complex workloads on data clusters for over eight years now, and an increasing share of those workloads comes from data science and machine learning (ML) use cases.

So what’s new with Amazon Redshift? And how can Auto WLM deliver a more consistent experience for each of your workloads?

In the words of the AWS data and software engineers and Paul Lappas, Principal Product Manager for Amazon Redshift:

“Workload management allows you to route queries to a set of defined queues to manage the concurrency and resource utilization of the cluster. Today, Amazon Redshift has both automatic and manual configuration types. Amazon Redshift Auto WLM doesn’t require you to define the memory utilization or concurrency for queues, instead it adjusts the concurrency dynamically to optimize for throughput. Optionally, you can define queue priorities in order to provide queries preferential resource allocation based on your business priority.”

Shown above: how queries move through Amazon Redshift with the Auto WLM improvements.
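To make the configuration side of that quote concrete, here is a minimal sketch (using Python and boto3) of how you might apply an Auto WLM setup with queue priorities through the wlm_json_configuration parameter of a cluster parameter group. The parameter group name, queue names, and user groups below are hypothetical placeholders, and the exact JSON properties should be double-checked against the current Redshift documentation.

```python
import json

import boto3

# Hypothetical Auto WLM configuration: two queues with different business
# priorities plus short query acceleration. Queue names and user groups are
# placeholders -- adjust them to match your own cluster.
wlm_config = [
    {
        "name": "dashboards",        # latency-sensitive BI queries
        "user_group": ["bi_users"],
        "priority": "high",
        "auto_wlm": True,
    },
    {
        "name": "etl",               # long-running batch loads
        "user_group": ["etl_users"],
        "priority": "low",
        "auto_wlm": True,
    },
    {
        "short_query_queue": True,   # enable short query acceleration
    },
]

redshift = boto3.client("redshift", region_name="us-east-1")

# Apply the configuration to a (hypothetical) custom parameter group that is
# already associated with the cluster.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="my-redshift-params",
    Parameters=[
        {
            "ParameterName": "wlm_json_configuration",
            "ParameterValue": json.dumps(wlm_config),
        }
    ],
)
```

Note that some WLM changes, such as switching a cluster between manual and Auto WLM, only take effect after a cluster reboot.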

Here are the key areas of performance updates and improvements compared to manual WLM:

Proper allocation of memory – Reduction of over-allocation of memory creates more room for other queries to run and increases concurrency. Additionally, reduction of under-allocation reduces spill to disk and therefore improves query performance.

Elimination of static partitioning of memory between queues – This frees up the entire memory pool, which becomes available to any query that needs it.

Improved throughput – You can pack more queries into the system due to more efficient memory utilization.
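To see how these improvements play out on your own cluster, one option is to query Redshift’s WLM system views. The sketch below assumes a psycopg2 connection with placeholder credentials; it summarizes per-queue queueing and execution times from stl_wlm_query and lists queries with disk-based (spilled) steps from svl_query_summary. Treat it as a starting point rather than a definitive report.

```python
import psycopg2

# Placeholder connection details -- replace with your own cluster endpoint.
conn = psycopg2.connect(
    host="my-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="admin",
    password="REPLACE_ME",
)

QUEUE_STATS_SQL = """
    SELECT service_class,
           COUNT(*)                    AS queries,
           AVG(total_queue_time) / 1e6 AS avg_queue_seconds,
           AVG(total_exec_time)  / 1e6 AS avg_exec_seconds
    FROM stl_wlm_query
    WHERE service_class > 5   -- low service_class IDs are reserved for system queues
    GROUP BY service_class
    ORDER BY service_class;
"""

SPILL_SQL = """
    SELECT query, COUNT(*) AS diskbased_steps
    FROM svl_query_summary
    WHERE is_diskbased = 't'  -- steps that spilled to disk
    GROUP BY query
    ORDER BY diskbased_steps DESC
    LIMIT 10;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUEUE_STATS_SQL)
    for service_class, queries, queue_s, exec_s in cur.fetchall():
        print(f"queue {service_class}: {queries} queries, "
              f"avg queue {queue_s:.2f}s, avg exec {exec_s:.2f}s")

    cur.execute(SPILL_SQL)
    for query_id, steps in cur.fetchall():
        print(f"query {query_id} had {steps} disk-based steps")

conn.close()
```

If Auto WLM is doing its job, you would expect to see queue times stay low as concurrency rises and fewer steps spilling to disk than under an equivalent manual configuration.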

Find out more

Be sure to visit the official AWS Big Data blog to learn more about adaptive concurrency within Amazon Redshift Auto WLM. You’ll also be able to view detailed benchmark tests and latency reports, perhaps giving you enough reason to switch from manual WLM to Auto WLM. 👍