AWS Analytics Modernisation Week 2021

We’re quickly approaching July and, with it, AWS Analytics Modernisation Week. On Monday 19th July, AWS kicks off its four-day event, designed to help businesses accelerate their learning around data, analytics and machine learning best practices, and to execute a modern, agile approach to analytics in the cloud.

In this event, sessions will focus on connecting disparate data assets via a Lake House approach, highlighting real-time use cases that leverage both analytics and machine learning to drive better insights, as well as showcasing how to build, manage, govern and secure a multitude of data assets.

We’ve made note of the full agenda so you can take a look at what’s on the horizon and register for the day that will benefit you most. Many of Firemind’s team will be participating in the event, especially for some of the talks on Tuesday (you’ll find out why in the agendas below):

Check Your Level

The events and talks are split into session proficiency levels, so you don’t find yourself in a talk scratching your head because the language is deeply technical. The scoring structure below will help you decide what you’ll be comfortable with:

Level 100 (Introductory)

Sessions will focus on providing an overview of AWS services and features, with the assumption that attendees are new to the topic.

Level 200 (Intermediate)

Sessions will focus on providing best practices, details of service features and demos with the assumption that attendees have an introductory knowledge of the topics.

Level 300 (Advanced)

Sessions will dive deeper into the selected topic. Presenters assume that the audience has some familiarity with the topic, but may or may not have direct experience implementing a similar solution.

Level 400 (Expert)

Sessions are for attendees who are deeply familiar with the topic, have implemented a solution on their own already and are comfortable with how the technology works across multiple services, architectures and implementations.

Monday 19th July

Analyze Data Across Your Lake House Architecture with Amazon Redshift (Level 300)

9:00 AM – 10:15 AM PT | 12:00 PM – 1:15 PM ET

Organizations can gain deeper and richer insights by bringing together all relevant data, of every structure and type and from every source, for analysis using a Lake House Architecture. You can use Amazon Redshift to query data across Redshift clusters, your S3 data lake, Amazon Aurora, and other operational databases with Redshift Data Sharing, Redshift Spectrum, and Redshift Federated Query. Learn how to enable analytics across a broad range of sources without having to move data physically across different systems.
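To make the idea concrete, here is a rough sketch of how the features named above fit together in SQL. Every schema, table, role and endpoint name is a placeholder for illustration, not something taken from the session itself:

```sql
-- Hypothetical names throughout: schemas, tables, roles and endpoints
-- are illustrative placeholders.

-- Redshift Spectrum: expose an S3 data lake table via the Glue Data Catalog.
CREATE EXTERNAL SCHEMA lake
FROM DATA CATALOG DATABASE 'sales_lake'
IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole';

-- Federated Query: expose a live Aurora PostgreSQL database.
CREATE EXTERNAL SCHEMA ops
FROM POSTGRES DATABASE 'orders' SCHEMA 'public'
URI 'my-aurora-cluster.cluster-example.eu-west-1.rds.amazonaws.com'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:eu-west-1:123456789012:secret:ops-creds';

-- One query can then join warehouse, lake and operational data in place,
-- with no data movement between systems.
SELECT c.region, SUM(l.revenue)
FROM public.customers c
JOIN lake.sales_history l ON l.customer_id = c.customer_id
JOIN ops.orders o         ON o.customer_id = c.customer_id
GROUP BY c.region;
```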

Enabling Seamless 360-degree View and Sales Planning for a Global Pharmaceutical Company

AWS Partner Agilisium shares how a global pharmaceutical company maximized the impact of their sales and marketing efforts through granular information on product movement and sales progression. Through an integrated solution with S3, Redshift, Spectrum/Athena, and RDS, the company was able to view activities and process logs to boost their marketing and strategic planning, while running the platform in a cost-effective way.

Democratize Machine Learning with Amazon Redshift ML (Level 200)

10:15 AM – 11:00 AM PT | 1:15 PM – 2:00 PM ET

Learn how your data analysts can create, train, and apply machine learning models using familiar SQL commands in Amazon Redshift data warehouses. With Redshift ML, you can take advantage of Amazon SageMaker, a fully managed machine learning service, without learning new tools or languages. Simply use SQL statements to create and train Amazon SageMaker machine learning models using your Redshift data and then use these models to make predictions.
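As a flavour of what “familiar SQL commands” means here, the sketch below shows the general shape of training and using a model with Redshift ML. Table, column, role and bucket names are all hypothetical:

```sql
-- Hypothetical table, columns, role and bucket throughout.

-- Train: Redshift hands the SELECT's result set to Amazon SageMaker,
-- which trains a model and compiles it back into Redshift as a SQL function.
CREATE MODEL churn_model
FROM (SELECT age, plan, monthly_spend, churned FROM customer_activity)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-bucket');

-- Predict: call the generated function like any other SQL function.
SELECT customer_id, predict_churn(age, plan, monthly_spend) AS will_churn
FROM customer_activity;
```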

Access Deeper Insights with Machine Learning (Level 100)

11:00 AM – 12:00 PM PT | 2:00 PM – 3:00 PM ET

Learn how your business users can leverage AI/ML capabilities to enhance dashboards and reporting with no coding necessary. Using QuickSight’s built-in anomaly detection, forecasting, natural language processing, and the newly announced QuickSight Q, any business user, regardless of technical skill, can ask deeply analytical questions on all of your data and receive answers in seconds. Additionally, integration with Amazon SageMaker opens the door to bringing your own predictive models to enrich your data visualizations.

Tuesday 20th July

Deliver Better Customer Experiences With Machine Learning in Real-Time (Level 200)

9:00 AM – 10:15 AM PT | 12:00 PM – 1:15 PM ET

Organizations are increasingly using machine learning to make near-real-time decisions, such as placing an ad, assigning a driver, recommending a product, or even dynamically pricing products and services. Real-time machine learning can substantially enhance your customers’ experience, resulting in better engagement and retention. In this session, you will learn how you can use AWS data streaming platforms to support real-time machine learning.

We’re especially excited for the event as we begin to harness more ML capabilities within our own client projects. Make sure you are able to attend this one!

Reveal Key Consumer Insights Using Real-Time Sentiment Analysis (Level 300)

10:15 AM – 11:30 AM PT | 1:15 PM – 2:30 PM ET

In this session, we’ll demonstrate how to perform real-time sentiment analysis on top of incoming customer reviews with serverless AWS technologies and natural language processing. You’ll learn how to use Amazon Kinesis Data Streams, Kinesis Data Analytics, and Amazon Comprehend to power this approach and how it can be applied to other use cases such as real-time translation, PII detection, and redaction.
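The shape of that pipeline can be sketched in a few lines of plain Python: records arrive from a stream in micro-batches, each review is scored, and the aggregated results feed dashboards or alerts. The tiny lexicon scorer below is only a toy stand-in for Amazon Comprehend, which the real architecture would call, and every name here is illustrative:

```python
# Toy sketch of the streaming sentiment pattern. The lexicon scorer is a
# stand-in for Amazon Comprehend; a real pipeline would call Comprehend's
# sentiment detection API for each record instead.
from collections import Counter

POSITIVE = {"great", "love", "excellent", "fast", "helpful"}
NEGATIVE = {"bad", "slow", "broken", "terrible", "refund"}

def score_sentiment(text: str) -> str:
    """Classify one review as POSITIVE, NEGATIVE or NEUTRAL."""
    words = text.lower().split()
    balance = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if balance > 0:
        return "POSITIVE"
    if balance < 0:
        return "NEGATIVE"
    return "NEUTRAL"

def process_batch(records: list) -> Counter:
    """Aggregate sentiment counts for one micro-batch of stream records."""
    return Counter(score_sentiment(r) for r in records)

batch = [
    "Great service, love the fast delivery",
    "Terrible app, constantly broken",
    "Arrived on Tuesday",
]
print(process_batch(batch))  # Counter({'POSITIVE': 1, 'NEGATIVE': 1, 'NEUTRAL': 1})
```

In the session’s architecture, Kinesis Data Streams plays the role of `batch`, and Kinesis Data Analytics would run the per-record scoring continuously rather than in a loop.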

Patterns of Streaming Capabilities (Level 200)

11:30 AM – 12:00 PM PT | 2:30 PM – 3:00 PM ET

The use cases that lead us to use streaming capabilities largely fall into three broad patterns. From real-time hydration of data lakes to running machine learning models on streaming data, AWS Partner Infosys is on the front lines working with customers and will share their perspective on these patterns with representative case studies.

Wednesday 21st July

Put Out Fires Before They Become Wildfires: Find and Alert on Anomalies With ML Features in Amazon Elasticsearch Service (Level 200)

9:00 AM – 10:15 AM PT | 12:00 PM – 1:15 PM ET

Many organizations are using Amazon Elasticsearch Service as their centralized infrastructure for Operational Analytics, leveraging its near real-time analytics capabilities and visualizations to monitor and find issues in their critical infrastructure and applications. Coupling the power of search and ML, Amazon ES can alert users of anomalies and issues even faster, enabling them to head off problems before they become acute. In this session, you will learn:

1. Why Amazon ES is ideally suited for operational analytics.

2. What anomaly detection can do for you and why it is well suited to glean insight from vast datasets.

3. How to get started with anomaly detection in Amazon ES to identify and find problems faster in your log data.

Trace Analytics with Amazon Elasticsearch Service: Combine the Power of Log Analytics and Distributed Tracing in a Single Platform (Level 200)

10:15 AM – 11:00 AM PT | 1:15 PM – 2:00 PM ET

Traditional methods of collecting logs and metrics from individual components and services in a distributed application do not allow for end-to-end insights. With trace analytics in Amazon Elasticsearch Service, developers and IT Ops can easily troubleshoot performance and availability issues in distributed applications. In this session, we’ll discuss how this new feature works and allows for faster resolutions.

Analyzing Logs With Kinesis Data Firehose and Amazon Elasticsearch Service (Level 300)

11:00 AM – 12:00 PM PT | 2:00 PM – 3:00 PM ET

Using Amazon Elasticsearch Service for operational analytics has become an enterprise standard for many AWS customers. However, customers have asked for a fully managed approach to ingesting their logs into Amazon ES. In this session, you will learn how to use Kinesis Data Firehose to load your data into an Amazon ES endpoint in a VPC without having to create, operate, and scale your own ingestion and delivery infrastructure.

Thursday 22nd July

Deploy Lake House Architecture to Enable Self-Service Analytics with AWS Lake Formation (Level 300)

9:00 AM – 10:00 AM PT | 12:00 PM – 1:00 PM ET

Being data-driven requires ubiquitous access to data in a secure and governed way. In this session, you will learn how AWS Lake Formation makes it easy to build, manage, and secure your data lake. We will cover how to set up fine-grained access permissions enabling secure access to data from a wide range of services. We will also cover how to update and delete data using Governed Tables and automatically optimize data for better query performance.

Accelerate Apache Spark and Other Big Data Application Development With EMR Studio (Level 300)

10:00 AM – 11:00 AM PT | 1:00 PM – 2:00 PM ET

Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Apache Spark and Presto. In April, we released EMR Studio, a new integrated development environment for data scientists to develop, visualize, and debug applications written in R, Python, Scala, and PySpark. Join this session to learn how to use EMR and EMR Studio to accelerate your big data work.

Simplify Infrastructure Management With Amazon EMR on Amazon EKS (Level 300)

11:00 AM – 12:00 PM PT | 2:00 PM – 3:00 PM ET

As big data and analytics continue to grow, organizations are leveraging capabilities provided by the containers and Kubernetes ecosystem to build their cutting-edge data platforms. AWS launched Amazon EMR on Amazon EKS to help customers focus more on developing applications without worrying about operating the infrastructure. In this session, you’ll learn about the architecture design of Amazon EMR on Amazon EKS and see a live demo showing how to get started in minutes. AWS experts will also share best practices for setting up monitoring, logging, and security, and how to optimize for cost.