AWS re:Invent 2020 Keynote by Andy Jassy

AWS re:Invent is always exciting and inspiring. Because of the pandemic, this year’s re:Invent is 100% online. I watched the keynote speech and took notes. There are no mind-blowing new services this year (nothing equivalent to announcing Lambda or SageMaker). It feels like they are building more capability on existing services, focusing on IoT, storage, microservices, and ML/AI technologies.

Main takeaways

  • New container and serverless technology announcements.
  • Database technologies and analytics.
  • New IoT services.
  • Lighter on ML/AI technologies than the previous few years.
  • AWS is really getting into real-world engineering and IoT services – product development, material production, supply chain management, etc.
  • The bagging of Oracle and Microsoft continues when it comes to databases.
  • Heaps of updates to existing services.
  • Improvements to Amazon Connect.

New Services announced

I am probably missing a few services, but these are the ones I managed to take notes on.

  • AWS Lambda container image support
  • Amazon EKS Distro
  • AWS Proton: manage and deploy container microservices
  • io2 Block Express: the highest IOPS and throughput in the cloud
  • Babelfish

Run SQL Server applications on Aurora PostgreSQL with little to no code change
New translation capability to easily run SQL Server applications on Aurora PostgreSQL
Understands SQL Server’s proprietary dialect (T-SQL) and communications protocol (TDS)
Migrate the data with DMS, then update your application to point to Aurora instead of SQL Server
Open-source project (uses the Apache 2.0 license)
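
In practice the application change can be as small as the connection string, because Babelfish speaks TDS on the usual SQL Server port and accepts T-SQL directly. Here is a minimal sketch of that idea, assuming a Python app that uses pymssql; the endpoint, credentials and table names are placeholders, not anything from the keynote.

```python
import pymssql  # the same SQL Server driver the application already uses

# The only change for the app is the endpoint: Babelfish listens on the
# TDS port (1433 by default), so the driver and the T-SQL stay the same.
conn = pymssql.connect(
    server="my-aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder
    user="app_user",          # placeholder
    password="app_password",  # placeholder
    database="orders_db",     # placeholder
)

cursor = conn.cursor()
# Unchanged T-SQL, including SQL Server-specific syntax such as TOP
cursor.execute("SELECT TOP 10 order_id, total FROM dbo.orders ORDER BY created_at DESC")
for order_id, total in cursor:
    print(order_id, total)
conn.close()
```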

  • AWS Glue Elastic Views

Easily build materialised views that automatically combine and replicate data across multiple data stores
Write a little bit of SQL to copy data from each source data store into a target data store and create a materialised view
It will automatically update the data in the view as the source data changes
Target stores include Redshift, Elasticsearch, S3, DynamoDB, Aurora, and RDS

  • Amazon SageMaker Data Wrangler

Fastest way to prepare data for ML
Aggregate and prepare ML features – easy feature engineering
Imports and inspects data to identify the data types
Recommends transformations based on the data in the dataset
Applies transformations to features – you can see a preview in real time
Makes features available for inference in real time
You need to ensure consistency of the data between training and inference
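
Data Wrangler itself is driven from the SageMaker Studio UI rather than from code, but the transformations it recommends are the usual feature-engineering steps. A rough pandas sketch of that kind of step, with made-up column names, just to illustrate what “recommends and applies transformations” means in practice:

```python
import pandas as pd

# Hand-written equivalents of the kinds of transformations Data Wrangler
# recommends after inspecting a dataset (file and column names are made up).
df = pd.read_csv("customers.csv")

print(df.dtypes)  # inspect the inferred data types, as Data Wrangler does on import

df["signup_date"] = pd.to_datetime(df["signup_date"])      # parse a date column
df["age"] = df["age"].fillna(df["age"].median())           # impute missing values
df = pd.get_dummies(df, columns=["plan_type"])             # one-hot encode a category
df["tenure_days"] = (pd.Timestamp.now() - df["signup_date"]).dt.days  # derived feature

print(df.head())  # preview the result, much like the real-time preview in Studio
```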

  • Amazon SageMaker Feature Store

A new repository that makes it easy to store, update and share ML features
Purpose-built and accessible from SageMaker Studio
Easily name, organise, find and share features
Access features in batches or subsets
Low latency for inference
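
On the batch versus low-latency point: Feature Store keeps an offline store for training and an online store for inference, and the online store is read through a runtime API. A minimal boto3 sketch, assuming a feature group called customer-features already exists; the feature names and record identifier are placeholders.

```python
import boto3

# Runtime client for the online store (low-latency reads and writes).
featurestore_runtime = boto3.client("sagemaker-featurestore-runtime")

# Write (or update) a record in the online store.
featurestore_runtime.put_record(
    FeatureGroupName="customer-features",  # assumed to exist already
    Record=[
        {"FeatureName": "customer_id", "ValueAsString": "12345"},
        {"FeatureName": "tenure_days", "ValueAsString": "412"},
        {"FeatureName": "plan_type", "ValueAsString": "premium"},
    ],
)

# Low-latency read of a single record at inference time.
response = featurestore_runtime.get_record(
    FeatureGroupName="customer-features",
    RecordIdentifierValueAsString="12345",
)
print(response["Record"])
```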

  • Amazon SageMaker Pipelines

The first purpose-built, easy-to-use CI/CD service for ML.
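
For context, a pipeline is defined in the SageMaker Python SDK as a set of steps plus parameters, then upserted and started. A trimmed-down sketch below; the training image, role ARN and S3 paths are placeholders, and a real pipeline would add processing, evaluation and model-registration steps.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Pipeline parameter so the same pipeline can be rerun on new data.
input_data = ParameterString(name="InputData", default_value="s3://my-bucket/train/")

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest",  # placeholder
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",  # placeholder
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(s3_data=input_data)},
)

pipeline = Pipeline(
    name="my-ml-pipeline",
    parameters=[input_data],
    steps=[train_step],
    sagemaker_session=session,
)

pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # kick off an execution
```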

  • Amazon Connect Wisdom

A new capability that uses ML to give agents the product and service information they need to solve issues in real time.

  • Amazon Monitron

End-to-end equipment monitoring system to enable predictive maintenance.

  • Amazon Lookout for Equipment

Predictive maintenance using the sensor data from equipment you already have.

  • AWS Panorama appliance

New hardware appliance – uses your existing on-site cameras to add computer vision.

Business guests

Here are some of the guests who appeared in the keynote speech.

  • Boom – high-performance aircraft

Founded by Blake Scholl, who was an engineer at Amazon. He started flight lessons when he joined Amazon after graduating from uni – at the time, Amazon was building AWS
Supersonic airliner, twice as fast as today’s airliners: Tokyo to Seattle in about 4.5 hours, New York to London in about 3.5 hours
Partners with Japan Airlines, Rolls-Royce, and the US Air Force
All-in with AWS
Designs and tests the plane in the cloud – uses AWS services to simulate flight patterns. AWS enables smaller companies to design planes better. Uses AI/ML to improve the simulations, and can run simulations in parallel
100% carbon neutral
Aims to be the fastest while making supersonic travel affordable

  • Carrier

Supply chain company for perishable food and pharma products
Because the cold chain is hard to maintain, a third of food gets wasted ($1T worth) while 1 in 9 people go to bed hungry
$35 billion is lost in biopharma.
