By Jeff Barr | on 28 MAR 2017 | in Amazon Aurora

Here are the newest additions to Aurora: You can use this to build multi-region, highly available systems or to move the data closer to the user.
The destination region must include a DB Subnet Group that encompasses 2 or more Availability Zones.

These economical instances are a great fit for dev & test environments and for light production workloads.
Simplified Security
We want to make it as easy and simple as possible for you to use encryption to protect your data, whether it is at rest or in motion.

Database Engine Updates
The open source community and the vendors of commercial databases add features and produce new releases at a rapid pace, and we track their work very closely, aiming to update RDS as quickly as possible after each significant release.
By Jeff Barr | on 18 JAN 2017 | in Amazon Aurora, Amazon RDS

Migrating from one database engine to another can be tricky when the database is supporting an application or a website that is running 24×7. Without the option to take the database offline, an approach that is based on replication is generally the best solution.
Today we are launching a new feature that allows you to migrate from an Amazon RDS DB Instance for MySQL to Amazon Aurora by creating an Aurora Read Replica. After the replica has been set up, replication is used to bring it up to date with respect to the source.
Today we are launching two features that were announced at AWS re:Invent: spatial indexing and zero-downtime patching.

Spatial Indexing
Amazon Aurora already allows you to use the GEOMETRY data type to represent points and areas on a sphere.
You can create columns of this type and then use functions such as ST_Contains, ST_Crosses, and ST_Distance (and many others) to perform spatial queries. These queries are powerful, but can be inefficient to process at scale, limiting their usefulness for large data sets.
In order to allow you to build large-scale, location-aware applications using Aurora, you can now create a specialized, highly-efficient index on your spatial data. Aurora uses a dimensionally ordered space-filling curve (for you mathematical types) to make your retrievals fast, accurate, and scalable.
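Aurora does not document exactly which curve it uses, but a Z-order (Morton) curve is the classic example of a dimensionally ordered space-filling curve: interleaving the bits of the two coordinates produces a single key whose ordering roughly preserves spatial locality, so an ordinary b-tree over the keys can serve proximity queries. A minimal sketch (the function and bit width here are illustrative, not Aurora's implementation):

```python
def morton_encode(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y to produce a Z-order (Morton) key.

    Points that are close together in 2-D space tend to get numerically
    close keys, so a plain b-tree over the keys can answer range queries
    over spatial neighborhoods.
    """
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # even bit positions: x
        key |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions: y
    return key

# Neighboring grid cells map to nearby keys:
print(morton_encode(2, 3))  # 14
print(morton_encode(3, 3))  # 15
```

Walking the keys in order traverses the plane in the recursive "Z" pattern that gives the curve its name, which is what makes a one-dimensional index structure usable for two-dimensional data.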
The index uses a b-tree and delivers performance that is up to two orders of magnitude better than MySQL 5.7 (watch this segment of the Amazon Aurora Deep Dive video or review the presentation for details).

Zero-Downtime Patching
Although it is possible to maintain high availability by using read replicas and promotion, there’s always room to do better.
Our new zero-downtime patching feature allows Aurora instances to be updated in place, with no downtime and no effect on availability. The patching mechanism pauses while waiting for open SSL connections, active locks, pending transactions, and temporary tables to clear up.
Application sessions are preserved and the database engine restarts while the patch is in progress, leading to a transient drop in throughput (5 seconds or so). If no suitable time window becomes available, patching reverts to the standard behavior.
To learn more about how this works and how we implemented it, watch this segment of the Amazon Aurora Deep Dive video.

By Jeff Barr | on 30 NOV 2016 | in Amazon Aurora, AWS re:Invent, Launch

The feedback that we have received from our customers since then has been heart-warming.
Customers love the MySQL compatibility, the focus on high availability, and the built-in encryption. They count on the fact that Aurora is built around fault-tolerant, self-healing storage that allows them to scale from 10 GB all the way up to 64 TB without pre-provisioning.
This open source database (PostgreSQL) has been under continuous development for 20 years and has found a home in many enterprises and startups. Customers like the enterprise features (similar to those offered by SQL Server and Oracle), the performance benefits, and the geospatial objects associated with PostgreSQL.
They enabled WAL compression and aggressive autovacuum, both of which improve the performance of PostgreSQL on the workloads that they tested. David & Grant ran the standard PostgreSQL pgbench benchmarking tool.
Each data point ran for one hour, with the database recreated before each run. David and Grant are now collecting data for a more detailed post that they plan to publish in early 2017.
By Jeff Barr | on 23 NOV 2016 | in Amazon Aurora, Launch

Amazon Aurora already allows you to make your choice of five DB instance classes ranging from the db.r3.large (2 vCPUs and 15 GiB of RAM) up to the db.r3.8xlarge (32 vCPUs and 244 GiB of RAM). Today we are giving you a sixth choice, the new db.t2.medium DB instance class with 2 vCPUs and 4 GiB of RAM.
You can monitor the CPUCreditUsage and CPUCreditBalance metrics to track the usage and accumulation of credits over time.

On the other hand, opportunities to make the services work together are ever-present, and we have a number of them on our customer-driven roadmap.
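Returning to the burstable db.t2.medium for a moment: the credit mechanics can be illustrated with a toy minute-by-minute simulation. The baseline rate, vCPU count, and balance cap below are illustrative assumptions, not official db.t2.medium numbers:

```python
def simulate_credits(utilization, start=0.0, vcpus=2, baseline=0.2, cap=576):
    """Track a burstable instance's CPU-credit balance minute by minute.

    One credit = one vCPU running at 100% for one minute. The instance
    earns credits at its baseline rate and spends them in proportion to
    actual CPU use; the balance is clamped between zero and a cap.
    All rates here are hypothetical, for illustration only.
    """
    balance = start
    history = []
    for util in utilization:          # util: fraction of total CPU, 0.0-1.0
        earned = baseline * vcpus     # credits accrued this minute
        spent = util * vcpus          # credits consumed this minute
        balance = min(cap, max(0.0, balance + earned - spent))
        history.append(balance)
    return history

# Idle for 10 minutes, then a 5-minute burst at full CPU:
trace = simulate_credits([0.0] * 10 + [1.0] * 5)
```

In this model the balance builds up while the instance is idle and drains quickly during the burst, which is exactly the pattern the CPUCreditUsage and CPUCreditBalance metrics let you watch for on a real instance.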
For example, because Amazon Aurora is compatible with MySQL, it supports triggers on the INSERT, UPDATE, and DELETE operations. Stored procedures are scripts that can be run in response to the activation of a trigger.
This procedure (mysql.lambda_async), as the name implies, invokes your desired Lambda function asynchronously, and does not wait for it to complete before proceeding. As usual, you will need to give your Lambda function permission to access any desired AWS services or resources.
The data can be located in any AWS region that is accessible from your Amazon Aurora cluster and can be in text or XML form.

This helps to spread the read workload around and can lead to better performance and more equitable use of the resources available to each replica.
You can then reconnect to the reader endpoint in order to send your read queries to the other replicas in the cluster. In the unlikely event that an Availability Zone fails, applications that make use of the new endpoint will continue to send read traffic to the other replicas with minimal disruption.
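A client that uses the reader endpoint typically just reconnects when a connection drops, letting the endpoint route it to a healthy replica. A driver-agnostic sketch of that reconnect loop (the endpoint name is a hypothetical example, and `connect` is any callable your database driver provides):

```python
import time


def connect_with_retry(connect, endpoint, attempts=5, base_delay=0.05):
    """Open a connection to `endpoint`, retrying with exponential backoff.

    `connect` is any callable that returns a connection or raises
    ConnectionError on failure (e.g. a database driver's connect
    function); injecting it keeps this sketch driver-agnostic.
    """
    for attempt in range(attempts):
        try:
            return connect(endpoint)
        except ConnectionError:
            if attempt == attempts - 1:
                raise                                  # out of retries
            time.sleep(base_delay * (2 ** attempt))    # back off, then retry


# Hypothetical reader endpoint (not a real cluster):
READER = "mycluster.cluster-ro-123456789012.us-east-1.rds.amazonaws.com"
```

Because the endpoint itself chooses the replica, the application never needs to track which replicas exist or which Availability Zone just failed.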
With MySQL compatibility “on top” and the unique, cloud-native Aurora architecture underneath, we have a lot of room to innovate.

Parallel Read Ahead
The InnoDB storage engine used by MySQL organizes table rows and the underlying storage (disk pages) using the index keys.
However, as rows are updated, inserted, and deleted over time, the storage becomes fragmented, the pages are no longer physically sequential, and scans can slow down dramatically. InnoDB’s Linear Read Ahead feature attempts to deal with this fragmentation by bringing up to 64 pages into memory before they are actually needed.
With today’s update, Aurora is now a lot smarter about handling this very common situation. When Aurora scans a table, it logically (as opposed to physically) identifies and then performs a parallel prefetch of the additional pages.
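The two-step pattern — logically identify the pages a scan will need, then fetch them concurrently instead of one at a time — can be sketched with a thread pool. The page-fetch stub and worker count below are illustrative, not Aurora internals:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def fetch_page(page_id: int) -> str:
    """Stand-in for reading one page from the storage layer."""
    time.sleep(0.01)                      # simulated I/O latency
    return f"page-{page_id}"


def scan_with_prefetch(page_ids, workers=8):
    """Fetch all of the pages a scan will need in parallel, rather
    than waiting on each page's I/O in turn."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch_page, page_ids))
```

With eight workers, sixteen fragmented pages cost roughly two I/O round trips of wall-clock time instead of sixteen, which is the essence of why prefetching in parallel helps fragmented scans.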
The parallel prefetch takes advantage of Aurora’s replicated storage architecture (two copies in each of three Availability Zones) and helps to ensure that the pages in the database cache are relevant to the scan operation.

NUMA-Aware Scheduling
The largest DB Instance (db.r3.8xlarge) has two CPU chips and a feature commonly known as NUMA, short for Non-Uniform Memory Access.
On systems of this type, an equal fraction of main memory is directly attached to each CPU; memory attached to the local CPU can be accessed quickly and efficiently, while memory attached to the other CPU is slower to reach. Aurora now does a better job of scheduling threads across the CPUs in order to take advantage of this disparity in access times.
The threads no longer need to fight against each other for access to the less-efficient memory attached to the other CPUs. The performance improvement will be most apparent when you are making hundreds or thousands of connections to the same database instance.