AWS offers devs tiny autonomous vehicles for machine learning
Developers who use Amazon SageMaker may want to dip their toes into reinforcement learning, because they'll get to test their theories out on a 1/18th-scale, fully autonomous racecar.
AWS DeepRacer is an all-wheel-drive model car with monster truck tyres, an HD video camera, and onboard compute, and it is driven by reinforcement learning models.
In just a few lines of code, developers can use Amazon SageMaker to put their skills to the test, race their cars and models against other developers, and compete in the AWS DeepRacer League – an autonomous racing league that is open to everyone. AWS DeepRacer is available for pre-order today.
But autonomous racing vehicles are just the tip of a huge range of machine learning services that AWS has announced this week.
Thirteen new machine learning capabilities and services, spanning every layer of the machine learning stack, have been announced for developers.
"We want to help all of our customers embrace machine learning, no matter their size, budget, experience, or skill level," comments Amazon Machine learning vice president Swami Sivasubramanian.
"Today's announcements remove significant barriers to the successful adoption of machine learning, by reducing the cost of machine learning training and inference, introducing new SageMaker capabilities that make it easier for developers to build, train, and deploy machine learning models in the cloud and at the edge, and delivering new AI services based on our years of experience at Amazon.
Before developers embark on new projects, they need the power to run them.
Next week, developers will be able to launch the new Amazon Elastic Compute Cloud (EC2) P3dn.24xlarge GPU instances, which include eight NVIDIA V100 GPUs with 32GB of GPU memory each, fast local NVMe storage, 96 Intel "Skylake" vCPUs, and 100Gbps networking. The new P3dn.24xlarge instances are the most powerful machine learning training instances available in the cloud, allowing developers to train models with more data in less time.
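As a rough illustration, once the instances launch, a single P3dn.24xlarge could be started with a boto3 call like the one below (the AMI ID and key pair are placeholders, not real values):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a single p3dn.24xlarge training instance from a Deep Learning AMI.
# The AMI ID and key pair are placeholders; look up the current Deep
# Learning AMI for your region before running this.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="p3dn.24xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair name
)
```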
• AWS-Optimized TensorFlow framework (generally available today): The AWS-Optimized TensorFlow framework distributes training across many GPUs to achieve close to linear scalability when training multiple types of neural networks (90% efficiency across 256 GPUs, compared to the prior norm of 65%).
Using the new AWS-Optimized TensorFlow and P3dn instances, developers can now train the popular ResNet-50 model in only 14 minutes, the fastest time recorded, and 50 percent faster than the previous best time. And, these optimisations are generally applicable not just for computer vision models but also for a broader set of deep learning models.
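As a sketch of what that looks like in practice, distributed training across eight P3dn instances with Horovod might be launched like this. The training script, IAM role, and S3 paths are placeholders, and the parameter names follow the v1 SageMaker Python SDK:

```python
from sagemaker.tensorflow import TensorFlow

# Sketch only: script, role, and data locations are placeholders.
estimator = TensorFlow(
    entry_point="train_resnet50.py",                      # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    train_instance_count=8,
    train_instance_type="ml.p3dn.24xlarge",
    framework_version="1.12",
    py_version="py3",
    # Enable Horovod/MPI distributed training in script mode.
    distributions={"mpi": {"enabled": True, "processes_per_host": 8}},
)
estimator.fit("s3://my-bucket/imagenet/")                 # placeholder dataset location
```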
• Amazon Elastic Inference (generally available today): Amazon Elastic Inference allows developers to dramatically decrease inference costs with up to 75% savings when compared to the cost of using a dedicated GPU instance. Instead of running on a whole Amazon EC2 P2 or P3 instance with relatively low utilisation, developers can run on a smaller, general-purpose Amazon EC2 instance and provision just the right amount of GPU performance from Amazon Elastic Inference.
Starting at 1 TFLOP, developers can elastically increase or decrease the amount of inference performance, and only pay for what they use. Elastic Inference supports all popular frameworks, and is integrated with Amazon SageMaker and the Amazon EC2 Deep Learning Amazon Machine Image (AMI). And, developers can start using Amazon Elastic Inference without making any changes to their existing models.
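A minimal sketch of how that looks at deploy time, assuming a trained SageMaker estimator such as the one in the earlier example (the instance and accelerator sizes are purely illustrative):

```python
# 'estimator' is assumed to be an already-trained SageMaker estimator.
# Instead of a dedicated GPU instance, host on a small general-purpose
# instance and attach an Elastic Inference accelerator.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",        # small general-purpose instance
    accelerator_type="ml.eia1.medium",  # Elastic Inference accelerator
)
```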
• AWS Inferentia (available in 2019): AWS Inferentia is a high-performance machine learning inference chip designed for larger workloads that consume entire GPUs or require lower latency.
AWS Inferentia provides hundreds of teraflops per chip and thousands of teraflops per Amazon EC2 instance for multiple frameworks (including TensorFlow, Apache MXNet, and PyTorch), and multiple data types (including INT8 and mixed-precision FP16 and bfloat16).
AWS has also boosted its machine learning services. Amazon SageMaker, a fully managed service that removes guesswork from the machine learning process, now makes it easier for developers to build, train, tune, and deploy machine learning models.
• Amazon SageMaker Ground Truth (generally available today): This makes it easier for developers to label their data using human annotators through Mechanical Turk, third party vendors, or their own employees. Amazon SageMaker Ground Truth learns from these annotations in real time and can automatically apply labels to much of the remaining dataset, reducing the need for human review.
Amazon SageMaker Ground Truth creates highly accurate training data sets, saves time and complexity, and reduces costs by up to 70 percent when compared to human annotation.
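For illustration, a labelling job can be started with the boto3 create_labeling_job call. Every ARN, bucket path, and file name below is a placeholder, and real jobs need the region-specific pre- and post-annotation Lambda ARNs from the Ground Truth documentation:

```python
import boto3

sm = boto3.client("sagemaker")

sm.create_labeling_job(
    LabelingJobName="product-image-labels",
    LabelAttributeName="label",
    InputConfig={
        "DataSource": {
            "S3DataSource": {
                "ManifestS3Uri": "s3://my-bucket/input.manifest"   # placeholder manifest
            }
        }
    },
    OutputConfig={"S3OutputPath": "s3://my-bucket/labels/"},        # placeholder output path
    RoleArn="arn:aws:iam::123456789012:role/GroundTruthRole",       # placeholder role
    LabelCategoryConfigS3Uri="s3://my-bucket/categories.json",      # placeholder category file
    HumanTaskConfig={
        "WorkteamArn": "arn:aws:sagemaker:...:workteam/private-crowd/my-team",  # placeholder
        "UiConfig": {"UiTemplateS3Uri": "s3://my-bucket/template.liquid"},      # placeholder
        "PreHumanTaskLambdaArn": "arn:aws:lambda:...",               # region-specific, see docs
        "TaskTitle": "Label product images",
        "TaskDescription": "Choose the category that best fits each image",
        "NumberOfHumanWorkersPerDataObject": 3,
        "TaskTimeLimitInSeconds": 300,
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:..."  # region-specific, see docs
        },
    },
)
```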
• AWS Marketplace for Machine Learning (generally available today): The new AWS Marketplace for Machine Learning includes over 150 algorithms and models (with more coming every day) that can be deployed directly to Amazon SageMaker and used immediately. Adding a listing is completely self-service for developers who want to sell through AWS Marketplace.
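Deploying a subscribed Marketplace model package from the SageMaker Python SDK might look like this (the ARNs are placeholders for whatever package a developer has subscribed to):

```python
from sagemaker import ModelPackage

# Wrap a subscribed Marketplace model package; ARNs are placeholders.
model = ModelPackage(
    role="arn:aws:iam::123456789012:role/SageMakerRole",                       # placeholder role
    model_package_arn="arn:aws:sagemaker:us-east-1:123456789012:model-package/example",  # placeholder
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```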
• Amazon SageMaker RL (generally available today): The cloud's first managed reinforcement learning service lets any developer build, train, and deploy models with reinforcement learning, through managed reinforcement learning algorithms, support for multiple frameworks (including Intel Coach and Ray RL), multiple simulation environments (including Simulink and MATLAB), and integration with AWS RoboMaker, AWS's new robotics service, which provides a simulation platform that integrates well with SageMaker RL.
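A hedged sketch of a SageMaker RL training job using the Coach toolkit, based on the v1 SageMaker Python SDK (the entry-point script and role are placeholders):

```python
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework

# Sketch only: the training script and role are placeholders.
estimator = RLEstimator(
    entry_point="train_cartpole.py",                      # hypothetical RL training script
    toolkit=RLToolkit.COACH,
    toolkit_version="0.11.0",
    framework=RLFramework.TENSORFLOW,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    train_instance_count=1,
    train_instance_type="ml.c5.2xlarge",
)
estimator.fit()
```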
• Amazon SageMaker Neo (generally available today): This compiles models for specific hardware platforms, optimising them automatically so they run with up to twice the performance and no loss in accuracy. As a result, developers no longer need to spend time hand-tuning their trained models for each and every hardware platform (saving time and expense). SageMaker Neo supports hardware platforms from NVIDIA, Intel, Xilinx, Cadence, and Arm, and popular frameworks such as TensorFlow, Apache MXNet, and PyTorch. AWS will also make Neo available as an open source project.
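A compilation job can also be started directly through boto3; the model location, role, and target device below are illustrative only:

```python
import boto3

sm = boto3.client("sagemaker")

# Compile a trained MXNet model for an edge device; paths and role are placeholders.
sm.create_compilation_job(
    CompilationJobName="resnet50-neo",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder role
    InputConfig={
        "S3Uri": "s3://my-bucket/model.tar.gz",               # placeholder trained model
        "DataInputConfig": '{"data": [1, 3, 224, 224]}',      # input tensor shape
        "Framework": "MXNET",
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-bucket/compiled/",       # placeholder output path
        "TargetDevice": "jetson_tx2",                         # example edge target
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)
```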
Finally, for developers who want to build intelligent features into their applications but don't have machine learning experience, AWS has significantly expanded its AI services.
• Amazon Textract (available in preview today): This uses machine learning to instantly read virtually any type of document to accurately extract text and data without the need for any manual review or custom code. Amazon Textract allows developers to quickly automate document workflows, processing millions of document pages in a few hours.
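A minimal sketch of extracting text from a scanned page with boto3 (the file name is a placeholder):

```python
import boto3

textract = boto3.client("textract")

# Read a local scanned document and extract its text line by line.
with open("invoice.png", "rb") as f:                       # placeholder document
    response = textract.detect_document_text(Document={"Bytes": f.read()})

for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```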
• Amazon Comprehend Medical (generally available today): This is a highly accurate natural language processing service for medical text, which uses machine learning to extract disease conditions, medications, and treatment outcomes from patient notes, clinical trial reports, and other electronic health records. Comprehend Medical requires no machine learning expertise, no complicated rules to write, no models to train, and it is continuously improving. You pay only for what you use and there are no minimum fees or upfront commitments.
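For example, a short clinical note can be analysed in a few lines with boto3 (detect_entities_v2 is the current call in the SDK; the note itself is invented for illustration):

```python
import boto3

cm = boto3.client("comprehendmedical")

note = "Patient reports shortness of breath; prescribed 40mg furosemide daily."  # invented example
result = cm.detect_entities_v2(Text=note)

# Print each detected medical entity with its category and type.
for entity in result["Entities"]:
    print(entity["Category"], entity["Type"], entity["Text"])
```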
• Amazon Personalize (available in preview today): Based on the same technology that powers Amazon.com, Amazon Personalize is a real-time recommendation and personalisation service. Experience has shown that there is no master algorithm for personalisation. Each use case, whether videos, music, products, or news articles, has its own specificities, which require a unique mix of data, algorithms, and optimisations.
Amazon Personalize provides this experience to customers in a fully managed service, which will build, train, and deploy custom, private personalisation and recommendation models for virtually any use case. Amazon Personalize can make recommendations, personalise search results, and segment customers for direct and personalised marketing through email or push notifications.
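Once a campaign has been trained and deployed, fetching recommendations is a single runtime call; the campaign ARN and user ID below are placeholders:

```python
import boto3

runtime = boto3.client("personalize-runtime")

# Retrieve up to 10 recommended items for a user from a deployed campaign.
response = runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/my-campaign",  # placeholder
    userId="user-123",                                                              # placeholder
    numResults=10,
)
for item in response["itemList"]:
    print(item["itemId"], item.get("score"))
```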
• Amazon Forecast (available in preview today): Like Amazon Personalize, Amazon Forecast is based on technology developed at Amazon.com and used for a great deal of critical forecasting. Forecasting is hard to do well because there are often so many inter-related factors (such as pricing, events, and even the weather).
Missing the mark with a forecast can have a significant impact, such as being unable to meet customer demand or significantly over-spending. Amazon Forecast creates accurate time-series forecasts. Using historical data and related causal data, Amazon Forecast will automatically train, tune, and deploy custom, private machine learning forecasting models, so that customers can be more confident that they'll provide the right customer experience while optimising their spend.
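Querying a trained forecast for a single item is similarly a one-call affair with boto3 (the forecast ARN and item ID are placeholders):

```python
import boto3

forecast_query = boto3.client("forecastquery")

# Fetch the median (p50) forecast for one item from a trained forecast.
response = forecast_query.query_forecast(
    ForecastArn="arn:aws:forecast:us-east-1:123456789012:forecast/my-forecast",  # placeholder
    Filters={"item_id": "sku-42"},                                               # placeholder item
)
for point in response["Forecast"]["Predictions"]["p50"]:
    print(point["Timestamp"], point["Value"])
```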
Customers using these new services and capabilities include Adobe, BMW, Cathay Pacific, Dow Jones, Expedia, Formula 1, GE Healthcare, HERE, Intuit, Johnson & Johnson, Kia Motors, Lionbridge, Major League Baseball, NASA JPL, Politico.eu, Ryanair, Shell, Tinder, United Nations, Vonage, the World Bank, and Zillow.