Author: Blair Hanley Frank

Here’s how Google is preparing Android for the AI-laden future

The future of Android will be a lot smarter, thanks to new programming tools that Google unveiled on Wednesday. The company announced TensorFlow Lite, a version of its machine learning framework that’s designed to run on smartphones and other mobile devices, during the keynote address at its Google I/O developer conference.

“TensorFlow Lite will leverage a new neural network API to tap into silicon-specific accelerators, and over time we expect to see [digital signal processing chips] specifically designed for neural network inference and training,” said Dave Burke, Google’s vice president of engineering for Android. “We think these new capabilities will help power a next generation of on-device speech processing, visual search, augmented reality, and more.”

The Lite framework will be made a part of the open source TensorFlow project soon, and the neural network API will come to the next major release of Android later this year.

The framework has serious implications for what Google sees as the future of mobile hardware. AI-focused chips could make it possible for smartphones to handle more advanced machine learning computations without consuming as much power. As more applications use machine learning to deliver intelligent experiences, making that work practical on the device itself becomes key.
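Google hadn't published TensorFlow Lite's internals at the time of the announcement, but the core trick behind efficient on-device inference, storing 32-bit float weights as 8-bit integers, can be sketched in plain Python. The affine scale/zero-point scheme below is a common approach and an illustration only, not necessarily what TensorFlow Lite ships:

```python
# Linear (affine) quantization: map float32 weights to 0..255 and back.
# This is the general technique mobile ML runtimes use to shrink models
# and speed up inference; the exact scheme in TensorFlow Lite may differ.

def quantize(weights):
    """Map a list of floats onto 0..255 with a scale and zero point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi != lo else 1.0
    zero_point = round(-lo / scale)
    # Clamp to the valid 8-bit range after rounding.
    return [min(255, max(0, round(w / scale) + zero_point)) for w in weights], scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the 8-bit representation."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Each restored weight is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

The payoff on a phone is that the weights take a quarter of the memory, and integer arithmetic is what DSPs and other accelerators are built for.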

Right now, building advanced machine learning into applications, especially when it comes to training models, demands computational power that typically calls for beefy hardware, plenty of time and plenty of energy. That's not practical for consumer smartphone applications, so they often offload the processing to massive data centers, sending images, text and other data over the internet.

Processing that data in the cloud comes with several downsides, according to Patrick Moorhead, principal analyst at Moor Insights and Strategy: Users must be willing to transfer their data to a company’s servers, and they have to be in an environment with rich enough connectivity to make sure the operation is low-latency.

There’s already one mobile processor with a machine learning-specific DSP on the market today. The Qualcomm Snapdragon 835 system-on-a-chip sports the Hexagon DSP that supports TensorFlow. DSPs are also used for providing functionality like recognizing the “OK, Google” wake phrase for the Google Assistant, according to Moorhead.

Users should expect to see more machine learning acceleration chips in the future, Moorhead said. “Ever since Moore’s Law slowed down, it’s been a heterogeneous computing model,” he said. “We’re using different kinds of processors to do different types of things, whether it’s a DSP, whether it’s a [field-programmable gate array], or whether it’s a CPU. It’s almost like we’re using the right golf club for the right hole.”

Google is already investing in ML-specific hardware with its line of Tensor Processing Unit chips. On Wednesday, the company announced the second version of that hardware[2], which is designed to accelerate both the training of new machine learning models and inference using existing ones.

The company is also not the only one with a smartphone-focused machine learning framework. Facebook showed off a mobile-oriented ML framework called Caffe2Go last year[3], which is used to power applications like the company’s live style transfer feature.

To comment on this article and other PCWorld content, visit our Facebook[4] page or our Twitter[5] feed.

References

  2. ^ second version of that hardware (www.infoworld.com)
  3. ^ last year (www.pcworld.idg.com.au)
  4. ^ Facebook (www.facebook.com)
  5. ^ Twitter (twitter.com)

Google adds smart reply to Gmail for iOS, Android

Google is making it easier for people to dash off a quick email reply from Gmail on their smartphones. The Smart Reply feature, which offers a handful of contextually aware, computer-generated responses, is coming to Google's flagship email app for iOS and Android, the company announced at its I/O developer conference Wednesday.

The feature offers users three machine-generated responses based on the content of the message being answered. Built with machine learning and designed for smartphones, it lets people on the go answer their correspondence with minimal effort.

Smart Reply began its life as part of Inbox, Google’s alternate email client for smartphones. Right now, 12 percent of all email replies sent through that app are Smart Replies.

The feature also showcases Google's machine learning chops, since that technology powers the system behind Smart Reply.
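Google's production system uses learned sequence models, but the shape of the feature can be sketched with a deliberately simple stand-in: rank a fixed pool of canned replies by word overlap with the incoming message and surface the top three. The candidate pool and scoring below are illustrative only, not Google's method:

```python
# Illustrative only: Smart Reply in production uses learned sequence models.
# This toy version ranks a fixed pool of canned replies by word overlap
# with the incoming message and returns the top three, mimicking the
# three-suggestion interface described above.

CANNED_REPLIES = [
    "Sounds good, see you then!",
    "Thanks for the update.",
    "Can we reschedule the meeting?",
    "I'll take a look and get back to you.",
    "Happy to help!",
]

def suggest_replies(message, candidates=CANNED_REPLIES, k=3):
    msg_words = set(message.lower().split())
    def score(reply):
        # Count words the reply shares with the incoming message.
        return len(msg_words & set(reply.lower().split()))
    return sorted(candidates, key=score, reverse=True)[:k]

suggestions = suggest_replies("Are you free for the meeting tomorrow?")
assert len(suggestions) == 3
```

The real system's advantage is that it generates and ranks replies with neural networks trained on vast amounts of mail, so it isn't limited to a hand-written pool.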

At first, Smart Reply will be available globally in English for users of the Gmail app for iOS and Android. In the coming weeks, Google will also make it available for users who write in Spanish, with other languages coming after that.

Wednesday’s announcement is one of many expected from Google’s I/O developer conference in Mountain View, California. One of the key themes this year is the tech titan’s embrace of machine learning. Google CEO Sundar Pichai also announced a new Tensor Processing Unit that’s designed to speed up computation for machine intelligence training and inference.


Google’s new TPUs are here to accelerate AI training

Google has made another leap forward in the realm of machine learning hardware. The tech giant has begun deploying the second version of its Tensor Processing Unit, a specialized chip meant to accelerate machine learning applications, company CEO Sundar Pichai announced on Wednesday.

The new Cloud TPU sports several improvements over its predecessor. Most notably, it supports training machine learning algorithms in addition to processing the results from existing models. Each chip can provide 180 teraflops of processing for those tasks. Google is also able to network the chips together in sets of what are called TPU Pods that allow even greater computational gains.

Businesses will be able to use the new chips through Google’s Cloud Platform, though the company hasn’t provided exact details on what form those services will take. In addition, the company is launching a new TensorFlow Research Cloud that will provide researchers with free access to that hardware if they pledge to publicly release the results of their research.

It’s a move that has the potential to drastically accelerate machine learning. Google says its latest machine translation model takes a full day to train on 32 of the highest-powered modern GPUs, while an eighth of a TPU Pod can do the same task in an afternoon.
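Those figures are easy to ballpark. According to Google, a full TPU Pod links 64 of the chips, so an eighth of a pod works out to 8 chips; the arithmetic below uses the company's stated peak numbers, which won't reflect sustained performance on any given model:

```python
# Back-of-the-envelope aggregate throughput for the Cloud TPU figures above.
# The 64-chip pod size is Google's reported configuration; treat these as
# peak numbers, not sustained performance on any particular workload.

chip_tflops = 180          # per second-generation Cloud TPU
chips_per_pod = 64         # reported full TPU Pod
pod_tflops = chip_tflops * chips_per_pod
eighth_pod_tflops = pod_tflops // 8

print(pod_tflops)          # 11520 teraflops, i.e. ~11.5 petaflops per pod
print(eighth_pod_tflops)   # 1440 teraflops across 8 chips
```

So the "afternoon versus a full day" comparison pits roughly 1,440 peak teraflops of TPU hardware against 32 high-end GPUs.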

Machine learning has become increasingly important for powering the next generation of applications. Accelerating the creation of new models means that it’s easier for companies like Google to experiment with different approaches to find the best ones for particular applications.

Google’s new hardware can also serve to attract new customers to its cloud platform, at a time when the company is competing against Microsoft, Amazon, and other tech titans. The Cloud TPU announcement comes a year after Google first unveiled the Tensor Processing Unit[1] at its I/O developer conference. 

Programming algorithms that run on TPUs will require the use of TensorFlow, the open source machine learning framework that originated at Google. TensorFlow 1.2 includes new high-level APIs that make it easier to take systems built to run on CPUs and GPUs and also run them on TPUs. Makers of other machine learning frameworks like Caffe can make their tools work with TPUs by designing them to call TensorFlow APIs, according to Google Senior Fellow Jeff Dean.

Dean wouldn't elaborate on concrete performance metrics for the Cloud TPU beyond the chips' peak teraflops. A recent Google research paper[2] pointed out that different algorithms perform differently on the original TPU, and it's unclear whether the Cloud TPU behaves similarly.

Google isn’t the only company investing in hardware to help with machine learning. Microsoft is deploying field-programmable gate arrays in its data centers to help accelerate its intelligent applications.

References

  1. ^ first unveiled the Tensor Processing Unit (www.cio.com)
  2. ^ recent Google research paper (www.pcworld.com)

Microsoft’s new tools help devs manage cloud deployments on the go

Microsoft is making it easier for developers to manage their cloud deployments on the go, using a new mobile app and browser-based command line.

On Wednesday, the company unveiled Azure Cloud Shell[1], which lets developers spin up a full-fledged terminal environment inside Microsoft’s cloud and comes with a set of preconfigured tools for managing deployments. Each user will have persistent file storage in their Cloud Shell, hosted in Microsoft Azure.

Cloud Shells are accessible through the Microsoft Azure web portal, as well as the Azure mobile app for iOS and Android, which was just released Wednesday. That app also provides users with the ability to monitor the workloads they have running in Microsoft’s public cloud and perform basic management like stopping and restarting virtual machines.

Those two tools allow developers and administrators to get more work done, even if they’re not at their personal computer. The mobile app, in particular, means that an administrator could fix an issue with their Azure deployment while out at dinner.

Azure Cloud Shell is similar to Google’s Cloud Shell product, which was released last year[2] and also offers developers on Google Cloud Platform the ability to access a Linux terminal environment with persistent storage from any web browser.

One of the key benefits with Cloud Shell is that the Azure command line tools are preconfigured for users, so they can write commands in the cloud without having to set up their environment.

Right now, developers can use the popular bash Linux shell with Cloud Shell, and Microsoft’s PowerShell will be coming to the service soon.

References

  1. ^ Azure Cloud Shell (azure.microsoft.com)
  2. ^ released last year (cloudplatform.googleblog.com)

Microsoft brings customization to its pre-built AI services

Microsoft is doubling down on its cloud AI services for business customers with a fleet of new offerings aimed at helping companies deal with video and unique problems not solved by its off-the-shelf cognitive services.

Services announced Wednesday include a new Video Indexer tool that provides customers with automated captioning, sentiment analysis, custom face recognition, object detection, optical character recognition and keyword extraction for videos they provide. The tool is built on existing Microsoft services, but gives customers an easier way to process large amounts of video for indexing and analysis, rather than requiring manual work by humans.

Also new is a custom image recognition service that allows users to take Microsoft’s existing tools for detecting objects and teach them to recognize other things that aren’t generally applicable. For example, manufacturers could use the service to identify different types of parts that Microsoft’s off-the-shelf image recognition service couldn’t recognize, according to Irving Kwong, a principal product manager in the company’s artificial intelligence group.

Other new offerings in that vein include the Bing Custom Search Service, which lets companies embed custom web search on their sites; a Gesture service that’s designed to help businesses build gesture-recognition tools; and the Custom Decision Service, which is designed to automate choices between different content.

All of these new functions are aimed at helping non-experts use machine-learning capabilities to improve their businesses. They build on an existing suite of Microsoft Cognitive Services that include image recognition, language understanding and other capabilities.

The idea of customizable but easy-to-use tools for solving problems like image recognition isn't a new one in the tech industry. Salesforce has its own offering for image recognition, while Rekognition from Amazon Web Services is used to power face detection of politicians in C-SPAN footage.

The Video Indexer should help organizations more easily understand the content of videos that they own. When users upload a file, the service takes a few minutes to process it, and then will provide information like a transcript of the footage, faces present, a graph of sentiment throughout the video and a set of keywords extracted from the content.
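The real service relies on Microsoft's trained models, but the keyword-extraction step can be illustrated with the simplest possible baseline: term frequency over the transcript with common stopwords removed. Everything below is a toy stand-in, not the Video Indexer's actual pipeline:

```python
# A toy stand-in for the keyword-extraction step described above: the real
# Video Indexer uses Microsoft's trained models, but the simplest baseline
# is term frequency over a transcript with common stopwords filtered out.

from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that", "we", "on"}

def keywords(transcript, k=3):
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

transcript = ("Azure makes machine learning easy. Machine learning on Azure "
              "scales to large machine learning workloads.")
print(keywords(transcript))  # ['machine', 'learning', 'azure']
```

A production system layers on much more, such as phrase detection and models of which terms matter, but the input and output have the same shape: a transcript in, a ranked keyword list out.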

The facial recognition feature is particularly interesting, since it will automatically detect particular celebrities (a demo showed it automatically identifying Microsoft Executive Vice President Scott Guthrie) and also remember user-defined individuals across multiple videos.

Because the Indexer automatically transcribes the contents of a video, users can then use that to create translated captions, based on Microsoft’s machine translation capabilities.

Microsoft’s Bing Custom Search service comes at exactly the right time for enterprises. Google announced that its Site Search service will no longer work after April 1 of next year, which means businesses that previously relied on it are looking for other options.

Bing Custom Search lets businesses build domain-specific search engines that don’t carry Microsoft branding and which can be embedded on their websites. It’s a move by the company to take what it’s learned from running a massive, public search engine and apply it to the enterprise.

The Custom Decision Service is somewhat difficult to explain. It’s designed to take in a set of choices and provide users with the ones it thinks will work best, based on a technique called reinforcement learning that’s designed to teach machines how to optimize for certain behaviors.

Microsoft uses the technology behind the Custom Decision Service to do things like determine which ads to show on Bing and what content to display on MSN.com. Users will be able to do the same.
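A minimal sketch of that idea is an epsilon-greedy bandit, one of the simplest reinforcement-learning loops: mostly show the option with the best observed click rate, but occasionally try a random one to keep learning. The simulation below is illustrative only and bears no relation to Microsoft's implementation:

```python
# Minimal epsilon-greedy bandit: the kind of reinforcement-learning loop a
# decision service runs to learn which content option draws the most clicks.
# An illustrative sketch, not Microsoft's Custom Decision Service.

import random

def run_bandit(click_rates, rounds=5000, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    n = len(click_rates)
    pulls, wins = [0] * n, [0] * n
    for _ in range(rounds):
        if rng.random() < epsilon:
            # Explore: show a random option to keep estimates fresh.
            arm = rng.randrange(n)
        else:
            # Exploit: show the option with the best observed click rate.
            arm = max(range(n), key=lambda i: (wins[i] / pulls[i]) if pulls[i] else 0.0)
        pulls[arm] += 1
        wins[arm] += rng.random() < click_rates[arm]  # simulated click
    return pulls

# Option 2 has the highest true click rate, so it should be shown most often.
pulls = run_bandit([0.02, 0.05, 0.12])
assert pulls[2] == max(pulls)
```

Production systems like the one described above typically use contextual bandits, which also condition the choice on who is looking at the page, but the explore-versus-exploit loop is the same.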

All of this news comes as part of Microsoft’s Build developer conference, which is taking place in Seattle this week.
