What are the differences between the AI conferences of the three giants Google, Intel, and Microsoft?

This May was no ordinary month: Google, Intel, and Microsoft each chose to hold its own AI conference, showing off its AI strength and strategic positioning.

As a leader in the AI field, Google presented the four directions of its AI strategy at the I/O conference, demonstrating its determination to go all-in on AI.

Direction 1: Bring AI technology into Google applications, with core products first. Over the past year we have seen Google Translate's AI-driven quality approach human level. At this year's I/O conference, Google showed how Gboard (Google's keyboard), Gmail, and Google Photos have changed with AI support. Going forward, Google will surely push AI technology into all of its applications; because it develops different AI techniques for different applications, core products get priority.

Direction 2: Bring AI technology into the Android system, making AI standard on smartphones. Android P, the next-generation Android system Google unveiled at I/O, incorporates AI technology and will ship this summer. In addition, mobile processor vendors are adding AI features to mid-to-high-end chips; together, these two trends doubly ensure that AI becomes standard on smartphones.

Direction 3: Stick with self-developed AI chips and build a cloud-platform ecosystem. At I/O, Google introduced TPU 3.0: compared with version 2.0, performance improved eightfold, reaching up to 100 petaflops.

Google is pushing hard to expand its cloud-platform business, competing fiercely with Amazon AWS and Microsoft Azure, and a strong AI chip is seen as a winning factor: Google hopes the TPU's AI capabilities will draw users into its ecosystem.

Direction 4: Give every Google software and hardware product Google Assistant support. Google's vision is for all of its software and hardware products to be powered by Google Assistant. At I/O, Google demonstrated the Assistant booking a restaurant reservation by phone, showing that it can truly become an assistant in daily life.

At Microsoft's Build developer conference, Microsoft released a preview of Project Brainwave, a hardware architecture that accelerates real-time AI computation, and integrated it into Azure Machine Learning services. Project Brainwave is deployed on Intel FPGAs.

According to industry sources, Microsoft believes machine learning is evolving rapidly, so burning today's algorithms into fixed silicon may be unwise: they could soon be obsolete. With programmable FPGA chips, the latest algorithms can be loaded at any time to implement AI functions. This way, Microsoft does not need to design its own server chips; it can buy FPGAs directly from Intel and implement AI acceleration and other functions through software programming.

At its 2018 artificial intelligence conference in Beijing, Microsoft focused on its new "worldview": the intelligent cloud and the intelligent edge.

The Azure ecosystem, consisting of the public cloud Azure, the hybrid cloud Azure Stack, and the IoT offerings Azure IoT Edge and Azure Sphere, integrates the intelligent cloud and the intelligent edge.

Microsoft's AI hardware strategy, then, is a bet on FPGAs, the intelligent cloud, and the intelligent edge.

As a hardware manufacturer, Intel's AI layout centers on hardware. At its first AI Developer Conference, AIDC 2018, Intel announced a new cloud AI chip, the NNP (Neural Network Processor), code-named "Spring Crest". Its power consumption is reported to be under 210 watts, with a 3-4x training performance increase over the previous-generation Lake Crest; the chip provides hardware support for cloud training.

Beyond cloud AI capabilities, Intel also highlighted the Movidius neural compute stick launched last year. With its integrated DNN accelerator, it can deliver more than one trillion operations per second for edge inference.

Naveen Rao, Intel vice president and general manager of its AI products group, sought to position Intel and its flagship Xeon server processors as leaders in neural network training and inference. Intel reportedly will extend the bfloat16 numerical format for neural networks to Xeon processors and FPGAs. Rao noted that this is part of a coherent, comprehensive strategy to give Intel's entire chip portfolio AI training capability.
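For readers unfamiliar with bfloat16: it keeps float32's 8-bit exponent (and thus its dynamic range) but truncates the mantissa to 7 bits, which is why it suits training workloads. The sketch below, a purely illustrative example using NumPy bit manipulation (not any Intel API), shows the conversion by truncating the top 16 bits of a float32:

```python
import numpy as np

def float32_to_bfloat16_bits(x) -> np.uint16:
    """Truncate a float32 to bfloat16 by keeping its top 16 bits:
    the sign bit, the full 8-bit exponent, and 7 mantissa bits."""
    bits = np.float32(x).view(np.uint32)   # reinterpret the 32 bits as an integer
    return np.uint16(int(bits) >> 16)      # drop the low 16 mantissa bits

def bfloat16_bits_to_float32(b) -> np.float32:
    """Re-expand bfloat16 bits to float32 by zero-filling the low 16 bits."""
    return np.uint32(int(b) << 16).view(np.float32)

# Round-tripping loses precision but preserves sign, exponent, and magnitude:
x = np.float32(3.14159265)
y = bfloat16_bits_to_float32(float32_to_bfloat16_bits(x))  # ~3.14, not exact
```

Because the exponent field is unchanged, overflow behavior matches float32; only precision is sacrificed, and gradients in training typically tolerate that trade-off.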

Comparing the Big Three's hardware strategies: for Google, Microsoft, and Intel, we compare how each is competing for the AI cloud market.

First, some background: the two main phases of a neural network's life are training and inference. Training usually means fitting a complex deep neural network model from large amounts of input data, or through unsupervised methods such as reinforcement learning. It is so compute-intensive that it can only be done in the cloud, where GPUs and ASICs (Google's TPU 1.0/2.0) are currently up to the job. The true test of AI capability is therefore cloud training; device-side inference is child's play by comparison. Of course, inference can also run in the cloud, where FPGAs serve as a key acceleration chip.
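The training/inference split above can be made concrete with a toy model. This is a minimal, vendor-neutral sketch in plain NumPy: the training loop makes many gradient-descent passes over the data (the expensive phase done in the cloud), while inference is a single cheap forward pass (what a device can do):

```python
import numpy as np

# Toy dataset: learn y = 2x + 1 from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=256)
y = 2.0 * X + 1.0 + rng.normal(0, 0.01, size=256)

# --- Training phase (compute-heavy: hundreds of passes over the data) ---
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    err = (w * X + b) - y                # forward pass + error
    w -= lr * 2.0 * np.mean(err * X)     # gradient of MSE loss w.r.t. w
    b -= lr * 2.0 * np.mean(err)         # gradient of MSE loss w.r.t. b

# --- Inference phase (cheap: one multiply-add per input) ---
def predict(x: float) -> float:
    return w * x + b
```

A real deep network multiplies this cost by millions of parameters and many epochs, which is exactly why training lives on cloud GPUs/ASICs while a trained model's forward pass can run on an edge device.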

In cloud training, Google has driven TPU development and brought it to industry, while Intel, through its acquisition of Nervana Systems, launched the NNP chip to close the gap. Carey Kloss, vice president of hardware in Intel's AI products group, said Spring Crest can be benchmarked against the third-generation TPU (TPU 3.0). The confrontation between Google and Intel will only become more exciting.

Microsoft is betting on FPGAs, implementing cloud services in software. Its Brainwave-based image-recognition cloud service has already been adopted by Nestlé and Jabil; Microsoft says Brainwave customers running a standard image-recognition model can process a single image in just 1.8 milliseconds. Industry insiders, however, question Brainwave's prospects because FPGAs are not widely used in cloud computing.

Google's self-developed TPU generates plenty of buzz; beyond raising its own AI level, Google also hopes to win enterprise customers. Intel uses acquisitions to fill in the key AI links and lay the groundwork for winning the market. Microsoft forgoes self-developed AI chips and bets on FPGAs, a wager that carries some risk.
