AI Is Garnering a Bigger Role in Intel’s Future

May 19, 2022 by Agam Shah

Intel is moving the spotlight from its server CPUs to a growing roster of adjacent chips that are driving a fundamental shift in computing toward AI, in which answers are derived from associations and patterns found in data.

Accelerators such as GPUs and AI chips took the spotlight at Intel’s recent Vision event near Dallas. Intel is also shifting chip design to be modular so AI accelerators can be tightly packaged alongside its Xeon CPUs.

CEO Pat Gelsinger listed AI as a cornerstone of the company's future product line. He invoked former CEO Andy Grove in predicting that AI, which demands higher levels of compute performance, will be a key inflection point driving Intel's strategic decisions.

Computing has evolved since the introduction of the 4004, Intel's first commercial CPU, in 1971, and is now available at users' fingertips through the cloud and the edge, with machine learning and AI providing more intelligent insights, Gelsinger said.

"We see this explosion of use cases. If you are not applying AI to every one of your business processes, you are falling behind," Gelsinger said. "We need to make sure humans take advantage of AI, but also humans need to make sure AI is better and ethical as well."

Intel's Habana Labs Gaudi2 mezzanine card

The company is relying heavily on input from Google Cloud, Amazon Web Services and Microsoft Azure to drive its AI hardware and software strategy. The chipmaker announced the Gaudi-2 AI chip and new infrastructure processing units (IPUs) designed with cloud providers such as Google and Microsoft. The Gaudi-2 chip is built around standards such as Ethernet, which makes it easier to deploy in existing infrastructure.

Intel added the Gaudi AI chip lineup through its 2019 acquisition of Habana Labs. The first-generation Gaudi chips are now available through instances on AWS, and that relationship gave Intel cues on how to design Gaudi-2 to support hyperscalers' workload, security and scalability requirements.
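
Habana exposes Gaudi to developers through its SynapseAI software stack, which plugs into mainstream frameworks such as PyTorch. As a rough sketch only, and assuming Habana's publicly documented PyTorch bridge (the habana_frameworks module and the "hpu" device name come from that stack's documentation, not from this announcement), moving a model onto a Gaudi instance looks much like targeting any other accelerator:

    # Hedged sketch: running a PyTorch model on a Habana Gaudi ("hpu") device.
    # Assumes Habana's SynapseAI PyTorch bridge is installed; the module and device
    # names below come from Habana's public documentation, not from this article.
    import torch
    import habana_frameworks.torch.core as htcore  # registers the "hpu" device with PyTorch

    model = torch.nn.Linear(128, 10).to("hpu")     # move weights onto the Gaudi card
    inputs = torch.randn(32, 128).to("hpu")        # a batch of dummy activations

    outputs = model(inputs)                        # ops are staged for the accelerator
    htcore.mark_step()                             # flush the staged graph to the device (lazy mode)

    print(outputs.shape)                           # torch.Size([32, 10])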

“We learned a lot from the engagement with Amazon,” said Eitan Medina, chief operating officer at Habana Labs, during a press conference at Vision.

Intel was also clear that it could not simply rely on selling chips; it also needed a software strategy to overlay its hardware offerings.

"Intel has been trying to unify software over the many hardware platforms. It still is evolving," said Kevin Krewell, an analyst at Tirias Research.

The chipmaker announced Project Amber, a new service that creates a secure bubble in which customers can safely run AI models without worrying about data leaking to unauthorized parties. The technology authenticates all connection points and will be offered as a verification service across single- and multi-cloud environments to protect data.

Project Amber requires Intel's hardware and software services to work closely together, and the technology will allow companies to run machine learning models in a secure and trusted cloud environment, said Greg Lavender, Intel's chief technology officer, in a keynote.
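
Intel did not detail Project Amber's programming interface, but the flow Lavender describes can be sketched at a high level: the workload produces hardware-signed attestation evidence, an independent verification service checks it, and the model owner releases its intellectual property only after a valid attestation token comes back. The endpoint, payload fields and helper below are hypothetical placeholders, not Intel's actual API:

    # Hypothetical illustration of a third-party attestation flow of the kind Project Amber
    # is described as providing. The URL, JSON fields and function below are invented
    # placeholders for illustration; they are not Intel's published interface.
    import requests

    def release_model_if_trusted(evidence: bytes, verifier_url: str) -> bool:
        """Send hardware-signed evidence to a verification service and gate model release on the verdict."""
        resp = requests.post(
            verifier_url,                             # placeholder verification-service endpoint
            json={"evidence": evidence.hex()},        # placeholder field carrying the enclave quote
            timeout=10,
        )
        resp.raise_for_status()
        token = resp.json().get("attestation_token")  # placeholder field for the signed verdict
        return token is not None                      # only hand over model weights/keys if verified

    # Usage (placeholder values): decrypt and load the model only when attestation succeeds.
    # if release_model_if_trusted(quote_from_enclave, "https://attestation.example.com/verify"):
    #     load_encrypted_model()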

"The cost of developing AI models can range anywhere from $10,000 up to $10 million. Protecting that intellectual property is a high priority for those users and applications," Lavender said.

Lavender went on to discuss OpenVINO, Intel's AI inferencing toolkit, being used with SGX (Software Guard Extensions) and other technologies to secure AI at the edge. SGX provides an additional layer of protection so unauthorized parties cannot access the data.
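
The OpenVINO side of that combination is ordinary inference code; the protection comes from running it inside an SGX enclave. A minimal sketch with OpenVINO's Python runtime API, assuming a model already exported to OpenVINO's IR format (the file path and input shape are placeholders):

    # Minimal OpenVINO inference sketch (openvino.runtime Python API, 2022.x style).
    # The model path and input shape are placeholders; in the scenario Lavender describes,
    # this code would run inside an SGX enclave so the model and data stay shielded.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")           # IR model exported ahead of time (placeholder path)
    compiled = core.compile_model(model, "CPU")    # choose the target device, e.g. CPU or GPU

    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy image-sized input
    results = compiled([batch])                    # run inference; returns output tensors keyed by port
    print(next(iter(results.values())).shape)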

Intel also provided examples of how its AI software and hardware help companies keep up with regulatory requirements. Intel announced a partnership with BeeKeeperAI that lets healthcare providers run machine learning at the edge, which often falls outside trusted environments. The joint offering, which runs on Microsoft's Azure cloud, helps healthcare providers comply with data-privacy regulations.

Intel's SGX technology allowed Bosch USA, which develops technologies for autonomous cars, to deploy training models in a private environment. The AI models use real-world data and machine-generated synthetic data while obscuring information such as facial data. The company has also deployed AI models for safety-critical systems in autonomous driving, which carry their own regulatory requirements, said Tim Frasier, president of cross-domain computing solutions at Bosch, who spoke on stage at the conference.

Intel Arctic Sound-M GPU

Intel also announced the Arctic Sound-M graphics processor, which is designed for deployment in data centers for AI, video streaming and cloud gaming.

The GPU can run 150 trillion operations per second for video and AI processing. "So while doing that streaming, you could do AI to understand what's in the video," said Raja Koduri, executive vice president and general manager of the Accelerated Computing Systems and Graphics Group at Intel, at the conference.

Video already consumes a large share of internet traffic, and it is increasingly used in applications such as analyzing data captured from cameras.

"We are running more AI analytics on video streams as well," Koduri said, adding "These new use cases demand new hardware acceleration because they are real time with AI."

The GPU will be available in two configurations: a 150-watt model with 32 Xe cores, and a 75-watt model with 16 Xe cores. The GPUs have Xe Matrix Extensions (XMX) for AI acceleration.

Arctic Sound-M is supported by Intel's oneAPI software development platform, which works with a wide range of AI programming frameworks, including TensorFlow and Caffe.
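
At the framework level, Intel's route into TensorFlow is a pluggable-device extension layered on oneAPI. The sketch below assumes Intel's Extension for TensorFlow is installed and that the GPU surfaces as an "XPU" device; both details come from Intel's public TensorFlow plugin, not from this announcement:

    # Hedged sketch: targeting an Intel GPU from TensorFlow via Intel's pluggable-device
    # extension (pip package intel-extension-for-tensorflow). The "XPU" device name is an
    # assumption based on that plugin's documentation, not something stated in the article.
    import tensorflow as tf

    print(tf.config.list_physical_devices("XPU"))  # the Intel GPU should appear here when the plugin is loaded

    with tf.device("/XPU:0"):                      # place the computation on the Intel GPU
        a = tf.random.normal([1024, 1024])
        b = tf.random.normal([1024, 1024])
        c = tf.matmul(a, b)                        # matrix multiply dispatched to the selected device

    print(c.shape)                                 # (1024, 1024)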

oneAPI is a key ingredient for Intel to succeed in AI, Tirias Research's Krewell said, adding that "Nvidia CUDA is still the gold standard for vendor software stacks."

The new AI chips are critical to Intel's future as it tries to catch up with Nvidia, which has a lead in AI processing. To accommodate new accelerators, Intel is taking a modular approach to chip design, in which the company can package a range of homegrown GPUs, ASICs or FPGAs alongside Xeon chips.

"The first thing that is needed is a modular approach, because different AI solutions are needed," said Bob Brennan, vice president and general manager for Intel's Foundry Services, in a breakout session at Vision.

Brennan is leading an effort to diversify Intel's chips by adding support for AI accelerators based on RISC-V or Arm architectures. The company already offers FPGAs for AI applications and is working on neuromorphic chips inspired by the way the human brain functions.

Intel already has such a modular chip, codenamed Ponte Vecchio, an accelerator that integrates graphics cores, vector processors, I/O, networking, matrix engines and other processing cores in a single package. The company will share more details about the chip at the upcoming ISC High Performance conference, which starts later this month.

"Modularity starts with your architecture. When you visualize your computer architecture as to how you're going to construct your SoC, you have to think about potential partitioning," Brennan said.

Intel’s AI hardware strategy is also tied to standard interfaces.

"If you compare the Gaudi architecture, we took the commitment to use Ethernet because that's the most widely used interface that will allow customers to scale out using a standard interface rather than proprietary one," Habana Labs’ Medina said.

Intel is also supporting the UCIe (Universal Chiplet Interconnect Express) interface inside the chip package to connect partitioned AI accelerators, CPUs and other co-processors.

Intel last year created a new business group called Accelerated Computing Systems and Graphics Group, which is led by Koduri, to focus on GPUs, accelerators and AI chips. Intel’s Xeon chips still dominate the data center infrastructure (with an 85 percent share by Intel's estimate), providing a massive installed base on which the company hopes to sell its AI chips.

But the company has had its share of struggles with AI products. Intel purchased AI chip startup Nervana in 2016 but discontinued that product line in early 2020, shortly after acquiring Habana Labs. Gelsinger, who became CEO in early 2021, has reset Intel's operations with a renewed focus on manufacturing, engineering, and research and development.

Gelsinger acknowledged that Intel is still in the process of sorting out its multiple AI offerings, use cases and customer requirements.

"We have Arctic Sound coming as well. There's going to be cases where it is going to be competing with Gaudi. We have to sort them out as we go face the customer because they have very solid swim lanes in their own right," Gelsinger said during a press conference.
