Data comes in many shapes and sizes – each enterprise has its own requirements. When beginning your AI journey, you need a platform that gives you the flexibility to use the frameworks and infrastructure designs that work best for you.
Your data should determine your infrastructure
Successful machine learning and deep learning deployments are achieved by layering the right data with the right tools, approach, and infrastructure.
Points to Consider
How data influences infrastructure decisions:
The type of data you want to use makes a big difference to the infrastructure decisions you should make, both in terms of hardware and software.
Structured data is where most start today, and includes information like financial, customer relationship management (CRM), and sensor data. Unstructured data is becoming much more common but takes more resources to process – for example raw text, web pages, and voice data.
For deployments that require low-latency results, the location of your data and the hardware used to process it are critical. For less time-sensitive applications, running training and inference in the data center may be the best choice – but the question of cloud versus on-premises remains.
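As a rough illustration of that difference, the sketch below contrasts how structured (tabular sensor readings) and unstructured (raw text) data typically enter a pipeline. The file names, column names, and preprocessing steps are hypothetical and shown only for illustration.

```python
# Minimal sketch: structured and unstructured data enter a pipeline very differently.
# Assumes pandas is available; "sensor_readings.csv" and "support_emails/" are
# hypothetical local sources used only for illustration.
import pathlib

import pandas as pd

# Structured data: tabular and typed, ready for analytics with little preprocessing.
sensor_df = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])
daily_means = sensor_df.groupby(sensor_df["timestamp"].dt.date)["temperature"].mean()

# Unstructured data: raw text must be cleaned and featurized before any model can use it.
documents = [
    p.read_text(encoding="utf-8", errors="ignore")
    for p in pathlib.Path("support_emails").glob("*.txt")
]
tokenized = [doc.lower().split() for doc in documents]  # placeholder for real NLP preprocessing
```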
The Internet of Things (IoT) is projected to include 200 billion devices by 2020, with the data they produce expected to total 40 zettabytes by that time.
Modern data management infrastructure is essential to cope with the strain this will cause and to take advantage of the opportunity it presents. Optimized data tiering and performance technologies are emerging now to enable this.
Bing’s Intelligent Search Features Leverage State-of-the-art Machine Reading Comprehension Models to Analyze and Understand Billions of Documents
To meet the computational demands of deep learning, cloud operators rely on diverse and flexible hardware. Bing, Microsoft's search engine, has adopted Intel® Xeon® Scalable processors and Intel® FPGAs to harness the power of real-time AI and deliver more intelligent search to half a billion monthly users.
Project Brainwave, Microsoft’s principal infrastructure for AI serving in real time, accelerates deep neural network (DNN) inferencing in major services such as Bing’s intelligent search features and Azure. Naturally, it requires ultra-low latency and extremely high throughput to do so.
The project unlocks the future of AI by unleashing programmable hardware using Intel® FPGAs. The architecture is economical and power-efficient, delivering throughput high enough to run ResNet-50, an industry-standard DNN requiring almost 8 billion calculations, without batching.
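For a sense of what unbatched, real-time inference looks like at the framework level, the sketch below times a single batch-1 ResNet-50 forward pass using PyTorch and torchvision. This is an illustrative stand-in on a general-purpose CPU, not Project Brainwave's FPGA serving stack.

```python
# Minimal sketch of batch-1 (unbatched) ResNet-50 inference, the workload class the
# case study describes. Uses PyTorch/torchvision purely for illustration.
import time

import torch
import torchvision

model = torchvision.models.resnet50(weights=None)  # random weights are fine for timing
model.eval()

image = torch.randn(1, 3, 224, 224)  # a single image: no batching, as in real-time serving

with torch.no_grad():
    model(image)  # warm-up pass
    start = time.perf_counter()
    output = model(image)
    latency_ms = (time.perf_counter() - start) * 1000

print(f"Single-image inference latency: {latency_ms:.1f} ms, top class {output.argmax().item()}")
```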
Read the full case study ›
“We made conscious decisions in architecting Ziva’s software to not only take advantage of key math and machine learning libraries of Intel’s, but to ensure that any studio could integrate Ziva regardless of how they were generating animations or dealing with simulated characters—the inputs and outputs—to meet the technical needs of creative teams around the world” – James Jacobs, CEO and co-founder of Ziva Dynamics.
With the help of Intel® AI and machine learning, Ziva Dynamics is transforming how filmmakers create visual effects.
Read the full case study ›
Intel® Xeon® Scalable processors: your data foundation
From data ingestion and preparation to model tuning, Intel® Xeon® Scalable processors act as a flexible platform for all the analytics and AI requirements in the enterprise data center.
Able to handle everything from scale-up applications with the largest in-memory requirements to massive data sets distributed across many clustered systems, they serve as an agile foundation for organizations ready to begin their AI journeys.
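As a simple illustration of that ingestion-to-tuning span, the sketch below runs a small end-to-end workflow on the CPU with scikit-learn; the dataset and model choices are assumptions made only for illustration, not a prescribed enterprise setup.

```python
# Minimal sketch of an end-to-end CPU workflow, from data preparation to model tuning,
# of the kind the section describes running on general-purpose server processors.
# Assumes scikit-learn is available; the dataset is a bundled toy set, not enterprise data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Ingestion and preparation: load, split, and standardize the data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Model tuning: a small hyperparameter search, parallelized across available CPU cores.
search = GridSearchCV(pipeline, {"logisticregression__C": [0.1, 1.0, 10.0]}, cv=5, n_jobs=-1)
search.fit(X_train, y_train)

print(f"Best parameters: {search.best_params_}")
print(f"Held-out test accuracy: {search.score(X_test, y_test):.3f}")
```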