We Are Entering The Golden Age Of Artificial Intelligence And Computer Vision
Posted by
Md Ashikquer Rahman
Not long ago, artificial intelligence and computer vision were the stuff of science fiction. Now they are suddenly everywhere, from Alexa and Siri to kitchen appliances that recognize the food you are cooking and help you cook it perfectly.
But the situation is changing again. Intelligence and visual processing are increasingly happening at the edge; in other words, the computation is performed locally rather than in the cloud. From mobile phones to household appliances, from cars to industrial robots, from cameras to server cabinets in buildings, these systems are all changing. The common theme is that processing is moving closer to the sensor than ever before. Why is this, and what does the trend mean?
At the Edge AI and Vision Alliance, we have observed five factors pushing AI to the edge, which we abbreviate as BLERP: bandwidth, latency, economics, reliability, and privacy.
Bandwidth:
If you run a commercial space with hundreds of cameras, you cannot send all of that video to the cloud for processing; it would overwhelm your Internet connection. You have to process it locally.
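A back-of-the-envelope sketch makes the bandwidth problem concrete. The camera count and per-stream bitrate below are illustrative assumptions, not measurements from any real deployment:

```python
# Rough aggregate uplink needed to stream every camera to the cloud.
# Both numbers are assumptions chosen for illustration.

CAMERAS = 200               # hypothetical commercial deployment
MBPS_PER_STREAM = 4.0       # assumed bitrate for one compressed 1080p stream

total_mbps = CAMERAS * MBPS_PER_STREAM
print(f"Aggregate uplink needed: {total_mbps:.0f} Mbps")
```

Even at a modest assumed bitrate, a couple of hundred cameras would demand close to a gigabit of sustained uplink, which is why analyzing the video on site and sending only results (or nothing) upstream is the practical choice.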
Latency:
Latency is the time between when a system receives sensory input and when it responds. Imagine a self-driving car: if a pedestrian suddenly steps onto the crosswalk ahead, the car's brain may have only a few hundred milliseconds to make a decision, which is not enough time to send the image to the cloud and wait for a response.
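The budget arithmetic can be sketched in a few lines. Every number here is an assumption chosen to illustrate the comparison, not a measurement of any real vehicle or network:

```python
# Illustrative latency budget for the pedestrian scenario.
# All figures are assumed round numbers, not measurements.

BUDGET_MS = 300                  # assumed total reaction budget for the car

# Cloud path: wireless uplink + server-side inference + downlink (assumed).
cloud_path_ms = 100 + 30 + 100
# Edge path: inference on an assumed on-board accelerator.
edge_path_ms = 25

print(f"cloud path: {cloud_path_ms} ms, headroom {BUDGET_MS - cloud_path_ms} ms")
print(f"edge path:  {edge_path_ms} ms, headroom {BUDGET_MS - edge_path_ms} ms")
```

Even under these generous assumptions, the cloud round trip consumes most of the budget before planning and braking even begin, and real wireless links add jitter on top. Local inference leaves nearly the whole budget for acting on the result.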
Economics:
Cloud computing and communications keep getting better and cheaper, but they still cost money, potentially a lot of it, especially for video data. Edge computing reduces the amount of data that must be sent to the cloud and the amount of work that must be done there, which reduces costs.
Reliability:
Imagine a home security system with facial recognition: you want it to let your family into the house even when the Internet connection is down. Local processing makes this possible and makes the system more fault-tolerant.
Privacy:
The proliferation of audio and visual sensors at the edge raises serious privacy concerns, and sending that information to the cloud amplifies them. The more information is processed and consumed locally, the fewer opportunities there are for abuse. In short: what happens at the edge stays at the edge.
If these are the factors driving artificial intelligence to the edge, it is faster and more efficient processors that make the transition feasible. Computer vision and deep learning can seem magical, letting us extract meaning from millions of pixels or audio samples. But the magic comes at a price: real-time AI processing requires billions or even trillions of operations per second. A basic requirement for edge AI, therefore, is a processor that can deliver this performance at a price, power consumption, and size compatible with edge and embedded devices.
Fortunately, deep learning algorithms are repetitive and fairly simple; it is just that the volume of computation and data is huge. Because of this repetitive, predictable nature, it is possible to build processors optimized for these algorithms that easily deliver 10x, 100x, or even greater performance and efficiency on these tasks compared with general-purpose processors. This fact, coupled with the widespread belief that billions of AI-enabled edge devices will soon appear, has triggered a "Cambrian explosion" of high-performance AI processor architectures over the past few years.
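To see where "trillions of operations per second" comes from, consider the multiply-accumulate (MAC) count of a single convolution layer, scaled to video frame rates. The layer dimensions below are an assumed example, not any particular network:

```python
# MACs for one 3x3 convolution layer, then scaled to 30 frames per second.
# The layer shape is an assumed example for illustration.

H, W = 224, 224          # feature-map height and width (assumed)
C_IN, C_OUT = 64, 64     # input and output channels (assumed)
K = 3                    # kernel size

macs_per_frame = H * W * C_IN * C_OUT * K * K
ops_per_second = macs_per_frame * 2 * 30   # 2 ops per MAC, 30 fps

print(f"{macs_per_frame / 1e9:.2f} GMACs per frame for this layer")
print(f"{ops_per_second / 1e12:.3f} TOPS at 30 fps, for this one layer alone")
```

This single layer needs roughly a tenth of a trillion operations per second; a full network stacks dozens of such layers, which is how real-time vision reaches the trillions. Crucially, every one of those operations follows the same simple, predictable pattern, which is exactly what specialized processors exploit.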
A good way to understand these latest developments is to look at this year's Embedded Vision Summit, where David Patterson (co-inventor of RISC and a contributor to Google's TPU architecture) will deliver the keynote "The New Golden Age of Computer Architecture: Processor Innovation Makes Ubiquitous Artificial Intelligence Possible."
At the Summit, leaders from startups and established companies, including CEVA, Cadence, Hailo, Intel, Lattice, Nvidia, Perceive, Qualcomm, and Xilinx, will showcase their latest edge AI processors along with the tools and technologies for efficiently mapping deep neural networks onto them. In addition, system design experts will share insights on realizing edge AI in a variety of applications, from autonomous drones (Skydio) to agricultural equipment (John Deere) to floor-cleaning robots (Trifo).
And from Arrow to Xperi, exhibitors at the virtual exhibition will give you a chance to see the latest processors, development tools, and software for edge AI.
We are entering a golden age of artificial intelligence and computer vision...