Most data processing today takes place in the cloud, on powerful servers in remote data centers. Over the last five years the cloud business has exploded; in the last year alone, the market grew by 21%.

So it is only natural that the features that seem the most intelligent to end users run there. For example:

  • Product suggestions for groceries that are actually running low in your own household.
  • A smartphone photo gallery that can be searched by keywords you never assigned yourself.
  • A social media feed that suggests new areas of interest that actually turn out to be interesting.

After centralisation comes decentralisation

Without a doubt, these features, which are mostly based on artificial intelligence, would not have been possible without the cloud. Yet this shift of computing power, in this case from on-premise to the cloud, is not the first. From the mid-1970s through the 1980s, computing activity moved from large mainframes in data centers to PCs on the desktops of private users. The Intel 4004, introduced in 1971, started this development: central processing units (CPUs) enabled the rise of standardized operating systems such as DOS and Windows. Since the mid-2000s, compute-intensive programs have increasingly been offloaded to the graphics card (GPU). At the end of today's supply chain of digital information stand powerful smartphones, tablets and desktop PCs with both CPU and GPU. Nevertheless, the most performance-intensive computing operations still take place in the cloud.

Alongside FPGAs, GPUs are today the hardware most widely used for deep learning; compared to CPUs they are far more efficient at these workloads. An up to 10-fold further increase in efficiency, however, can be achieved with ASICs (application-specific integrated circuits). The form and architecture of these AI chips currently still vary as much as GPUs did in their early days, before Nvidia's GPU architecture established itself as the general model. Samsung, Huawei, Google, LG, Baidu, Intel, Alibaba and other companies have already produced AI chips and are using them, or plan to do so.

In 2016, Google presented the Tensor Processing Unit (TPU), a special chip that processes artificial intelligence tasks in data centers, mostly the training of AI models on large data sets. Two years later Google presented the Edge TPU. Smaller than a one-cent coin, this chip is designed for use at the "edge", and contrary to its cloud counterpart its task is a different one: inference, i.e. drawing conclusions from incoming data. These two chips and the Google ecosystem around them show where the trend in edge computing is heading. Users of Google's Gboard receive improved suggestions based on all other users with a similar environment, language and other shared characteristics. Huawei is already using this kind of on-device training in its P20 Pro smartphone to improve battery life. Federated learning will continue to spread, and centralized training data in the cloud will only make sense in exceptional cases.
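The division of labor between the two chips, heavy training in the data center and lightweight inference at the edge, usually involves shrinking the trained model before deployment; edge accelerators such as the Edge TPU run 8-bit quantized models. The following is a minimal sketch of that quantization step in NumPy, not the actual TPU toolchain; the weight values and function names are illustrative:

```python
import numpy as np

def quantize(w):
    """Map float32 weights to int8 plus one scale factor: the usual
    preparation step before deploying a model to an edge accelerator."""
    scale = np.abs(w).max() / 127.0          # largest weight maps to +/-127
    q = np.round(w / scale).astype(np.int8)  # 8-bit integers, 4x smaller
    return q, scale

def edge_inference(x, q_weights, scale):
    """On-device inference with the quantized model: dequantize and
    apply, approximating the original float32 result."""
    return x @ (q_weights.astype(np.float32) * scale)

w = np.array([0.8, -1.27, 0.05, 0.3], dtype=np.float32)  # "trained" in the cloud
q, s = quantize(w)
x = np.ones(4, dtype=np.float32)
y = edge_inference(x, q, s)   # close to x @ w, at a quarter of the memory
```

The accuracy loss from 8-bit quantization is typically small, while memory and energy use drop substantially, which is exactly the trade-off that makes inference at the edge practical.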

With edge computing, data processing also means data protection

Currently, Amazon stores MP3 files of the voice commands that Alexa devices record from their users on its servers, where they are used to further develop Amazon's own AI models. This is by no means best practice, and privacy concerns about it are a common argument against buying Alexa devices.

Transferring photo, video and audio data to the cloud is also bandwidth- and energy-intensive. This matters especially for the many IoT devices that have a 4G connection instead of WLAN, or no permanent network connection at all. Constant communication between intelligent IoT devices and the cloud also shortens their battery life. With edge computing and AI chips, these devices become more independent, and AI training runs will establish themselves at the edge as well. The newly acquired knowledge is transferred to the cloud as deltas (updates to the weights of the AI model), which are only a fraction of the size of the raw media files. In the cloud, the central model is then updated with the deltas from millions of IoT devices in the field.
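This exchange of weight deltas instead of raw data is the core of federated learning. Below is a minimal sketch of the pattern in NumPy, using a single gradient step of logistic regression as a stand-in for on-device training; all names and the toy data are illustrative assumptions, not a production protocol:

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    """On-device training round: one gradient step of logistic
    regression. Only the weight delta leaves the device."""
    preds = 1.0 / (1.0 + np.exp(-data @ weights))    # sigmoid predictions
    grad = data.T @ (preds - labels) / len(labels)   # mean gradient
    return -lr * grad                                # delta, not raw data

def federated_average(global_weights, deltas):
    """Cloud side: update the central model with the mean of all deltas."""
    return global_weights + np.mean(deltas, axis=0)

# Simulate three edge devices, each holding private local data.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
deltas = []
for _ in range(3):
    X = rng.normal(size=(32, 4))          # photos/audio stay on the device
    y = (X[:, 0] > 0).astype(float)       # private labels
    deltas.append(local_update(global_w, X, y))

global_w = federated_average(global_w, deltas)  # only the small deltas were sent
```

Each transmitted delta here is just four floats, which illustrates the size argument from above: the update is a tiny fraction of the media files it replaces, and the raw data never leaves the device.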

Fine-tuning AI simultaneously on millions of devices

This interplay between cloud and edge computing will play a critical role in the development of reliable and secure intelligent products. An autonomous vehicle that drives one hundred percent error-free on American roads may nevertheless be overwhelmed by the narrow streets of Brussels' old town or the rush-hour traffic of Hanoi. To overcome such problems, the vehicle must be able to perform deep-learning tasks on site.

On-device training and the decentralisation of productive AI models will bring about a considerable expansion in the field of edge computing.
