Initially, the proposed architecture for IoT machine learning envisioned that edge devices would act purely as sensors, forwarding data to artificial intelligence applications running in the cloud. Over time, two issues have surfaced that indicate a need to push part or all of these applications out toward the cloud edge.
For some applications it is possible to place increasing computing power outward from the central cloud, allowing the AI work to be pushed out to the cloud edge.
However, for many IoT applications, constraints of power consumption and battery life preclude such a solution. In those cases, a hybrid approach, in which the training and execution portions of a machine learning application are split apart, is often the answer. The training portion, which requires extensive computing resources, remains in the central cloud, while execution is ported to low-power edge devices at the cloud edge via algorithms optimized for such devices.
The presentation will demonstrate the use of TensorFlow running in an Azure Notebook to perform the model-training portion of a GRU time series machine learning application, which is then deployed to an Arm Cortex-M4 low-power microcontroller at the cloud edge via the Arm CMSIS-NN package.
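In this split, only the trained forward pass runs on the microcontroller. As a rough illustration of what that execution portion computes, the following is a minimal NumPy sketch of a single GRU cell forward step (the standard update/reset-gate formulation); the weight values here are random placeholders standing in for parameters that would come from cloud training, not anything from the presentation itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU forward step.

    x: input vector (n_in,); h: previous hidden state (n_hid,).
    W/U/b: dicts of input weights, recurrent weights, and biases
    keyed by gate name ('z' update, 'r' reset, 'h' candidate).
    """
    z = sigmoid(W['z'] @ x + U['z'] @ h + b['z'])              # update gate
    r = sigmoid(W['r'] @ x + U['r'] @ h + b['r'])              # reset gate
    h_tilde = np.tanh(W['h'] @ x + U['h'] @ (r * h) + b['h'])  # candidate state
    return (1.0 - z) * h + z * h_tilde                         # new hidden state

# Tiny demo with random (untrained) weights: 3 inputs, 4 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {k: 0.1 * rng.standard_normal((n_hid, n_in)) for k in 'zrh'}
U = {k: 0.1 * rng.standard_normal((n_hid, n_hid)) for k in 'zrh'}
b = {k: np.zeros(n_hid) for k in 'zrh'}

h = np.zeros(n_hid)
for t in range(10):  # run a short synthetic time series through the cell
    x = np.array([np.sin(t / 3.0), np.cos(t / 3.0), 1.0])
    h = gru_step(x, h, W, U, b)
```

Because the step is just a handful of matrix-vector products and elementwise nonlinearities, it maps naturally onto the fixed-point kernels that CMSIS-NN provides for Cortex-M class devices.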
Note: A time series machine learning application like the one discussed in this presentation is more relevant to control applications than to the more commonly discussed pattern recognition AI applications.
The presenter is an independent software consultant specializing in the Microsoft product stack and a Baynet officer.