Teaching your sensors new tricks with Machine Learning

A joint webinar with Eta Compute and Edge Impulse

Question and Answers

I agree that bandwidth and power play a crucial role in doing AI at the edge. Do you also foresee any other metrics playing an important role in moving AI to the edge in the future?

  • The most important metrics are energy per inference, inference time (which translates into latency), accuracy, and memory usage.
  • These are the four pillars of AI at the edge, and you need to balance them to create a usable application. The sketch below shows how the first two combine into an energy budget.
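
For intuition, energy per inference is simply average power multiplied by inference time. A minimal sketch of the arithmetic, using purely illustrative numbers (not ECM3532 specifications):

```python
# Back-of-the-envelope energy budget for an edge node.
# All numbers are illustrative assumptions, not ECM3532 specifications.

AVG_POWER_MW = 5.0        # assumed average power while inferencing (mW)
INFERENCE_TIME_MS = 20.0  # assumed inference time, i.e. latency (ms)
INFERENCES_PER_HOUR = 3600

# Energy per inference (mJ) = power (mW) * time (s)
energy_per_inference_mj = AVG_POWER_MW * (INFERENCE_TIME_MS / 1000.0)
hourly_energy_mj = energy_per_inference_mj * INFERENCES_PER_HOUR

print(f"Energy per inference: {energy_per_inference_mj:.3f} mJ")
print(f"Inference energy per hour: {hourly_energy_mj:.1f} mJ")
```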

Can you give some more detail on how the DSP is used to accelerate ML?

  • The ECM3532 DSP can perform two MACs (multiply-accumulate operations) in one cycle. This accelerates the main workload of neural networks, which is matrix multiplication, by a factor of 4 to 5 compared to general-purpose MCUs.
  • In addition, many operations in a sensor application are signal processing: data conditioning, feature extraction, and frequency-domain conversions. The DSP is perfect for these. The sketch below illustrates why the MAC rate dominates the neural-network side.
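
As a rough illustration, here is a sketch that counts the MACs in one dense layer and converts them to cycles; the DSP rate is the 2 MACs/cycle quoted above, while the MCU throughput figure is an assumption chosen only to mirror the quoted 4-5× speedup:

```python
# Rough cycle estimate for a dense (fully connected) layer, whose cost is
# dominated by multiply-accumulate (MAC) operations.

def dense_layer_macs(in_features: int, out_features: int) -> int:
    # One MAC per weight: each output accumulates in_features products.
    return in_features * out_features

macs = dense_layer_macs(128, 64)
dsp_cycles = macs / 2.0   # ECM3532 DSP: 2 MACs per cycle (from the answer above)
mcu_cycles = macs / 0.4   # assumed plain-MCU throughput, for illustration only

print(f"{macs} MACs -> DSP: {dsp_cycles:.0f} cycles, "
      f"MCU: {mcu_cycles:.0f} cycles ({mcu_cycles / dsp_cycles:.1f}x)")
```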

Can we collect data from the ECM board using BLE?

  • The BLE on the ECM3532 can be addressed like a UART with AT commands. It implements a BLE UART service; a minimal host-side sketch follows this list.
  • The board User’s Guide gives the list of available commands.
  • BLE data acquisition is not yet implemented with Edge Impulse; this is something we will consider if there is demand.
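
Here is a minimal host-side sketch of driving a UART-style service with pyserial; the serial port and the command string are hypothetical placeholders, so consult the board User’s Guide for the real AT commands:

```python
# Minimal host-side sketch of talking to a BLE UART service with pyserial.
# The port name and the command are hypothetical placeholders; the board
# User's Guide lists the actual AT commands.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:
    ser.write(b"AT\r\n")  # placeholder command, not taken from the guide
    response = ser.readline()
    print(response.decode(errors="replace").strip())
```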

Is it possible to configure custom neural network model architectures via the Edge Impulse platform? Or upload a custom Keras model as a processing block?

  • Yes, if you go to the neural network block, click the three dots, and select **Switch to expert mode**. Then you have the full Keras API at your disposal, as in the sketch below.
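
For example, expert mode lets you write ordinary Keras code like the following; this is an illustrative architecture with an assumed input size and class count, not the exact code Edge Impulse generates:

```python
# An illustrative Keras model of the kind you can write in expert mode.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(33,)),              # assumed feature vector length
    layers.Dense(20, activation="relu"),
    layers.Dense(10, activation="relu"),
    layers.Dense(3, activation="softmax"),  # assumed 3 output classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```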

What should we do when the clusters overlap after feature generation?

  • It’s not necessarily a bad thing if they overlap; machine learning models are pretty good at finding hidden correlations nonetheless. But you can play with the parameters in the DSP screen to see if you can find a configuration that separates the clusters better. E.g. in the demo we used a 3 Hz low-pass filter, but a different configuration might work better for your data. The feature explorer shows you the effect quickly, and the sketch below shows how to experiment with the cutoff offline.
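
If you want to try different cutoffs offline before changing the DSP block, a minimal sketch (the sampling rate and test signal are assumptions):

```python
# Offline experiment with the low-pass cutoff used in the DSP block
# (the demo used 3 Hz). Sampling rate and test signal are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 62.5                                  # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)

def lowpass(signal, cutoff_hz, fs, order=2):
    b, a = butter(order, cutoff_hz, btype="low", fs=fs)
    return filtfilt(b, a, signal)

filtered = lowpass(x, cutoff_hz=3.0, fs=fs)  # keeps the 1.5 Hz component
```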

Is the model training done on cloud servers? Could the Edge Impulse interface use local infrastructure?

  • Correct, training is done in the cloud. If you want to train locally, you can always download the files you captured through the studio and build your own pipelines, as in the sketch below.
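
A minimal sketch of the local side, assuming the downloaded files are in the JSON data acquisition format and live in a hypothetical `exported/` directory:

```python
# Sketch of loading exported samples for a local training pipeline.
import glob
import json
import numpy as np

samples = []
for path in glob.glob("exported/*.json"):  # assumed directory layout
    with open(path) as f:
        sample = json.load(f)
    # "values": one row per reading, one column per sensor axis
    samples.append(np.array(sample["payload"]["values"]))

print(f"Loaded {len(samples)} samples")
```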

You mentioned that part of the tool is open source. Which parts?

I am working on an application which processes signals and classifies them (binary classification). I am sampling the ADC data (it’s a square wave, and the classification depends on variation in the signal’s frequency). Can I use Edge Impulse to create a model for a custom application like this?

What optimizations did you refer to for speeding up inference?

  • We take advantage of the underlying hardware where possible, so we offload DSP operations like FFTs (used in both vibration and audio analytics), as well as operations that are used in many ML algorithms (like vector multiplications), to the DSP. This speeds up inferencing significantly (orders of magnitude compared to pure software implementations). The sketch below shows the kind of spectral work that benefits.
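
As an illustration of that kind of spectral work (not the DSP-offloaded implementation itself), here is a sketch that extracts the dominant frequency from one window; the window length and sampling rate are assumptions:

```python
# FFT over one vibration window, the sort of operation offloaded to the DSP.
import numpy as np

fs = 100                                   # assumed sampling rate (Hz)
window = np.random.randn(128)              # stand-in for one sensor window

spectrum = np.abs(np.fft.rfft(window))
freqs = np.fft.rfftfreq(len(window), d=1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"Dominant frequency: {peak:.1f} Hz")
```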

Do you support Arduino? If so, which type?

  • Eta Compute does not support Arduino currently. We provide an SDK with C and C++ code, including drivers and HAL functions, to develop a full application.

What is the camera you are using with the AI Sensor?

  • The AI Sensor board does not have a camera; it has microphones, a motion sensor, and a pressure/temperature sensor.
  • At Eta Compute we have several demos using cameras on our evaluation board (ECM3532 EVB).
  • We are planning a camera version of the AI sensor that will be available later this year.

What is Edge Impulse’s monetization scheme? You mention free for developers, but who is it not free for?

  • Edge Impulse has an enterprise version that offers team collaboration and tools to build and transform large datasets.

What dataset formats are available on the Edge Impulse platform? Is accuracy inversely proportional to inference time?

  • Data needs to be formatted according to the Data Acquisition format (https://docs.edgeimpulse.com/reference#data-acquisition-format). It’s a pretty simple format that can be encoded in JSON or CBOR (we typically use CBOR for everything embedded) and contains information about the device, the sensors, and a cryptographic hash of the data for traceability; see the sketch after this list. If you already have data in WAV or JPG format, you can upload it quickly through the Edge Impulse uploader (https://docs.edgeimpulse.com/docs/cli-uploader).
  • Accuracy is not directly related to inference time; however, there is a trade-off between accuracy, network complexity, and memory size.
  • At Eta Compute we develop neural networks that we validate and fine-tune to optimize complexity and memory while retaining good accuracy.
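
A sketch of what one sample can look like in the JSON encoding; the field values are illustrative, and the linked documentation is the authoritative schema, including how the signature is computed:

```python
# Illustrative sample in the data acquisition format (JSON encoding).
import json

sample = {
    "protected": {"ver": "v1", "alg": "HS256"},
    "signature": "0" * 64,  # placeholder; real uploads sign the payload
    "payload": {
        "device_type": "ECM3532",
        "interval_ms": 16,
        "sensors": [
            {"name": "accX", "units": "m/s2"},
            {"name": "accY", "units": "m/s2"},
            {"name": "accZ", "units": "m/s2"},
        ],
        "values": [
            [-0.24, 0.03, 9.81],
            [-0.17, 0.01, 9.79],
        ],
    },
}
print(json.dumps(sample, indent=2))
```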

What resolution would you recommend for video object detection?

  • It depends on the application. As shown on the slide, we can do simple detection with 32×32 (CIFAR) and person detection with 96×96, while object counting requires 256×256. The sketch below shows the corresponding raw frame-buffer sizes.
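
For a sense of the memory impact, a sketch of the raw frame-buffer sizes at those resolutions, assuming one byte per pixel per channel; the channel counts are assumptions (RGB for the CIFAR-style input, grayscale for the others):

```python
# Raw frame-buffer sizes at the resolutions mentioned above.
resolutions = {
    "simple detection (CIFAR)": (32, 32, 3),  # assumed RGB
    "person detection": (96, 96, 1),          # assumed grayscale
    "object counting": (256, 256, 1),         # assumed grayscale
}
for name, (w, h, c) in resolutions.items():
    print(f"{name}: {w}x{h}x{c} = {w * h * c / 1024:.1f} KB")
```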