One of the machine learning researchers I follow closely is @petewarden, who writes extensively about the hardware side of ML, something we hear very little about beyond vague accounts of the power-consumption and carbon footprints of cloud-based GPUs.

1/
In a follow-up, Warden explains: the rise of TinyML (an approach to running machine learning on low-power, embedded processors) and current hardware trends point to a near-future scenario where a $0.50 CPU replicates today's high-power, network-based speech recognition systems (a rough sketch of what on-device inference looks like follows below).

3/
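
To make that concrete, here is a minimal sketch of what on-device keyword spotting looks like with a quantized TensorFlow Lite model in Python. The model file, input shape and label list are placeholders I've assumed for illustration, and on an actual microcontroller this would run through TensorFlow Lite Micro in C++ rather than the Python runtime:

# Illustrative only: on-device keyword spotting with a quantized TFLite model.
# "kws_int8.tflite" and the label list are hypothetical, not Warden's actual model.
import numpy as np
from tflite_runtime.interpreter import Interpreter

LABELS = ["silence", "unknown", "yes", "no"]  # assumed keyword set

interpreter = Interpreter(model_path="kws_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(spectrogram: np.ndarray) -> str:
    """Run one inference entirely on-device; no audio ever leaves the machine.
    (Input scaling for a fully-int8 model is omitted for brevity.)"""
    interpreter.set_tensor(inp["index"], spectrogram.astype(inp["dtype"]))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return LABELS[int(np.argmax(scores))]
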
The infrastructure for these systems is coming from multiple quarters. Take MCUNet, which "designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and CPU." (A sketch of that memory budget follows the paper link below.)

https://arxiv.org/pdf/2007.10319.pdf

5/
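
For a sense of the memory budget MCUNet is designed around, here is a minimal sketch - not MCUNet's own TinyNAS/TinyEngine pipeline, just a generic Keras-to-TFLite conversion with made-up layer sizes - of shrinking a toy image classifier into a weight-quantized flatbuffer small enough to sit in a microcontroller's few hundred kilobytes of storage:

# Illustrative only: NOT the MCUNet pipeline, just the standard way to shrink a
# small Keras classifier so it can fit in a microcontroller's flash/SRAM.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),                 # tiny grayscale images
    tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]           # quantize the weights
# (A fully-int8 model would also need a representative dataset for calibration.)
tflite_model = converter.convert()

# The whole model has to fit on-chip, alongside its activations at runtime.
print(f"flatbuffer size: {len(tflite_model) / 1024:.1f} KB")

MCUNet goes much further by jointly designing the network architecture (TinyNAS) and the inference engine (TinyEngine), but the hard ceiling is the same: everything has to fit in on-chip memory.
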
Early results are promising. Previous ML image classifiers at this scale capped out at 54% accuracy; with MCUNet, an MIT team pushed that to 70.7% - in a field where 1% accuracy bumps are considered impressive.

7/
All of this is in service of a better version of machine learning, one where the classifier lives on your device and doesn't transmit its observations to a vendor: surveillance-free decision-support.

Image: Cryteria (modified)
https://commons.wikimedia.org/wiki/File:HAL9000.svg

CC BY 3.0:
https://creativecommons.org/licenses/by/3.0/deed.en

eof/
You can follow @doctorow.