A Small Gathering of Variables (K5 Spec)

Sat, Jul 8, 2017

I want to vent a bit about the current landscape of Deep Learning frameworks, libraries, and middleware.  Why do all of them have different file formats?  It's like having web browsers that each expect a different type of response from web servers.  Huh?

Neural networks are pretty simple.  A network is a graph with weights: basically an ordered set of linear algebra operations.  Sure, with dropout and resampling we see amazing non-linear results, but the core of what the network is and does isn't that complicated.
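
To make that concrete, here is a tiny Python/NumPy sketch of what a trained two-layer network boils down to at inference time.  The shapes are made up (MNIST-ish) and the weights are random, just to show that a forward pass is nothing but ordered matrix math:

    import numpy as np

    # A two-layer MLP at inference time: ordered matrix multiplies
    # plus a nonlinearity.  Shapes are illustrative (MNIST-ish):
    # 784 inputs -> 128 hidden units -> 10 classes.
    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((784, 128)), np.zeros(128)
    W2, b2 = rng.standard_normal((128, 10)), np.zeros(10)

    def forward(x):
        h = np.maximum(0, x @ W1 + b1)  # linear op + ReLU
        return h @ W2 + b2              # linear op -> class scores

    print(forward(rng.standard_normal(784)).shape)  # (10,)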

So let’s say you started an MNIST project (cough https://github.com/sio2boss/dl-handwriting cough) that used Torch.  Cool.  Performed well?  Yeah.  Ran fast?  Yeah.  Now let’s put this code into production or deploy it to AWS.

Not so fast, guy.  Torch is in Lua…so have fun with that.  Maybe you run a Java shop, or you are a Python shop (you should have chosen TensorFlow), and running that fancy multi-layer neural network is suddenly not so easy.  There are metrics, alerts, containers, runbooks, and teammates to educate on how to deploy your app with this DL appendage.

Okay, so let’s say your team uses Java.  Just load your network and weights into MXNet…except there isn’t an API for that.

Okay, so let’s say your team uses Python.  Just load your network and weights into TensorFlow…except there isn’t an API for that.

Get the picture?  There is NO portability of neural networks, of any number of layers, to / from / between frameworks.  Here is the landscape of that problem [source: Nvidia.com].

History is doomed to repeat itself…we are in the Internet browser wars of the 1990s, but without an HTML specification.  We need one.  Badly.

My feeble attempt is called k5-spec.  The approach is pretty simple (sketched in code after the list):

  1. Use JSON to represent the network of layers (types / functions)
  2. Separate the weights by layer
  3. Store everything in one file via HDF5 (so any language can read it)
  4. Write a conversion utility
  5. Build native read / write support in frameworks
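
Here is a minimal sketch of steps 1 through 3 in Python with h5py.  To be clear, the layer names, JSON keys, and file layout below are my own illustration of the idea, not the actual k5-spec format:

    import json

    import h5py
    import numpy as np

    # Illustrative topology: JSON describes the layers (step 1).
    # These keys and names are hypothetical, not a real k5-spec schema.
    topology = {
        "layers": [
            {"name": "fc1", "type": "dense", "activation": "relu",    "shape": [784, 128]},
            {"name": "fc2", "type": "dense", "activation": "softmax", "shape": [128, 10]},
        ]
    }

    rng = np.random.default_rng(0)
    with h5py.File("model.k5", "w") as f:
        # Step 1: the network description rides along as a JSON string.
        f.attrs["topology"] = json.dumps(topology)
        # Step 2: weights are kept separate, one HDF5 group per layer.
        for layer in topology["layers"]:
            g = f.create_group("weights/" + layer["name"])
            g.create_dataset("W", data=rng.standard_normal(layer["shape"]))
            g.create_dataset("b", data=np.zeros(layer["shape"][1]))

    # Step 3: it's all one HDF5 file, so any language with an HDF5
    # binding (Java, Lua, C++, Python, ...) can read it back.
    with h5py.File("model.k5", "r") as f:
        net = json.loads(f.attrs["topology"])
        W1 = f["weights/fc1/W"][()]
        print(net["layers"][0]["name"], W1.shape)  # fc1 (784, 128)

A converter (step 4) is then just "read the source framework's model, emit this file," and native support (step 5) means the frameworks do that read / write themselves.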

Looking for the community to join in…