True Edge Computing - Part 1

True Edge Computing

What does that even mean? In the world of software engineering, there isn't an established paradigm for it. You can't reach for the latest streaming technologies (Spark, Flink) or mobile and IoT SDKs to produce it.

But before taking a moment to say what True Edge Computing is... why would we want it? Edge Computing's benefit is either better performance or cost savings.

For better performance, imagine security checks happening on local servers and hardware devices instead of against a centralized server, saving hundreds of milliseconds, or whole seconds at peak load. In financial trading systems, teams try to save nanoseconds to make hundreds of millions more.

For cost savings, you reduce the number of central servers by making the edge clients/nodes do more of the work. Gaming and mobile are areas where this is huge.

So we want it! What is it? It's really just performing calculations in a distributed manner, client side instead of centrally at a core server farm. In mobile, for example, calculating the tax on a recent purchase can happen client side, giving the central servers one less thing to do. One calculation might not matter, but they add up quickly to millions or billions of calculations.
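
Here's a minimal, hypothetical sketch of that tax example in TypeScript. The tax-rate table, region codes, and function names are all assumptions for illustration; the point is only that the whole calculation runs on the client and never touches a central server.

```typescript
// Hypothetical edge-side tax calculation: no round trip to a central server.

interface LineItem {
  price: number;    // unit price in dollars
  quantity: number;
}

// A small tax-rate table cached locally on the device (assumed values).
const TAX_RATES: Record<string, number> = {
  "US-CA": 0.0725,
  "US-NY": 0.04,
};

// Runs entirely on the edge client.
function totalWithTax(items: LineItem[], region: string): number {
  const subtotal = items.reduce((sum, i) => sum + i.price * i.quantity, 0);
  const rate = TAX_RATES[region] ?? 0;
  return subtotal * (1 + rate);
}

// The device computes the total and only reports the result upstream.
console.log(totalWithTax([{ price: 19.99, quantity: 2 }], "US-CA")); // ≈ 42.88
```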

I am going to invent a protocol to handle this, called the "Razor's Edge" protocol. It's an application-level protocol that builds on top of other protocols, much as WebSockets does, except its job is to push calculation to the edge dynamically, with smart clients ("edge clients") and dumb clients ("base clients") that talk to centralized servers. A rough sketch of what that exchange could look like follows below.
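
To make that concrete, here's a rough TypeScript sketch of what such an exchange might look like over a WebSocket connection. Every detail here — the message kinds, field names, client classes, and the placeholder URL — is an assumption for illustration, not the actual Razor's Edge spec.

```typescript
// Hypothetical message envelope for a Razor's Edge-style protocol.
type RazorsEdgeMessage =
  // The client announces whether it is a smart "edge client" (can run
  // pushed calculations) or a dumb "base client" (the server does the work).
  | { kind: "hello"; clientClass: "edge" | "base" }
  // The server dynamically pushes a calculation to an edge client.
  | { kind: "compute"; taskId: string; op: "sales_tax";
      payload: { subtotal: number; region: string } }
  // The edge client sends back the answer instead of the raw work.
  | { kind: "result"; taskId: string; value: number };

// An edge client's side of the conversation (browser WebSocket API).
const ws = new WebSocket("wss://example.invalid/razors-edge"); // placeholder URL

ws.onopen = () => {
  const hello: RazorsEdgeMessage = { kind: "hello", clientClass: "edge" };
  ws.send(JSON.stringify(hello));
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data) as RazorsEdgeMessage;
  if (msg.kind === "compute" && msg.op === "sales_tax") {
    // Do the work locally; only the answer goes back to the server.
    const rate = msg.payload.region === "US-CA" ? 0.0725 : 0; // assumed rate
    const result: RazorsEdgeMessage = {
      kind: "result",
      taskId: msg.taskId,
      value: msg.payload.subtotal * (1 + rate),
    };
    ws.send(JSON.stringify(result));
  }
};
```

The key idea is that the server learns each client's class at connect time and only pushes calculations to edge clients; base clients keep the traditional model where the server does the work.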

The MVP for the protocol will be in Erlang, for reasons I'll describe in Part 2.