by Leon Rosenshein

No-Code

Lately I've been reading a lot about no-code platforms and the "end of programming", but I don't buy it. Not just because I'm a software engineer and want to keep my job, but because it's just another bubble. We've seen this happen before, in all sorts of fields.

First, as I wrote last year, no-code is just another domain specific language (DSL), albeit a visual one. And done well, they're incredibly useful. LabVIEW and Mindstorms are great examples; any visual form builder is another. If you're building something visual, then doing it visually is the way to go. For well-understood situations where customization is limited, it makes a lot of sense.

Second, to use an automotive analogy: when cars were new, you needed to be your own mechanic. You had to fix or fabricate things as you needed them, and only people who could do that (or could afford full-time people to do it for them) had cars. This peaked in the '40s and '50s, when the dream was to get an old Ford and make it better than it ever was. Today, cars are (mostly) commodities. The majority of people in the US can operate one, but fixing and modifying them is left to the professionals.

Third, someone needs to build those visual DSLs. Doing one well is non-trivial. They take time, effort, and a deep understanding of the problem space. And even then, it's best to have an escape hatch that lets people actually modify what's happening underneath.

Back in the Bing Maps days we had a tool called IPF, the Image Processing Framework. It was designed and built for embarrassingly parallel tasks, such as image processing, color balancing, image stitching, and mesh building. You could define steps, the tasks within them, and the dependencies between steps. It was the second-generation framework, and one of the things we added was visual pipeline building. Since it was Microsoft, the tool for this was Visio: different shapes represented different things, and nesting them inside each other and drawing lines between them managed containment and dependencies. You could also attach custom or required values to any shape in the form of a note, and they would get translated into runtime parameters: things like required cores/memory, timeouts, retries, and what to do on failure.
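The heart of a framework like that — run every step whose dependencies have finished, fan the work out in parallel, repeat — can be sketched in a few lines of Python. The step names below are invented for illustration; this is not IPF code:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each step maps to the steps it depends on.
pipeline = {
    "color_balance": {"ingest"},
    "stitch": {"ingest"},
    "mesh": {"color_balance", "stitch"},
}

ts = TopologicalSorter(pipeline)
ts.prepare()

batches = []
while ts.is_active():
    ready = ts.get_ready()      # every step whose dependencies are done
    batches.append(set(ready))  # these could all run in parallel
    for step in ready:
        ts.done(step)           # a real framework would wait on workers here

print(batches)
# → [{'ingest'}, {'color_balance', 'stitch'}, {'mesh'}]
```

The visual tool's job is to produce a graph like `pipeline`; the framework's job is everything after that.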

And it was great when things were simple. When blocking out a pipeline, you could put in the big boxes, draw some lines, set the overarching parameters, and move things around to see what happened and whether the pipeline could be made more efficient. We did a lot of that, and for pipelines of fewer than 10 steps we made quick progress.

But as things got more complex and the pipelines got bigger, it got pretty cumbersome. For many things we ended up falling back to hand-editing the XML the visual tool generated for the details, and just used the visual tool to move the big blocks and dependencies around.
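I won't reproduce the real IPF schema here, but the generated XML was presumably along these lines — every element, attribute, and value in this sketch is invented for illustration:

```xml
<!-- Hypothetical sketch of a generated pipeline definition; not IPF's real schema. -->
<Pipeline name="ImageryBuild">
  <Step name="ColorBalance" cores="8" memoryGB="16"
        timeoutMinutes="60" retries="3" onFailure="Abort">
    <DependsOn step="Ingest"/>
  </Step>
  <Step name="Stitch" cores="16" memoryGB="32"
        timeoutMinutes="120" retries="2" onFailure="Retry">
    <DependsOn step="ColorBalance"/>
  </Step>
</Pipeline>
```

Editing a file like this by hand scales to hundreds of steps in a way that dragging the equivalent boxes around a Visio canvas just doesn't.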

There's no question that no-code and visual tools have their place, and we should use them in those places. You could get all the information presented in RobotStudio from tables, graphs, and images, but playing time-series data back visually is just more efficient and makes more sense. Add in the ability to swap out a node in an AXL graph, run the simulation again, and see what happens, and it can be a magical experience. That's no-code at work, and we should definitely do it. But getting to that point requires a lot of code, so I don't see the end of coding any time soon.