Imagine a future where robotics consists of a symphony of customized, specialized embedded systems that talk to nothing but hundreds of thousands of actuators, sensors and each other.
Every robot is different, and each one is next-to-impossible to understand as a whole, complete entity.
Breaking into one to control it would be nearly impossible. Disabling one might be feasible, but seizing control of something with that much distributed complexity may not be.
If you build a system of small machines, what do you get but complexity?
But don't you also get security? Not job security, since people may well reject you for writing code they do not understand and that does not follow convention. What you get is code with natural resilience to deliberate attack.
Complex customized one-off systems are very hard to attack, because they cannot be easily understood.
Drain your enemy's resources. Just a thought. I could be wrong.
If you write a program with a lot of small, meaningful partitions, it becomes very hard for people and AI to grok it well enough to up-convert it to another language.
If it is complex, AI cannot translate it, and people just give up and rewrite it.
Yet if you (yourself) know it intimately, your main challenge is simply remembering where you put stuff, or which partition is the best fit for extending or modifying its behavior.
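To make the idea of small, meaningful partitions concrete, here is a minimal sketch in C (the partition names, values, and filter are my own illustrative assumptions, not anything from a real system). Each partition owns a narrow domain and exposes a tiny interface; no single function reveals the behavior of the whole.

```c
#include <assert.h>

/* Partition 1: sensor smoothing. Owns only its filter state;
   nothing outside this partition can touch filt_state. */
static int filt_state = 0;
int filt_step(int raw) {
    /* integer exponential-style smoothing (illustrative) */
    filt_state = (3 * filt_state + raw) / 4;
    return filt_state;
}

/* Partition 2: actuator limiting. Knows nothing about filtering;
   its whole domain is clamping a command into a safe range. */
int limit_cmd(int cmd, int lo, int hi) {
    if (cmd < lo) return lo;
    if (cmd > hi) return hi;
    return cmd;
}

/* Composition: each partition is small enough to hold in your head,
   but the system's behavior lives in how they are wired together. */
int control_step(int raw) {
    return limit_cmd(filt_step(raw), 0, 100);
}
```

Each piece is trivially readable on its own; an outsider trying to translate or subvert the whole still has to reconstruct the wiring between many such pieces.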
And yet these machines are comparatively few in number next to the small, 25-cent computers that run a single assembly or C program and sit in practically every device we own.
They are embedded systems: anonymous machines planted everywhere, each responsible for a small domain of logical operation.