Opacity hurts. I was in a Sovrin IoT call when I heard the phrase “juicy telemetry.” A digital twins product leader was bemoaning all the really good data held back by the manufacturers of equipment. Data they needed to properly model their twins, to keep them current, to validate their assumptions. For good and bad reasons, the makers of wind turbines and cars and ship engines and weather sensors choose to hide data.
Some of it is raw sensor data, like an unprocessed image that’s turned into a human pulse rate. When you’re combining data from diverse manufacturers and generations of equipment (like just about everybody), understanding how raw data is collected, processed, and interpreted means everything to the consistency and semantic meaning of your aggregate data set.
Other times it is metadata about the device itself, like the versions of the operating system and apps running on the device, useful for calibrating data sets against data quality. Sensors from the 2018 generation calibrated ocean temperatures to standard X, but those made after 2023 were tuned to standard Y.
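As a sketch of why that metadata matters: a data consumer who knows each device’s calibration standard can normalize readings to a common baseline before aggregating them. Everything here is hypothetical for illustration, including the field names, the standards, and the correction offsets.

```python
# Hypothetical sketch: normalizing temperature readings from
# mixed-generation sensors using calibration metadata.
# Field names and correction values are invented for illustration.

RAW_READINGS = [
    {"device_id": "buoy-17", "calibration": "standard-X", "temp_c": 14.2},
    {"device_id": "buoy-42", "calibration": "standard-Y", "temp_c": 14.9},
]

# Per-standard offsets to a common baseline (made-up numbers).
CORRECTIONS = {"standard-X": 0.3, "standard-Y": 0.0}

def normalize(reading):
    """Adjust a reading to the common baseline via its calibration metadata."""
    offset = CORRECTIONS[reading["calibration"]]
    return {**reading, "temp_c": reading["temp_c"] + offset}

normalized = [normalize(r) for r in RAW_READINGS]
```

Without the calibration field, the two readings above would be silently averaged as if they were comparable, which is exactly the failure mode opacity creates.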
And it is increasingly useful to understand the robots inside our robots, the components that have their own information systems that affect data quality and provenance. Nested dolls in conversation.
Another dimension of device opacity is less juicy and more about accountability. Things are windows, portals through cyberspace to other parties. Do you have any idea how hard it is to get clarity from a system about who owns it? Who it talks to? Where its data goes? Who regulates it? Hard.
Digital twin operators and others who consume data from devices spend vast amounts of time wrangling data into shape. This is common in data science too; I’ve heard the figure of 80 percent of effort mentioned in passing by several folks.
But manufacturers have become a data bottleneck. By choice, sometimes.
For expediency, like trying to preserve device battery life or conserve data consumption.
Sometimes out of fear. That a rival might copycat them. That a whistleblower might point out errors. That a lawyer might seek discovery.
Occasionally out of greed, hoping to extract fees for premium data or premium analysis in the future. Or to secure service revenue by obscuring information needed to repair. Off a device already paid for by a customer.
So what can product managers do about this?
No, really! What can someone who wants to address this issue do at the earliest stages in product conception and design to make this better?
Wider Team are experts in decentralised identity, helping clients assess risks, identify opportunities and map a path to digital trust. For more information please connect on LinkedIn or drop us a line at email@example.com.