“Don’t tell my shoes that I am human, they still think I am a smartphone” (The Internet of Things explained to my mother… by Philippe GAUTIER & Nicolas FOGG)

Not only my mother!
The Internet of Things also explained to:
·        My elderly parents,
·        My boss,
·        My kids,
·        ...
The Internet of Things was the star at two recent conferences: Web’12 in Paris and the Consumer Electronics Show in Las Vegas. The Internet of Things covers a large area that can be divided into three major categories. The first category is the world of consumer electronics devices, created by manufacturers like Withings, Hapilabs, Bubblino… that have ‘always on’ connections to smartphone applications or directly to social networks. The second category consists of hardware and software platforms that allow the interconnection and assembly of these devices into collaborative networks. Pachube (now Cosm), Sen.se, Arduino and ThingWorx… have created solutions that fit into this category. Finally, the third category consists of the established professionals of the ‘legacy’ Internet, object identification (Auto-ID) and machine-to-machine (M2M) industries. Semiconductor manufacturers, telecom operators and start-ups are all currently working to connect physical-world objects to the virtual world accessible via the Internet.
The idea of understanding reality through sensors or automated object identification was established in the 1980s through the work of visionaries like Professor Sakamura of the University of Tokyo and Mark Weiser, the father of ubiquitous computing (UbiComp) at Xerox PARC. This early work paved the way for the ideas of Kevin Ashton, who coined the term ‘Internet of Things’ whilst working for Procter & Gamble and later co-founded the Auto-ID Center at the Massachusetts Institute of Technology. The widespread use of electronic tags (RFID) on mass consumption goods presented an opportunity to link them with additional value-added services, while the internet provided the medium for communication back to the end users.
The original concept has evolved. The spread of wireless networks and smartphones has increased the ability of these devices to connect and participate. The future outlook is staggering: it is estimated that by 2020 the planet will host seven billion humans and seventy thousand billion devices, a large percentage of which will be network enabled.
Enough isn’t enough
This multiplication of sensors and their connection to networks will lead to an explosion of information that needs to be captured, stored and processed. Traditional information systems are deterministic and ruled by Cartesian functional analysis. Planning in advance and modelling ‘what should be’, rather than ‘handling what is’, has been the focus of systems development to date. Similarly, our models for organization – economics, management, etc. – are based on basic analytical theory, which works well when the area of focus is small and parameters are few. However, it breaks down as the number of actors and data points in a system explodes.
The recent growth in Web 2.0 has been characterized by a philosophy of openness, community and collaboration. All participants – business, consumers, citizens – are now at the intersection of different organizational spheres: private, public and economic, and the boundaries between them are blurring.
In this context, the current generation of deterministic, high-performance and globally interconnected information systems can magnify and spread small failures of all kinds, leading to what Lorenz, of chaos theory fame, called “the butterfly effect”.
Biology teaches us that openness requires adaptation and adaptation requires autonomy. This autonomy is not so much the ability to operate independently as the flexibility to adapt to changes in the environment as they occur. Today’s computer systems are not autonomous, and are thus not capable of adapting without external direction. Humans remain the only actors in modern organizational systems that, when faced with uncertainty or the unforeseen, can use self-learning mechanisms in order to adapt.
Web 2.0 can therefore be defined as the period in which human actors have been able to express themselves on the Internet as autonomous and rational, while computer systems have revealed their deterministic, pre-programmed limitations.
In the 1950s and 60s the Sociotechnical School sought to demonstrate that the introduction of technology impacts an organization and its evolution over time. According to this school, failing to consider both dimensions of an organization, social AND technical, would be dangerous. We experience this theory in daily life: the human retains responsibility for (and is the victim of) the operational rigidity of most complex organizations. The worst examples include high-frequency stock trading, which in a few milliseconds can cause a financial crisis, faster than a human can see or control.
In this context the Internet of Things will only make matters worse. Increased sensory abilities and reactiveness of our software and objects will not make them more intelligent: a frog, even equipped with electronic prostheses, remains a frog in the world of frogs. What is true for the frog is also true for interconnected objects. Nevertheless, the Internet of Things is a real trend, and doing nothing will only catalyze the digital chaos. Autonomous information systems, also known as ‘artificial intelligence’ or ‘cybernetics’, remain the most promising solutions available, despite being forgotten by many computer scientists. The Internet of Things is thus an opportunity to redefine our relationship with automated systems… and is therefore a threshold.
We are faced with a paradox: our information systems and our ability to cope with massive complexity are immature, yet we are on course to continue “empowering” our objects, which could prove disastrous.
The avatar of the ‘horsemeat packaging’ read the system date… and worried…
An appreciation of the Internet of Things requires an interdisciplinary and systemic approach (http://en.wikipedia.org/wiki/Systems_theory): a problem must be recursively analyzed from several angles and then put into broader contexts. This type of analysis will result in answers that are as efficient and effective as possible, able to address problems at the subsidiary level of the organization (http://en.wikipedia.org/wiki/Subsidiarity)… that is to say, the level of the physical objects that ‘live’ within real processes.
In practice this will require the association of external software intelligence, especially for objects that do not have sophisticated embedded cognitive abilities (such as a vehicle’s ECU provides). It would be economically absurd to embed such intelligence in yoghurt packaging; however, the packaging can carry a barcode or an RFID tag that feeds data into the cloud-based intelligence with which it is linked. This linked intelligence, or ‘avatar’, allows physical objects to be represented in the virtual world. Just as with humans, it will not be possible to foresee all scenarios, but the avatar will guide the object through its operational context, just as a driver does with her car. This system of guidance may take input from sensors and geospatial measurements to interpret its surroundings, and use self-learning to continually adjust and create the desired outcomes, according to the objectives pursued. The coupling of avatar software and physical object, called a CyberObject, is a level of automation capable of making decisions and adjustments during the execution of processes under human delegation. The avatar forms a feedback loop between the physical and the virtual until they become one.
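To make the avatar idea concrete, here is a minimal sketch in Python; every name in it (Avatar, observe, decide) is a hypothetical illustration of the CyberObject coupling described above, not an API from any real platform:

```python
from dataclasses import dataclass, field

@dataclass
class Avatar:
    """Cloud-side software twin of a cheap physical object.

    The physical object carries only an identifier (barcode or RFID tag);
    the 'intelligence' lives here, in the virtual world."""
    object_id: str                                   # e.g. the EPC read from the tag
    goal: str                                        # the objective delegated by a human
    experience: dict = field(default_factory=dict)   # naive self-learning memory

    def observe(self, situation: str, context: dict) -> None:
        # Sensor readings, scans and geolocation arrive as events.
        self.experience.setdefault(situation, []).append(context)

    def decide(self, situation: str) -> str:
        # Guide the object through its operational context, as a driver
        # guides her car: reuse what worked, escalate the unforeseen.
        if situation in self.experience:
            return f"adjust behaviour towards goal: {self.goal}"
        return "escalate to a human (unforeseen situation)"

# The CyberObject is the coupling of physical identifier and avatar.
# The SGTIN below is the standard GS1 documentation example.
tub = Avatar(object_id="urn:epc:id:sgtin:0614141.107346.2017",
             goal="be sold before expiry")
tub.observe("shelf_scan", {"aisle": 7, "days_to_expiry": 3})
print(tub.decide("shelf_scan"))
```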
In many industries CyberObjects could become economic actors in their own right, both as physical objects and as the composite objects their avatars represent: services, businesses, capabilities, etc. Along with humans, the actors of Web 2.0, they will help create an internet of entities that are autonomous and capable of self-organization, of feedback-driven self-learning to adapt to new situations, and of self-completion (under human control).
Shall we find elsewhere what will be lacking here?
Whether the intelligence of these objects is centralized or distributed, the internet as it stands is not sufficient for their needs. It is likely that the internet will evolve into something quite different from what it is today. In sophisticated living creatures the glands, nerves and brain are not of the same design, nor do they produce the same data. Interpretation and response are often delegated to subsidiary levels, for example the spinal reflex when a hand touches a flame. Such structures allow for decentralized specialities that can at times substitute for other parts. Similarly, many protocols and network architectures are being created by an ultra-specialised industry that seeks to replicate nature and install the capacity required by the Internet of Things. The next step will require connecting all the small electronic devices, sensors and controllers that exist today, at low speeds and low power. This may even require giving them the capacity to generate their own energy.
The energy required to operate these information systems is a real problem. In 2012 data centers consumed 30 billion watts of electricity, the output of roughly 30 nuclear power plants (source in French: http://www.lemondeinformatique.fr/actualites/lire-datacenter-jusqu-a-90-de-l-energie-developpee-est-gaspillee-50605.html). These data centers are often powered by coal, with diesel as the backup source. The global demand for electricity is not fully covered by the generation infrastructure currently available, and in times of peak demand power outages occur, even in industrialized countries.
It is therefore reasonable to ask if there is an economic or ecological benefit to the internet of things. To understand the potential benefits, consider a tub of yoghurt, forgotten at the bottom of a store shelf only days from expiry. This tub could sense its fate and put itself on promotion. Linked to the store’s loyalty system, it could notify all loyalty-card customers currently in the store of the promotion via their mobile phones. The point-of-sale equipment would be similarly notified to honor the discount. The tub of yoghurt is saved from the dumpster, and the energy required to manufacture and market it is saved from waste. However, it is difficult to prove that the total energy and effort required to create and host the avatar-linked intelligence is less than the value of the waste it avoids.
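A hedged sketch of the decision rule such a tub’s avatar might run; the thresholds, identifiers and stub functions are all invented for illustration:

```python
from datetime import date

PROMOTION_WINDOW_DAYS = 3   # assumed threshold; a retailer would tune this
DISCOUNT = 0.30             # assumed discount rate

def promotion_for(expiry: date, today: date):
    """Return a discount rate if the tub should put itself on promotion."""
    days_left = (expiry - today).days
    if 0 <= days_left <= PROMOTION_WINDOW_DAYS:
        return DISCOUNT
    return None

def promote(tub_id: str, discount: float) -> None:
    # In the scenario above these would be calls into the store's loyalty
    # and point-of-sale systems; here they are simple stubs.
    print(f"loyalty phones in store: {tub_id} is now {discount:.0%} off")
    print(f"tills instructed to honor {discount:.0%} off {tub_id}")

discount = promotion_for(expiry=date(2013, 4, 5), today=date(2013, 4, 3))
if discount is not None:
    promote("yoghurt-tub-42", discount)
```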
The issue of data storage is also important. How will we store the vast quantities of data that the internet of things will create, when we already produced 1.8 zettabytes of data in 2011 and that figure is set to double on average every two years? (Source in French: http://www.01net.com/editorial/534919/1-8-zettaoctets-de-donnees-produites-et-repliquees-en-2011/. 1 zettabyte = 10^21 bytes…) The energy and storage requirements are prompting some to seek new and novel ways to store data. Biological DNA could not only allow very long-term storage, several thousand years if the conditions are right, but also very dense storage: a single gram of nucleic acid can contain more than 455 billion gigabytes (source in French: http://www.biofutur.com/L-ADN-l-avenir-du-stockage-de-donnees). Researchers at the European Bioinformatics Institute have already begun to conduct experiments proving this is possible.
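The orders of magnitude in this paragraph can be sanity-checked in a few lines, using only the figures cited above (the projection year is an arbitrary choice):

```python
ZB = 10**21                      # 1 zettabyte = 10^21 bytes

data_2011 = 1.8 * ZB             # volume produced in 2011 (cited above)
years = 10                       # project ten years out, doubling every two
data_2021 = data_2011 * 2 ** (years / 2)
print(f"projected 2021 volume: {data_2021 / ZB:.0f} ZB")             # ~58 ZB

# DNA density cited above: 455 billion gigabytes per gram
bytes_per_gram = 455e9 * 10**9
print(f"DNA needed to hold it: {data_2021 / bytes_per_gram:.0f} g")  # ~127 g
```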
What can we make of this? In Western culture, man is doomed to domesticate the ecosystem. Scientific progress is made through the application of technology to this goal and its associated problems. Once the local ecosystem is exhausted, it is still possible to acquire the necessary resources by exploring further, e.g. the colonization of space. In contrast, proponents of degrowth advocate the conservation of the environment’s limited resources and denounce the idea that technology has all the answers. More practically, technological advancement is only viable in the long term if we take care to optimize the use of our resources. The solution is not only to build new roads, but to optimize the vehicles that use them. Resource optimization will require a new set of design patterns for our information systems: we assert that transforming inert objects into actors with the innate ability to monitor, adapt and seek resource optimization will cost less, take less effort and produce better results than attempting to use current centralized and deterministic design patterns.
How shall I name my trousers?
This question is not as trivial as it seems: should we call them a cotton field? A coil of thread? A piece of cloth? A commodity? An order? In the internet of things, the naming and identification of objects is a core problem. It must be possible to distinguish one bottle of water from another, one instance of a provided service from another, etc. Today there exist many different identifiers: retail barcodes, mobile phone SIM identifiers, IP addresses, domain names, GPS coordinates, and so on. To name an object is to distinguish it and provide it with a role and an identity amongst the categories or concepts of a domain or process. In all communication it is essential to identify the transmitter and the receiver.
But the question of naming is not uniquely tied to the problem of identification. An object (thing, entity, concept, idea, service or resource) can only be definitively identified in the context of a transaction, situation or goal. For Lucky Luke a cup may represent a tool for drinking, but for Joe Dalton this same cup could represent a tool for digging a tunnel. This situational awareness is only possible at the lowest intelligent level: that of the autonomous actors involved in a particular situation or transaction. Naming an object is not just a means of classification; it also gives it a role (trousers protect me from the cold), and roles are played… by actors!
Barcodes have become a routine part of our daily lives, and recently a number of new data carriers have emerged: RFID, 2D barcodes (Flashcode, QR code, Mobile Tag, Ubleam, etc.), 3D barcodes, etc. These new standards can carry more information in a smaller unit, making it possible to identify even a single drug dose. They can link products or GPS coordinates to external services or web addresses, and they include facilities for scanning in large quantities or from large distances. There also exist many formal data-interchange definitions and APIs (interfaces that allow software interconnection) for these identifiers, allowing integration with information systems, provided that all the participants in the system agree upfront on the format and definition of the exchanged messages.
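As an illustration of such formal definitions, the sketch below parses a simplified GS1 element string of the kind these new carriers encode. The Application Identifiers used (01 = GTIN, 17 = expiry date, 10 = batch/lot) are real GS1 AIs and the GTIN is the GS1 documentation example, but the parser itself is a deliberately minimal sketch:

```python
GS = "\x1d"  # FNC1 separator, encoded as the ASCII Group Separator character

# AI -> (fixed length, or None for variable length; human-readable name)
AIS = {
    "01": (14, "GTIN"),          # product identity
    "17": (6, "expiry_yymmdd"),  # enables single-dose expiry tracking
    "10": (None, "batch"),       # variable length, terminated by FNC1
}

def parse_gs1(data: str) -> dict:
    """Parse a GS1 element string covering only the three AIs above."""
    fields, i = {}, 0
    while i < len(data):
        length, name = AIS[data[i:i + 2]]   # KeyError on unknown AI
        i += 2
        if length is None:                  # variable field runs to FNC1 or end
            end = data.find(GS, i)
            end = len(data) if end == -1 else end
        else:
            end = i + length
        fields[name] = data[i:end]
        i = end + (1 if length is None else 0)
    return fields

print(parse_gs1("01" + "09506000134352" + "17" + "140704" + "10" + "AB-123"))
# {'GTIN': '09506000134352', 'expiry_yymmdd': '140704', 'batch': 'AB-123'}
```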
Naming, identification and communication standards are inexorably linked, and the ultimate goal is the unique identification of all the objects on the planet. In some industries (logistics, FMCG, automotive and aeronautical) the EPC codification standard, designed by MIT and adapted by GS1 (the standards body that governs the existing barcodes on consumer goods), is in the process of being tested and adopted. It is able to link an object with its manufacturer, but requires the IP protocol to work. Some of the stakeholders (Cisco, Huawei, Alcatel…) are promoting the adoption of the IPv6 protocol. The IPv4 protocol is widely adopted across the internet, but its address range has been practically exhausted. In contrast, the address range of IPv6 is vastly larger. Jean-Michel Cornu explains: “if we covered the surface of the earth in a layer of sand fifty kilometers thick, and assigned an IPv6 address to each grain of sand, we would still have consumed only one two-hundred-billionth of the available addresses” (source in French: http://www.cornu.eu.org/texts/ipv6-et-adressage). However, IPv6 does not by itself establish a link between an object and its origin or nature. In addition, it has two major drawbacks: the IP protocol is verbose, and it is sensitive to link speed and latency, which also makes it energy hungry. Applying IPv6 to all objects in our environment is impractical, although the development of 6LoWPAN and other similar protocols should tackle this issue. There also exist other naming and identification systems, some free, some commercial, some with higher or lower levels of structure. Various parties with their own agendas and lobbyists promote these competing standards, and it is difficult to identify the best system on paper (control of the DNS root is also an underlying issue). It is in the interests of all these systems to interoperate; otherwise we will have ‘intranets of things’, isolated from one another. In addition, suitable systems will need to allow an object to change its identification as it travels through its lifecycle: manufacturing, distribution, purchase, consumption and recycling. Traceability throughout the lifecycle is critical for health and economic reasons (witness the recent lasagne horsemeat scandal), but it must operate within a privacy and authorization framework. The actor who possesses or uses an object at any time must remain fully in control. Would all concerned consumers be happy to advertise the fact that they own a sex toy, just in case the said toy must be recalled due to the presence of BPA?
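Cornu’s image is easy to check by order-of-magnitude arithmetic; the exact ratio depends entirely on the grain size assumed, but the conclusion that IPv6 dwarfs any physical inventory holds under any reasonable assumption:

```python
ipv6_addresses = 2 ** 128              # about 3.4e38 addresses
ipv4_addresses = 2 ** 32               # about 4.3e9, practically exhausted

earth_surface_m2 = 5.1e14              # Earth's surface area
sand_volume_m3 = earth_surface_m2 * 50_000   # layer fifty kilometers thick
grain_volume_m3 = 1e-9                 # assume 1 mm^3 per grain (fine sand)
grains = sand_volume_m3 / grain_volume_m3    # about 2.6e28 grains

print(f"IPv6 addresses per grain: {ipv6_addresses / grains:.1e}")  # ~1.3e10
# Even with this fine-grain assumption every grain could receive ten billion
# addresses; Cornu's "one two-hundred-billionth" implies coarser grains.
```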
It would be wrong to pin the potential downsides that society could experience solely on the technologies of the internet of things. By itself, technology is only a tool; any evil comes from the humans who use it. Moreover, the problem is more complex than it appears: even if a consumer does not advertise a specific ID, a marketing department could uniquely identify them by building a signature from the identifiable objects that the person is wearing or carrying.
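A sketch of the signature technique the paragraph describes: no single tag names the person, but the combination of tags they carry does. The function and identifiers are invented for illustration:

```python
import hashlib

def signature(tag_ids: set) -> str:
    """Pseudonymous fingerprint built from the tags a person carries.

    The watch, the shoes and the bag each identify only themselves, but
    their combination is nearly unique and follows the person around."""
    canonical = ",".join(sorted(tag_ids))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

monday = {"epc:watch:4421", "epc:shoes:0912", "epc:bag:7730"}
friday = {"epc:bag:7730", "epc:watch:4421", "epc:shoes:0912"}
assert signature(monday) == signature(friday)  # same person, re-identified
print(signature(monday))
```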
Relocate manufacturing?
The internet is a collection of resources that are defined by the two usual economic measures: utility and scarcity. The utility of a resource is related to its effectiveness in helping pursue a particular purpose, or to its versatility. There can be as many uses of a resource as there are actors using it; depending on the objectives pursued, the service rendered by the resource may vary. Thus the utility of a resource is a subjective concept that can only be measured according to the situation and actors concerned. Scarcity is directly related to the notion of availability: a thing can be abundant yet rare if accessing or using it is difficult due to extraction, manufacturing, marketing or regulatory constraints, etc. The internet itself can be useful and abundant in areas of high broadband penetration, or useful and rare in poorly serviced regional areas. These conditions can also change over time. Today, in many areas, the infrastructure of the internet is abundant; it can be accessed anywhere, at any time and by any means, and therefore has little economic value. Extracting economic value has become difficult for network operators, who must morph into service providers of additional capabilities (content, online services) to stimulate demand. In addition, in order to create scarcity, operators provide varying levels of capacity according to the price paid by the consumer.
In today’s economy, the price of an object corresponds to the cost of acquisition for its owner: this economic model is mainly based on property ownership. The reason is simple: the best way to access a resource that is regularly required (without knowing in advance exactly when it will be required) is to always have it available. Ownership is therefore a response to the problem of scarcity. Some high-value objects are, however, available through shared ownership, subject to prior organization: car sharing, bike hire, property leasing, transport, etc. In this case the price of an object is its rental cost, or usage cost, for a given period. The cost of the organization necessary to establish such a model is dissuasive: it is too complex to be applied widely to objects of lesser value.
But if we make objects capable, at their level and at low cost, of self-organizing in order to allow shared ownership (with its own conditions for access and sharing), the need to own these objects becomes less important. The challenge is then to delegate to autonomous objects the ability to self-manage, in real time, according to the objectives and constraints of different users. Ultimately, this means commoditizing the necessary software intelligence to enable the dissemination of trivial, but smart, objects such as a yoghurt tub in the supermarket. By generalizing object-actors, making them economic agents in their own right, it won’t be necessary, for example, to own a ladder that knows how to share itself. These CyberObjects will thus contribute to a transformation of our economy from one of consumption to an economy of usage (or economy of utility).
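A minimal sketch of what delegated self-management could mean for the self-sharing ladder; the class and its arbitration rule are hypothetical:

```python
from datetime import date

class SharedLadder:
    """Toy self-organizing object: it arbitrates access to itself."""

    def __init__(self) -> None:
        self.bookings = {}  # date -> user currently holding the booking

    def request(self, who: str, day: date) -> str:
        holder = self.bookings.get(day)
        if holder is None:
            self.bookings[day] = who
            return f"{who}: ladder booked for {day}"
        # A richer avatar would weigh each user's objectives and constraints;
        # failing that, it hands arbitration back to the humans (see below).
        return f"{who}: held by {holder} that day, please settle it between humans"

ladder = SharedLadder()
print(ladder.request("Alice", date(2013, 5, 4)))
print(ladder.request("Bob", date(2013, 5, 4)))
```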
A CyberObject capable of self-organizing will also be able to learn from feedback, if the intelligence with which it is provided permits (hence the need for an adaptable and scalable model of software design). This would allow the formalization of experience into knowledge that could be diffused, at a price, to other CyberObjects that will face similar situations in their lifetime. Thus, from simple beginnings as economic agents, CyberObjects can progressively create added value. They could buy and sell knowledge on public exchanges on the internet, thus contributing to, and commoditising, the knowledge-based economy. This is the second stage of the economic revolution. Through the gradual commoditisation of their dual goods/services nature, CyberObjects will specialize their behaviour according to the ecosystem in which they operate. This behaviour will be linked to systems of values, codes and rules… that is to say, the local culture into which they are plunged. To create a CyberObject will not only be to manufacture its physical part, but also to program and maintain its avatar in the future environments in which it will find itself. The manufacturers of these CyberObjects must therefore have a good understanding of the target ecosystem, which implies that they are immersed in it themselves: CyberObjects will thus encourage the relocation of manufacturing. In terms of social connection, the icing on the cake may come when an auto-sharing neighbourhood lawn mower, faced with a difficult arbitration, asks me to speak directly with a neighbour to resolve a problem ‘between humans’.
Governance in the internet of things
The term net neutrality is often confused with “open or free access” or “respect for privacy and anonymity”. There is also a lot of fear and uncertainty regarding the widespread use of unique RFID tags in consumer goods: they are easily readable at a distance by unauthorized third parties. In the internet of things, new sets of resources and capabilities will emerge while others disappear. The power relations that govern the current organization of actors and their use of shared resources will be transformed by the arrival of new cyber-actors. Different forms of organization, based on new behaviours and value systems, will emerge, rise and supplant those that already exist. It is difficult to predict these developments, and we cannot foresee the ethical implications until the effects are observed. This reactive approach is consistent with the history of the internet, where regulation comes after the effects of change have been observed and considered. Thus, adapted governance will need to cover the dynamic organization of a continually evolving set of resources and their access and sharing conditions. It will need to consider the intentions, behaviours and value systems of emerging players.
The scope is ambitious:
·        What system of values and ethics do we need for the internet of things?
·        How can we ensure the autonomy of actors without turning it into a free-for-all?
·        Who will be the overarching authority?... Nation states?... Private companies (e.g. the current issues between Google and some European states)?... Public citizens?...
·        With what means of observation, analysis, action or coercion shall this authority be provided?
Whatever the answer, the idea is to ‘guide’ the ecosystem rather than trying to ‘control’ it, and to avoid the creation of hegemonic controls that could turn into lock-ins. Consequently, each CyberObject will need to be able to adjust its behaviour according to the ecosystem in which it finds itself, or else find its access limited or denied (excluded from the private sphere of a person, for example).
On the other hand, standardization and regulation are the natural responses to these questions, and they are usually successful in a closed or slightly open system. However, attempting to create general rules of behaviour and interoperability in an environment of uncertainty, change and ongoing process discontinuity will be futile. To ‘control’ is to attempt to define a complete set of behaviours and outcomes for all actors, while in reality behaviours will be complex and sometimes chaotic. Standardisation is a way to achieve efficiency in functional niches, but it will not be suitable at a global scale.
In truth, implementing governance in the internet of things will be no easier than implementing it in human societies.
“Inanimate objects, do you have a soul?” (Alphonse de Lamartine)
The most sophisticated CyberObjects will have an awareness of their means (behavioural rules), their preferences (the ability to prioritise behaviours when faced with conflict) and their rationale (the goals they pursue and their ability to self-manage when needed). The behavioural autonomy of CyberObjects (innate or acquired through feedback) is a non-trivial concept and needs careful consideration. The existentialist Sartre explains that for inanimate objects, "essence precedes existence": the ‘paper cutter’ exists as a concept in the mind of the craftsman who must make it, even before the cutter is produced. As inert objects become associated with software avatars, they become actors (CyberObjects), that is to say, capable of inventing their own history. They will define themselves dynamically over the course of their lifecycle, and will then share the same status as human beings, for whom “existence precedes essence”, which for Sartre is the basis of the notion of freedom.
It is also impossible to avoid referring to the ‘technological singularity’: that moment in history when technology overtakes the capabilities of humans and begins to enact social and technological changes that we can no longer comprehend. By granting our own creations the uniquely human ability to define themselves, are we not committing hubris?
In any event, the ontological rupture induced by the Internet of Things highlights concepts that are not new (they appear in Antiquity and the Renaissance, e.g. http://en.wikipedia.org/wiki/Charles_de_Bovelles) and that are well known in Eastern religions and philosophies: according to Shinto tradition, every object has a spirit. But we must also bury the idea that mankind is destined, thanks to technology, to reach absolute knowledge (an idea underlying both positivist and transhumanist philosophies). Indeed, by creating a model of artificial intelligence in order to dispense it to inert objects, homo-informatics confronts the complexity and depth of the notions of intelligence, consciousness and knowledge. To create objects in our image is to focus on ourselves in a strange feedback loop, one where the observer is also the observed.
It is also an opportunity to place these CyberObjects in a broader perspective: “The major change that we are experiencing lies in the transformation of different forms of collective intelligence into a globally interconnected intelligence across the earth” (in French: Gilles Berhault, « Développement durable 2.0 – L’Internet peut-il sauver la planète ? », Edition de l’Aube, 2008). Vernadsky, Teilhard de Chardin and Robert Vallée are not far away, nor is the ‘Cybionte’ of Joël de Rosnay. To take back control of their own history, in the sense given by Sartre, humans must adapt to the new ecosystem that they are creating.
And, as noted by Jeremy Rifkin, this adaptation comes through changing thought patterns, at both the individual and collective levels. In the book “Devenez sorciers, devenez savants” (“Become sorcerers, become scientists”), Charpak and Broch wrote: "Decisions of the utmost importance must be taken by our societies in order to cope with the inevitable consequences of the presence on the planet of humans whose genetic capital, that of the caveman, probably did not change during the hundreds of thousands of years that separate us”. Thus the real challenge is raising our collective consciousness, without which, as Rabelais warned, “science without conscience is but the ruin of the soul”. CyberObjects can be mirrors of our consciousness that help us improve… or facilitate the loss of our human condition.


This article is also available in French on the same blog.
©Philippe Gautier (http://en.wikipedia.org/wiki/Philippe_Gautier) & Nicolas Fogg.
This text, in whole or in part, may be reused provided proper reference to its authors is made, with a link to the original source.
Copyright: This article is licensed under the GNU Free Documentation License (GFDL). You can download the full text, under the copyright conditions expressed above, available here.


Philippe Gautier is managing director of Business2Any (http://www.business2any.com) which specializes in the design of distributed intelligence systems.
The company, a pioneer in the internet of things, was the first to introduce the concept of actor-objects (CyberObjects), which has since been adopted by many thought leaders in the field.
Philippe has extensive experience as a director of information systems and was the first to implement all the technologies defined in the EPCglobal standard for GS1’s internet of goods.
He has received the GS1 2005 award for innovation, with two trophies (SME winner and special jury prize), as well as the PME 2006 trophy.
He specializes in the systemic methods of B-ADSc (http://www.b-adsc.com) and is the author of numerous articles in print, online and in academia.
He regularly takes part in conferences, roundtables, debates and radio programmes.
Finally, he is a founding member of the SEI (European Internet Society) and the principal author of “Internet of Things… Internet, but Better”, published by AFNOR in 2011 (ISBN: 978-2-12-465316-4).
Mail: pgautier (at) business2any.com

Nicolas Fogg contributed the English translation of this paper.
Nicolas is a software development professional and the Director of Technology and Development at Infomedia Ltd, a cloud-based provider of software for the automotive industry.
