You can download the full presentation as a PDF file (registered and copyrighted!): see the link at the end of the post.

Existing Information Systems are shaped the same way as the organizations they support.

  • Basically via « top-down » approaches that start from the main functions or processes…
  • then sub-processes,
  • down to procedures, then instructions.

In those approaches, “time” is a linear dimension and a common reference for every process.

Methods are essentially deterministic and analytic, based on hierarchies and on causal links between processes.

Most of the time, those approaches are used to address limited scopes:

for example, « closed » or « semi-open » value chains such as companies, groups of companies, or single or grouped IT systems.

Within those loops, interoperability and interrelationships are well known and controlled.

Over the last 10 years, transversal technologies such as EDI, EAI, SOA, etc. were proposed to address the issue of “opening up Information Systems”.

So were “industry-driven” standards that are supposed to unify approaches and solutions (EPCGlobal is one among many others).

However, existing Information Systems are not part of the real organizations they support; they are a parallel, abstract model of them.

They actually describe « how to do » - that is, MEANS - instead of « for which purpose » - that is, OBJECTIVES.

UML, as a description language, is a perfect example of this.

Since the Information System is disconnected from operations, periodic adjustments are needed to close the gap between the data it contains and reality.

Object identification is spreading, either through barcodes or through new wireless technologies such as RFID.
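To make barcode identification concrete, here is a small illustrative sketch of how a scanner's software might validate an EAN-13 code. The mod-10 rule with alternating weights 1 and 3 is the standard GS1 check-digit scheme; the function names and sample usage are my own invention, not part of the original presentation.

```python
# Illustrative sketch: validating an EAN-13 barcode. The mod-10 rule
# (alternating weights 1 and 3) is the standard GS1 check-digit algorithm;
# function names and the sample number are invented for illustration.

def ean13_check_digit(first12: str) -> int:
    """Check digit for the first 12 digits of an EAN-13 code."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_ean13(code: str) -> bool:
    """True if `code` is 13 digits whose last digit matches the checksum."""
    return (len(code) == 13 and code.isdigit()
            and ean13_check_digit(code[:12]) == int(code[12]))
```

For example, `is_valid_ean13("4006381333931")` returns `True`, while changing any single digit breaks the checksum.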

Subsequently, the links between reality and virtuality are multiplying, although neither Information Systems nor the Internet are suited to processing huge amounts of varied, contextual data.

Identified objects cross many different « open value chains » during their lifecycle.

Since the actors (enterprises, etc.) differ, objects face different « repositories » and « registries ».

In addition, access points to the Internet are also multiplying, mainly due to the development of mobile phones.

This elevates the « citizen-consumer » to a full actor in those open loops.

All of this, combined, builds up a kind of « complexity » that exposes the inability of Information Systems to manage or pilot daily operations.

Current Internet Governance, inherited from those former approaches, is therefore based on a « control of means »:

  • Control over the namespace (through ICANN or, more recently, EPCGlobal)
  • Control over the infrastructure (via the DNS system)
  • Common definitions or models for exchange protocols and data schemas (examples: EPCIS or TAG DATA TRANSLATION)
  • Control over Internet Service Providers (locally or by country) based on semantic rules
  • Etc.

Existing Information Systems still remain unable to manage « ad hoc » or contextual processes that were not foreseen in advance.

Therefore, both on the current Internet and in exchanges between IT systems, « human beings » are still the initiators of the exchange.

They still control the format, the content, the circumstances and the moment of the exchange.

Web 2.0 did not change that at all: the « contextual » or « unpredicted » part of the exchanges is still handled by human brains (plenty of them).

However, the purpose of the Internet of Things is less to facilitate exchanges than to create an environment in which sharing becomes possible under satisfying conditions.

“Satisfying conditions” means « acceptable » to any of the autonomous actors, whether they are human beings, Information Systems, Objects, etc.

Contrary to « exchanging », « sharing » is purely contextual, adapted to a specific situation, and can be unique.

How to create such an environment?

By giving Information Systems « intelligence » and « autonomy in decision making ».

One of the main stakes is to support unpredictable situations in which actors with no prior knowledge of each other are able to share.

In such a case, each actor enters the « sharing » with its own objectives and tries to make them converge (example: the seller intends to earn money, the buyer to satisfy a need; the act of buying is the convergence of their aims).
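The seller/buyer example can be sketched as a toy negotiation between two autonomous actors pursuing distinct objectives. All numbers, names and the simple concession rule below are invented for illustration; real autonomous negotiation would be far richer.

```python
# Toy sketch (all parameters invented): two autonomous actors with distinct
# objectives - the seller wants at least seller_min, the buyer will pay at
# most buyer_max - concede step by step until their aims converge (a sale)
# or are proven incompatible (no sharing).

def negotiate(seller_min: float, buyer_max: float,
              ask: float, bid: float,
              step: float = 1.0, max_rounds: int = 1000):
    """Return the agreed price, or None if objectives cannot converge."""
    for _ in range(max_rounds):
        if bid >= ask:                      # objectives have converged
            return (ask + bid) / 2          # the act of buying
        ask = max(seller_min, ask - step)   # seller pursues "earn money"
        bid = min(buyer_max, bid + step)    # buyer pursues "satisfy need"
        if ask == seller_min and bid == buyer_max and bid < ask:
            return None                     # no overlap: no sharing possible
    return None
```

For example, `negotiate(80, 100, 120, 60)` converges on a price of `90.0`, while `negotiate(90, 70, 100, 50)` returns `None`: the seller's minimum exceeds the buyer's maximum, so no convergence of aims is possible.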

Giving Information Systems either « autonomy in decision making » or « intelligence » therefore amounts to paying attention to each actor's « objectives » instead of focusing on « means ».

The « Internet of Things » will likely be:

A « non-deterministic and fully open cyberspace » in which autonomous, intelligent entities will fully interoperate and will be able to self-organize depending on the context, circumstances or environment.

Here, “autonomous” includes “self-referencing”.

This will give them the ability to « share » with any other player by making their respective « objectives » converge.

It will be “event-driven”, « bottom-up », and will act at every “subsidiary level”.

It will also be considered a « COMPLEX SYSTEM » due to the huge number of links and interactions between many different actors.

It will be able to integrate new actors with no difficulty.

In this « Internet », the meaning of an event will be based on the context of the event and not on a pre-determined or syntax-based model: this will also be a Semantic Web.

This Internet of Things will be able to work without any « common standard », since no such standard could address the billions of exceptions generated in such a complex environment.

Last but not least, it will be made of billions of parallel and simultaneous events: time will therefore no longer be a common linear dimension, but will depend on each process or entity.

This Internet of Things will accordingly be based on massively parallel IT systems.
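The idea that time is proper to each entity rather than a shared linear dimension is well known in distributed computing. As an illustration (a standard technique, not something proposed by this post), here is a minimal sketch of Lamport logical clocks, where each entity keeps its own clock and only a partial order of events exists:

```python
# Illustrative sketch of Lamport logical clocks: each entity keeps its own
# logical time; clocks are only reconciled when messages are exchanged, so
# there is no global linear time, only a partial order of events.

class Entity:
    def __init__(self, name: str):
        self.name = name
        self.clock = 0                 # local logical time, not wall time

    def local_event(self) -> int:
        self.clock += 1
        return self.clock

    def send(self) -> int:
        self.clock += 1
        return self.clock              # timestamp travels with the message

    def receive(self, msg_clock: int) -> int:
        # merge rule: jump past the sender's view of time, then tick once
        self.clock = max(self.clock, msg_clock) + 1
        return self.clock

a, b = Entity("A"), Entity("B")
a.local_event()          # A's clock: 1
t = a.send()             # A's clock: 2; message carries timestamp 2
b.local_event()          # B's clock: 1
b.receive(t)             # B's clock: max(1, 2) + 1 = 3
```

Note that A and B end with different clock values (2 and 3): each entity lives in its own time, exactly as the paragraph above suggests.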

In the « classical » or « functional » approach « 1+1=2 ».

Systemic approaches say that « 1+1>2 ».

In other words: « the whole is bigger than the parts ».

Why? Because associating individual elements into one new organization makes new trends or properties emerge.

Those new properties come from the whole organization itself, not from its constituent elements.

Example: combining 2 hydrogen atoms and 1 oxygen atom makes water, and water is more than the mere sum of 3 atoms.

Any accurate vision of the Internet of Things will therefore be based on the addition of all the components: the Web + the objects + the “Information Systems” of companies, citizens, public bodies, etc.

Accordingly, only a Systemic approach will be useful to tackle the issue.

It is thus necessary to change our approaches and to switch to systemic ones that combine both « top-down » and « bottom-up » methodologies.

On one hand, systemic approaches give us “adapted tools” to tackle the stakes of the upcoming « complex system » that the « Internet of Things » will be.

On the other hand, they help us perceive the emergence of various new complex organizations - let's call them offspring - whose objectives could interfere with that system.

With current methods, such interference would just be seen as chaos and could not be addressed by the actors. On the contrary, systemic approaches give them the means to understand and properly manage those offspring.

In such a context of interference, it is easy to understand that ruling, codifying, naming or standardizing are vain tasks, since they can never be completed!

“Standardization” can only support exchanges and facilitate the interoperability of systems and organizations; it cannot be a prior and necessary condition for better management.

Current efforts such as EPCGlobal, IETF, etc. are therefore « auxiliaries » and catalysts, but should not be seen as the main path towards achievement.

Systemics tells us that understanding the behavior of a complex system is only possible through a global apprehension of the whole system in its context, rather than by studying each part separately.

In such a system, emerging trends or latent organizations are quite common, even though not foreseeable.

Most of the time, those kinds of organizations are auto-catalyzed through « positive feedback » and only become visible once they turn into a new self-organization.

As complexity increases, there is a risk that new trends or organizations enter an auto-selection process, then a « lock-in » one, and accordingly become « hegemonic » (this is the case for “de facto standards”).
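The « positive feedback » → « lock-in » → « hegemonic » sequence can be illustrated with a toy adoption model (all parameters invented): each new adopter picks a standard with probability proportional to the square of its current share, i.e. increasing returns.

```python
# Toy model of « lock-in » through positive feedback (parameters invented):
# adopters choose standard A with probability proportional to the SQUARE of
# its current share, so a small early lead self-amplifies until one standard
# becomes « hegemonic » - a de facto standard.

import random

def final_share(steps: int = 5000, seed: int = 1) -> float:
    """Final market share of standard A after `steps` adoptions."""
    rng = random.Random(seed)
    a, b = 1, 1                          # one early adopter of each standard
    for _ in range(steps):
        p_a = a * a / (a * a + b * b)    # stronger-than-proportional feedback
        if rng.random() < p_a:
            a += 1
        else:
            b += 1
    return a / (a + b)
```

Run it with a few different seeds: the final share almost always ends up very close to 0 or to 1, rarely in between - the system has locked in, even though which standard wins is decided by early chance, not by merit.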

Systemic approaches are well-suited to analyze such processes and can give adapted means to do so.

They will give keys to apprehend the underlying « objectives » of those emerging trends or organizations, and a better understanding of their future impact (positive vs. negative).


The purpose here is to create a system of « common values » based on ethics: « all that is not forbidden is allowed, but can be forbidden later on if its effects are considered negative ».

This control could be inspired by the “US legal system”, based on « jurisprudence », as opposed to the “European one”, often based on the « Napoleonic Code » with its exhaustive, a-priori approach.

It is therefore not appropriate to govern the Internet of Things « by issuing decrees »; it is, however, necessary to monitor it to detect potential side effects or inappropriate outputs.

This « new governance » would be backed by a common and ethical “value scheme”.

Since it refers to a complex and continuously evolving system, this “value scheme” would therefore be a « living » one.

Accordingly, it would be adapted periodically:

  • This would give the ability to properly analyze the « objective » behind each emerging organization (versus its « means »)
  • and to compare it with the global objectives of the whole system

To put it differently: the purpose would be to qualify each new emergence in order to avoid chaotic or hegemonic situations.

Such a value scheme would go beyond what is known as « constitutions », since it would address:

  • Constituting values (the ones “everyone agrees on”)
  • Behavior values (depending on the context, local rules)
  • Economic values (added value, rarity, etc.)

To adapt the governance, it is necessary to change, on a wide scale, our approaches and our analyses on the upcoming IoT.

A single value scheme, as explained in the previous slides, could be managed by a community of « wise people », renowned at the worldwide level and not necessarily belonging to the « Information System » community.

They would be autonomous, as free as possible from any constraint or control, and would take on the « ongoing management » of the living « value scheme » in order to:

  • Integrate the emerging organizations and limit accordingly any pervasive chaos
  • Weaken or limit the effects of discrepancy in objectives

We first need politicians to become aware of those stakes and mechanisms in the upcoming IoT in order to put all this in place (« value scheme » + « ethical steering committee »):

it looks like we are far from it today, since global Governance is still based on the « control of means » in many activities: the current “economic crisis” is a perfect example…

…What if we had used an « objective-driven » control instead?