There is great confusion about the introduction of computational thinking in schools. I do not know whether it stems from Italian culture’s historically poor familiarity with the logical-mathematical side of our brain (and of the history of thought), from a perhaps well-meaning but misdirected attempt to close the gap with other nations, or from the scarcity of teachers prepared and courageous enough to face a paradigm shift that, in any case, can no longer be postponed.
Data are the liquid equivalent of ye olde means of production.
The very definitions of property, of rights and accountability, of a social contract, of the role of the State are once again being challenged: a profound and multidisciplinary effort to understand what is happening cannot be postponed any longer.
Kevin Kelly, co-founder of Wired, puts forth a Manifesto declaring that data are indeed common goods. There is interesting food for thought to be found inside: the idea of “movage” as a value generator (a somewhat more operational version of Harari’s data flows?) in opposition to the good old (albeit admittedly static) “storage”, the recognition of the symmetry between control and accountability, the focus on metadata.
It is self-evident that «solitary data is worthless», and that data acquire value only once they are connected to other data, within a given context and according to certain market rules. Nevertheless, I am bound to say that the commons model proposed by Kelly, which essentially forbids private ownership and thus renders privacy a fallacy, fails to grasp all the complex dimensions and facets of what data really are.
For example, it does not capture the fact that neither non-rivalry (the circumstance whereby the cost of providing an asset to one more individual is zero) nor non-excludability (the impossibility of barring other individuals from consuming a given asset), the cornerstones of the commons paradigm, is guaranteed by all data “supply chains”: cryptography is a thing, after all.
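The cryptography aside can be made concrete with a toy sketch (mine, not Kelly’s, and deliberately not production-grade crypto): once a dataset is encrypted, only key holders can consume it, so the non-excludability premise of the commons model need not hold for data at all. The dataset contents and the XOR one-time-pad scheme below are illustrative assumptions.

```python
# Toy illustration: encryption makes data *excludable*.
# A one-time-pad XOR is used only for brevity; real systems use vetted ciphers.
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key; the same function encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, key))

dataset = b"lat=45.07,lon=7.69,heart_rate=72"   # hypothetical personal data
key = secrets.token_bytes(len(dataset))          # held only by the data owner

ciphertext = xor_cipher(dataset, key)            # what any "commons" would see
assert ciphertext != dataset                     # unreadable without the key
assert xor_cipher(ciphertext, key) == dataset    # key holders can still consume it
```

The point of the sketch is only that exclusion from consumption is technically cheap, which undercuts treating non-excludability as a given property of data.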
Nor does it capture the deep significance of the emerging trustless, decentralized models, wherein value is generated by data scarcity itself.
Nor does it capture what makes the business models of the “surveillance capitalists” unavoidable in a globalized scenario, as long as an adequate regulation is lacking.
Nor does it capture (if not superficially) the deep link between the raw data, the inferred data, and the inference engine (i.e., the algorithm).
Nor does it capture the prerogatives and protections connected to the fundamental rights.
And so on and so forth.
In short, dear Kevin, there is still work to be done. Yet credit must be given when it is due: addressing that question was crucial. Now let’s start to dive deeper.
First published on: Eventual Consistency
Archived version: Internet Archive