After watching, mouth agape, as an eccentric mob stormed the US Capitol yesterday, the unionization of Google feels like the perfect atonement (as far as Masse und Macht is concerned, at least).
Digital rights is a very broad term that encompasses a wide gamut of areas: from relatively new ones, such as access to digital media, electronic devices, and communication networks, to the reframing and realization of existing rights (privacy, freedom of expression, citizenship, and so on) in the new setting brought on by the digital revolution.
The Electronic Frontier Foundation identifies six “issues”, i.e. sectors of monitoring and intervention:
- free speech: the “ability to use the Internet as a platform for free expression through law, technology, and activism”;
- privacy: in which regard “respect for individuals’ autonomy, anonymous speech, and the right to free association must be balanced against legitimate concerns like law enforcement”;
- creativity and innovation;
- transparency: which amounts, ultimately, to ensuring that governments respect the civil liberties of their citizens;
- global policymaking;
- security: including “privacy and anonymity, DRM, censorship, and network neutrality”.
Other frameworks do exist, and, depending on which one we choose to apply, our mileage may vary. But it goes without saying that such topics, or, generally speaking, digital rights as a whole, can be traced back to fundamental human rights, both from an individual standpoint and from a social one. Nor is it just a matter of accessing and using computers and digital devices: for, as Italo Calvino wrote in his Six Memos for the Next Millennium back in the late ‘80s,
It is true that software cannot exercise its powers of lightness except through the weight of hardware. But it is software that gives the orders, acting on the outside world and on machines that exist only as functions of software and evolve so that they can work out ever more complex programs. The second industrial revolution, unlike the first, does not present us with such crushing images as rolling mills and molten steel, but with “bits” in a flow of information traveling along circuits in the form of electronic impulses. The iron machines still exist, but they obey the orders of weightless bits.
Humankind has always used the outside world as an extension of its intelligence. Social structures on one hand, and artifacts (culture, language, machines) on the other, have always affected our ability to think, improving it more and more, although not always in a linear fashion (paradigm shifts, as Kuhn used to call them, do happen once in a while). “Unlike most animals, man does not adjust to his surroundings but rather transforms those surroundings according to his needs” (Stanisław Lem, Summa Technologiae): indeed, first and foremost, man transforms his surroundings in order to think. Which is exactly what Lev Vygotsky’s and Alexander Luria’s idea of a social brain already foreshadowed.
The invention of writing, Gutenberg’s printing press, and the digital revolution have all given a powerful boost to this process; today, indeed, we inhabit the so-called infosphere, an environment populated by informative entities. Technology itself continues to be used as a source of add-ons to the basic features nature endowed us with: artificial intelligence itself will hardly ever be of any use should we prove unable to overcome that uncanny mix of fascination and repulsion which fills us when we fantasize about machines gaining the upper hand over us.
The computer is indeed “the new Golem”, as Giuseppe O. Longo aptly dubs it; yet the worry that artificial intelligence could displace humankind from dominance on our planet is most probably misplaced. AI will benefit us in the same way that previous technologies have: by working hand in hand with us, as an extension of our brain (the homo technologicus paradigm), in a smarter and smarter fashion. As Michael I. Jordan, one of the leading figures in machine learning, writes in a short essay:
We need to realize that the current public dialog on AI—which focuses on a narrow subset of industry and a narrow subset of academia—risks blinding us to the challenges and opportunities that are presented by the full scope of AI […]. This scope is less about the realization of science-fiction dreams or nightmares of super-human machines, and more about the need for humans to understand and shape technology as it becomes ever more present and influential in their daily lives.
Even more broadly, and perhaps a bit ironically, the digital revolution is not just about “digital stuff”, at least not in the classical meaning of the word. New organizational structures, new subcultures, such as the open-source communities, are emerging, which are by now only incidentally linked to the software development lifecycle, and are busy pioneering every field of human activity. The traditional boundaries of an individual (and hence of identity) are blurring more and more into something wider than the usual set of physical and social attributes.
The normative attempt at a definition of identity tries to be as unambiguous and general as possible: the ISO/IEC 24760-1:2011 standard (A framework for identity management) indeed defines identity as a “set of attributes related to an entity”. In the context of information technology, this translates into a set of data that define us. Thus, claiming rights related to our identity implies claiming digital rights related to our data.
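The ISO wording lends itself to a very direct translation into code. The following is a minimal, hypothetical sketch (all class and attribute names are mine, not part of the standard) of identity modeled as a set of attributes attached to an entity:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the ISO/IEC 24760-1 definition:
# an identity is a "set of attributes related to an entity".
@dataclass(frozen=True)
class Attribute:
    name: str   # e.g. "email"
    value: str  # e.g. "alice@example.com"

@dataclass
class Identity:
    entity_id: str
    attributes: set[Attribute] = field(default_factory=set)

    def add(self, name: str, value: str) -> None:
        """Attach one more attribute to this entity's identity."""
        self.attributes.add(Attribute(name, value))

alice = Identity("urn:entity:alice")
alice.add("name", "Alice")
alice.add("email", "alice@example.com")
```

Seen this way, every digital right discussed below is, concretely, a right over entries in such a set.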
In Europe, there are important milestones. The so-called informational self-determination (or informationelle Selbstbestimmung, in German) is the principle that
[…] in the context of modern data processing, the protection of the individual against unlimited collection, storage, use and disclosure of his/her personal data is encompassed by the general personal rights of the German constitution. This basic right warrants in this respect the capacity of the individual to determine in principle the disclosure and use of his/her personal data. Limitations to this informational self-determination are allowed only in case of overriding public interest.
Thus reads the Volkszählungsurteil, a fundamental decision of the Bundesverfassungsgericht (the Federal Constitutional Court), dated 15 December 1983, whereby the fundamental right to informational self-determination was established as a direct consequence of the general personality right and human dignity (i.e., as a constitutional right). Informational self-determination, ultimately, amounts to “the right of the individual to decide what information about himself should be communicated to others and under what circumstances” (Alan Westin, Privacy and Freedom).
In a press release dating back to 2012, the European Commission established the following definition of personal data:
any information relating to an individual, whether it relates to his or her private, professional or public life. It can be anything from a name, a photo, an email address, bank details, your posts on social networking websites, your medical information, or your computer’s IP address.
At the same time, several key principles concerning personal data (and the relevant digital rights) were stated:
- a single set of rules on data protection must be valid across the EU;
- people must have “easier access to their own data and be able to transfer personal data from one service provider to another more easily” (data portability);
- the right to be forgotten, which “will help people better manage data protection risks online”, must be granted;
- independent national data protection authorities must enforce the EU rules at home.
The General Data Protection Regulation, or, more statutorily, the “Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data”, coming into full force on May 25, 2018, was developed according to such guidelines. It repeals (i.e., replaces) Directive 95/46/EC, also known as the Data Protection Directive, by introducing several important changes:
- increased territorial scope: the GDPR “applies to all companies processing the personal data of data subjects residing in the Union, regardless of the company’s location”;
- penalties: “organizations in breach of GDPR can be fined up to 4% of annual global turnover or €20 Million (whichever is greater)”;
- consent: “must be clear and distinguishable from other matters and provided in an intelligible and easily accessible form”;
- breach notification: mandatory where a security breach is likely to “result in a risk for the rights and freedoms of individuals”;
- right to access: i.e., the right for data subjects to obtain from the data controller confirmation as to whether or not personal data concerning them is being processed, where and for what purpose;
- right to be forgotten: “entitles the data subject to have the data controller erase his/her personal data, cease further dissemination of the data, and potentially have third parties halt processing of the data”;
- data portability;
- privacy by design: “the inclusion of data protection from the onset of the designing of systems, rather than an addition”;
- data protection officers: “must be appointed on the basis of professional qualities and, in particular, expert knowledge on data protection law and practices”; even more importantly, they “must not carry out any other tasks that could result in a conflict of interest”.
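Two of the rights above, the right to access and the right to be forgotten, map onto concrete application logic. The following is a deliberately simplified, hypothetical sketch (the in-memory store and function names are mine; a real implementation would also have to reach backups, logs, and third-party processors):

```python
# Hypothetical in-memory sketch of two GDPR data-subject requests:
# the right to access and the right to be forgotten (erasure).
# Real systems must also cover backups, logs, and downstream processors.

personal_data = {
    "subject-42": {"name": "Jan Kowalski", "email": "jan@example.com"},
}

def handle_access_request(subject_id: str) -> dict:
    """Confirm whether the subject's data is processed, and return a copy."""
    record = personal_data.get(subject_id)
    return {"processed": record is not None, "data": dict(record or {})}

def handle_erasure_request(subject_id: str) -> bool:
    """Erase the subject's personal data; True if anything was removed."""
    return personal_data.pop(subject_id, None) is not None

print(handle_access_request("subject-42"))
print(handle_erasure_request("subject-42"))
print(handle_access_request("subject-42"))  # no longer processed
```

The point of the sketch is the asymmetry: access is a read that must always answer, while erasure must propagate everywhere the data ever went, which is what makes the right to be forgotten so hard to honor in practice.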
Such fundamentals, although of a technical nature on the surface, indeed have a major and immediate impact on our lives, whilst calling, as I wrote in the aftermath of the Facebook-Cambridge Analytica scandal,
for everybody to acquire a basic awareness of the many interests which underlie the digital revolution, as well as a basic literacy in the tools we can leverage in order to claim the right to our privacy (which ultimately amounts to our identity) and the right to choose exactly what we want to share, with whom, for how long, and how.
Unfortunately, the Polish route to a satisfactory implementation of the GDPR has been quite a controversial one. In January, the Ministry of Digitization (Ministerstwo Cyfryzacji) stated that “it would be difficult for small businesses to give all this information to customers”, referring to article 13’s obligation to tell people where their data are being collected, for how long they will be stored, and what rights they have concerning such data. Moreover, some timeline-fiddling (which, as far as Poland is concerned, is affecting and modifying more than 150 different Acts covering various sectoral regulations) made observers fear “a custom-made implementation”.
On March 28, the Ministry published a new draft, constituting the necessary supplement to the general provisions of the EU regulations. In this document the Ministry appears in favour of creating an independent supervisory body that will manage the challenges posed by the EU. On April 5, finally, the legislative proposal on personal data protection (Rządowy projekt ustawy o ochronie danych osobowych) started its journey through the Polish parliament, and is, at the time of writing, in the process of being transmitted to the President of the Senate (the entire iter legis can be followed here).
Meanwhile, Poland must overcome its flawed attitude towards basic civil rights. A report on the freedom of the press in 2017 by the U.S.-based NGO Freedom House warns of “the government intolerance toward independent or critical reporting, excessive political interference in the affairs of public media, and restrictions on speech regarding Polish history and identity, which have collectively contributed to increased self-censorship and polarization”. And it was as recently as 2016 that Poland’s parliament passed a new surveillance law giving “secret services and police authorities fast access to citizens’ Internet and telecommunication usage data without prior review or approval from a judge”.
First published on: PoloniCult
Archived version: Internet Archive