
Tech accountability (Dan Shefet)



Article source: THE BLUE DOT. 21st Century Learning Spaces. Published in 2020 by the United Nations Educational, Scientific and Cultural Organization. © UNESCO MGIEP 2020. This publication is available in Open Access under the Attribution-ShareAlike 3.0 IGO (CC-BY-SA 3.0 IGO) license.


Should Tech be held accountable for the foreseeable misuse of their products?

Dan Shefet, Lawyer at the Paris Court of Appeal

(p. 94-96)

Has the frantic pace of technological progress caused us to arrive at a junction where we need to develop a new philosophical, ethical and legal standard to measure and attribute accountability for the nefarious consequences of tech and especially the internet?

Who is responsible for harmful content and for the use of the net’s infrastructure to peddle illicit goods, facilitate incitement to violence and enable exchanges between perpetrators of criminal acts? Who is responsible for societal harm caused by unfettered use of AI (leading to bias, inequality and societal disruption)? Obviously, the authors of iniquitous content and acts are responsible, but what about the companies that develop and commercialize (monetize) the technology enabling and facilitating such crimes?


Should Tech be held accountable…?

 What does this mean in terms of development responsibility for algorithms and applications?

To what extent should Tech be held accountable for war crimes or crimes against humanity if their technology abets incitement to genocide?

Should we include advertisers and other funders in the ambit? Should we, the users, be held responsible for not flagging patent lawlessness? Is there a moral obligation to discontinue the use of reprehensible technology – in other words, a moral boycott obligation? These are some of the pressing questions we need to address urgently. The net and tech have enjoyed unrestrained celebration for the last 25 years. Unfortunately, these questions were not raised at the inception of our current tech revolution; had they been, the world would most probably be a much safer and more egalitarian place.


It is not too late; however, the conversation can no longer be put off.

Due to the net’s unique combination of scale and third-party use, nefarious content involves at least a triangle: the author, the infrastructure provider and the victim. As mentioned above, we could also include advertisers, governments and users (not an exhaustive list).

Infrastructure providers may be divided into platforms, telecommunications operators and infrastructure owners. Search and bespoke applications may be included in this category for the purposes of addressing the questions raised. For each actor, specific accountability standards need to be developed.


In addition, accountability as such should also be broken down into moral, civil and penal liability where different intentional elements apply.

One of the most important and challenging problems facing net and tech accountability is that of complicity.

When are the different parties in the triangle complicit in the crime originally committed by the author/perpetrator? Complicity may be willful, and thus associate the accomplice with the nefarious objective sought by the perpetrator, or it may consist in knowledge of the crime, willful ignorance, recklessness or negligence with regard to the use made of the means put at the disposal of the perpetrator (for instance, a call for violence or harassment made on social media).


Finally, third party accountability (complicity) may be based on a duty of care.

When confronted with demands for accountability, Tech typically responds with two lines of argument which are deemed to justify immunity.

The first line of argument is what we may call the “quantity argument”; it goes like this: We cannot be held accountable for what our users post and how they utilize our technology. There are simply too many users and too much content for us to be able to follow, let alone properly analyse and block (if necessary).

This argument is then combined with the second “justification”, which is the protection of free speech:

Tech has taken it upon itself to evangelize unrestrained speech, almost to the point of proclaiming that it is not in the business of making money but of bringing speech to oppressed peoples around the world, allowing them to attain the highest level of human fulfilment and societal nirvana (in the shape, of course, of western democratic ideals, which, in Tech’s self-appointed role as crusader for “good”, just happen to coincide with an insatiable business model based on big data and the absence of compliance costs and legal consequences).

A closer analysis of these “justifications” will, however, expose their inherently fallacious reasoning. First of all, artificial intelligence can already achieve astounding success in proactively intercepting hurtful content, not only blocking it before it reaches dissemination and virality but also identifying accounts (if not the authors themselves) that repeatedly generate such content and could easily be closed down.

In addition, the quantity argument is essentially flawed because success can never justify harm!

Imagine if it were the drug industry: there would be a public outcry if toxic drugs were sold by a manufacturer that argued, in its defense, that its quality control system could not keep up with the enormous demand.

Success could never justify harm.

As far as the second argument is concerned (“The Knights of Free Speech”), Tech has been very successful in manipulating public opinion by spinning a narrative that free speech necessarily means that they should be shielded against accountability by strong immunity laws.


Clearly, this is but mythology devoid of any intrinsic truth: Free Speech has become the single human right most often trampled upon in order to attain political or economic control.

Free Speech was never meant to allow incitement, defamation and manipulation.

Free speech deserves better than Tech’s self-anointed messianic role. It is one of our most critical foundational human rights and Tech’s dilution of it is shameful.

The time has come for the international community to develop a comprehensive standard of accountability which will allow us all to reap the benefits of the empowerment and inclusiveness promised by Tech at its nascency.

How can the dogma of Free Speech ever justify doing harm with “actual knowledge”? Time and again we see the Tech Titans acquire actual knowledge of toxic content (either because they intercept it themselves or because it is flagged by users or governments), yet they rarely take appropriate action. They often decide not to take it down, arguing that they are not the judge and are in any case immune to such obligations (thereby invoking, in particular, a shield under US law – the infamous Section 230(c) of the Communications Decency Act).

It is intolerable that anybody – whether “immune” or not – should be allowed to continue to cause harm after acquiring actual knowledge, simply by referring to some sort of law which, from a technical point of view, may shield them against civil or criminal action.

Actual knowledge amounts to an intention to associate oneself with the crime when action against it could easily be taken – yet is not taken.