Artur de Matos Alves*

THE UTOPIA OF TRANSPARENCY: PLATFORM HUMANISM AND TECHNOCRACY IN FACEBOOK'S TRANSPARENCY DISCOURSE

Abstract. This article introduces the concept of 'platform humanism' to examine the discourse on communicative transparency and social accountability adopted by online intermediaries and service providers. By critically analysing Facebook's adoption of humanistic themes in the context of its corporate communications along with more opaque social responsibility practices, this article intends to provide empirical and critical insights into the political economy of online platforms. It identifies the transparency discourse as a key ideological element of the digital platform economy and characterises Facebook's transparency discourse as outwardly dominated by a humanistic communitarian terminology while relying heavily on quantification, arguments for self-regulation, and implied consent practices.

Keywords: transparency, online service providers, communication, discourse analysis, Facebook, techno-utopia, online platforms

Introduction

"Transparency" has become a catchall term for a range of practices pertaining to accountability in governance. Public and private institutions alike are concerned with public image and messages surrounding their 'brands'. Government accountability and anti-corruption measures promoted in the name of greater transparency are in turn presented as a gain for democracy. Multinationals tout their availability to self-regulate or to provide transparent information about their processes as protections for their clients and as proof of their socially responsible operations. Structured rules for accountability and oversight in public and private governance, often as social responsibility practices, have been introduced at the national and international levels. The technology sector, and in particular online service providers (OSPs), have adopted transparency as a core element of their social responsibility discourse, in which the notion is linked back to the utopian representations of the digital world prevalent in the early stages of the massification of the Internet. In the context of the current worldwide debate on Internet regulation and governance, casting a critical light on the discourse of transparency allows a better understanding of how it is deployed in order to delineate positions regarding the future of the digital world.

In recent years, transparency has been a defining feature of the trust model put forward by OSPs as they face both governmental and consumer pressure to be more forthcoming in their approach to user data and surveillance. The transparency discourse simultaneously allows OSPs to offer users a measure of reassurance as to the protection of their rights, and to address regulatory and other risks pertaining to their corporate image and branding (arising from public scandals, for instance). Facebook is of interest in this regard since it has been in the public eye for the last decade as a fast-growing online intermediary with a vast public and the corresponding responsibility for its users' data.

* Artur de Matos Alves, PhD, Professor, TÉLUQ, Université du Québec, Département des sciences humaines, lettres et communications, Québec, Canada.
Further – as highlighted by studies of the political economy of online communication (Bilić, 2018; Mosco, 2009; Dijck, 2013; Fuchs, 2017; Sandoval, 2014) – the company reveals the challenges in managing the risks that arise in digital communication (namely security, privacy and consent issues, but also risks stemming from systemic moral hazards such as content management practices and information manipulation). These risks are typically addressed by the company (and other OSPs) through the quantified disclosure of data, which is (as will be seen below) more akin to a procedural self-regulation than to a significant engagement with the concerns of users and the online commons.

This article introduces the term "platform humanism" to refer to the construction of Facebook's discourse around the language of human rights and privacy. "Platform humanism" points to the somewhat paradoxical discursive nexus – walking a line between humanistic mission and commercial power – deployed by OSPs and digital platforms, in which the users' rights and interests are presented as fully aligned with, and protected by, the OSPs themselves against the abuses of governments and the overreach of regulation. By critically analysing the 'platform humanism' discourse in the context of the corporate communications and social responsibility practices of OSPs (specifically Facebook, as one of the most successful and far-reaching companies in the digital world), against its ideological and socio-political backdrop, this article intends to provide empirical and critical insights into the political economy of online platforms. It specifically identifies the transparency discourse as a key element of the ideological framework for the digital platform economy.

Theoretical framework

This article posits that the concept of transparency applied by Facebook in its reports and practice enacts both its strategic priorities and its analysis of the context's communication challenges. The practice of transparency consists of (institutional) behaviours putting this representation and conceptual work into practice. Thus, transparency discourses constitute an apt object of analysis for understanding the ideological dimensions of transparency. One can analyse the background features and highlight the presuppositions of the representation by looking into the choice of procedures, their reach and the discursive presentation of the agents. In order to examine the discursive practices of Facebook as a service provider and platform, and to assess their ideological elements, this research relies on multimodal critical discourse analysis, also known as MCDA (Machin and Mayr, 2012; Fairclough, 2012; Mayr, 2008; Van Dijk, 2005). MCDA is particularly useful for disclosing relevant discursive and rhetorical choices in verbal and non-verbal communication. Critical discourse analysis attempts "to 'denaturalise' representations" (Machin and Mayr, 2012: 9) by unveiling their effects through the study of connotation, implicature, metaphor, modality and iconography in the target texts. CDA aims to uncover the ideological and political construction of representations by social actors – meaningful choices expressing underlying structural conditions – through both normative and explanatory approaches (Fairclough, 2012: 9). It also investigates how discourses are operationalised and enacted in the world via concrete actions and initiatives, or via interactional affordances (ibid.: 12).
CDA contributes to the critical contextualisation of the disclosure of data by attempting to understand how disclosure and non-disclosure relate to the prevailing socio-political conditions as well as to the enunciator's self-interest, and therefore cannot be taken as a simple reflection of intentionality or truth (Maingueneau, 2014: 58). It thus considers which data are disclosed and emphasised, in what manner and on which conditions (Van Dijk, 2005: 32).

The corpus of the analysis presented in this article comprises transparency reports and public statements published by Facebook and its representatives between 2013 and 2018. The main source is Facebook's "Newsroom", the company's official corporate weblog. This choice is justified in the CDA context since it presents an opportunity to study Facebook's discourse in its diachronic and synchronic dimensions, that is, in both its lexical and ideological evolution, and in its dialectical relation with ongoing debates on matters of governance and social responsibility. In other words, it is in the weblog that the company's transparency discourse is presented. In "Newsroom", the keyword "transparency" appears 80 times between 2009 and December 2018, more prominently since 2016. The keyword "consent", on the other hand, appears in just 13 search results (11 of which date from 2017 onwards). These 93 blog entries constitute the main corpus of this analysis of Facebook's transparency discourse, along with the company's Transparency Report and Community Standards portals. The portals were also analysed in their data-driven and iconographic dimensions, as well as in their role in operationalising the discourse set out in the weblog. Other elements drawn from media sources – reports, interviews and statements from the company and its representatives – were helpful in framing and contextualising the main corpus.

The aim is to present a focused critical analysis of Facebook's transparency discourse, particularly its political and ideological dimensions as symptoms of the company's policy and economic strategies. Transparency discourse is taken to be a form of communication as well as of world-making. It exhibits formal characteristics (disposition, coherence and articulation of materials) along with verbal elements (statements, comments, explanations, interpretations in written and spoken form, along with datasets) and non-verbal elements (graphs, images, pictures and tool design). The concepts of transparency outlined and compared in the following section shed light on how transparency can be interpreted as a form of mediation allowing both disclosure and concealment in communication.

Online transparency and social responsibility

There is a vast body of literature on the role of transparency as part of the larger issues of the social responsibility of online service providers as custodians of the online sphere. Issues of social responsibility, intellectual property, privacy and the public interest are addressed as matters of regulation and governance, without separating social responsibility and transparency – in their legal and economic aspects – from the context of communication and the concentration of power and intellectual property on the global scale (Kohl, 2012; Fenster, 2006; Sandoval, 2014; Smyrnaios, 2016; Gillespie, 2010; Perset, 2010).
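For illustration only, the keyword count described in the corpus paragraph above can be reproduced mechanically once the weblog entries have been collected. The following is a minimal sketch in Python; the local directory of plain-text posts and the one-file-per-post layout are assumptions made for the example, not part of the study's actual procedure or any Facebook-provided tooling.

```python
# Hypothetical sketch: count how many locally saved "Newsroom" posts
# mention each keyword at least once. The directory name and the
# plain-text, one-file-per-post layout are illustrative assumptions.
from collections import Counter
from pathlib import Path

KEYWORDS = ("transparency", "consent")

def posts_mentioning(corpus_dir: str) -> Counter:
    """Return, for each keyword, the number of posts containing it."""
    hits: Counter = Counter()
    for post in sorted(Path(corpus_dir).glob("*.txt")):
        text = post.read_text(encoding="utf-8").lower()
        for keyword in KEYWORDS:
            if keyword in text:
                hits[keyword] += 1
    return hits

if __name__ == "__main__":
    # Run on a local snapshot of the weblog; with a complete archive this
    # would yield counts of the kind reported above (e.g. 80 and 13).
    print(posts_mentioning("newsroom_posts"))
```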
In order to frame the discussion of online service providers, their importance to global communication and the political economy of the content and advertising industries, researchers have addressed different aspects of the roles OSPs play in this system. For example, Sarah T. Roberts refers to "commercial content moderation" to underline the often-underplayed editorial role of OSPs as well as their use of human alongside algorithmic moderation processes (Roberts, 2018, 2016). Others point to the surveillance apparatus put in place for commercial purposes (Zuboff, 2015; Trottier, 2016), which has strengthened the overall trend towards the generalisation of surveillance technologies (Gandy, 1989; Lyon, 2005). OSPs have also been studied as the main actors in the design and development of the "platform economy" or the "platform society" (Gillespie, 2010; Dijck, Waal and Poell, 2018; Williams, 2015) and of the techno-social assemblage to which algorithmic governance and Big Data have given rise in the last decade (Diakopoulos, 2014; Rahwan, 2018). Emphasis has been placed on the rhetorical and ideological aspects of the platform economy (Pozen, 2018; Karatzogianni and Matthews, 2018; Dijck, 2013).

The ideology of transparency has been shown to play a major part in this restructuring of the online sphere, marked by a transition to algorithmic accountability and surveillance as governance strategies (Ananny and Crawford, 2018; Rouvroy and Berns, 2013). In this context, transparency is analysed as data disclosure or demonstrative activity in the context of the platform economy, with the product of that activity of disclosure lacking the ethical coherence of communicational reciprocity or consent (Beer, 2015; Bilić, 2018; Noble, 2018; Roberts, 2018). Coupled with this is the presupposition of a specific type of audience – one with the literacy and knowledge to engage with the ideological underpinnings as well as the quantified element (Introna and Nissenbaum, 2000; Turilli and Floridi, 2009; Christensen and Cheney, 2015; Muir, 2015). Transparency-as-ideology also figures prominently in Breton's critique of the "utopia of communication" (Breton, 1997). Different aspects of this idea emerge from the changes to communication technologies brought about by individualised mass communication: direct electronic democracy and accountability; global citizenship and responsibility; freedom of information; freedom of expression and of identity; the emergence of individualised media; instantaneous communication across the globe; and, finally, political and economic transparency in the global system. Olivier Aïm points out that transparency is a perceived feature of the Internet as a technology and of the content of communications through that medium, as well as an ideological substrate already in place in parliamentary democracies and market capitalism (2006: 33). Ideologically, then, the discourse of transparency has utopian overtones and is linked to the strong impact of information and communication technologies over the last 50 years, notably as a ready-to-use ethos for public consumption.1

1 The strongest criticisms of transparency target a radical version of transparency.
At the core of this criticism lie the ideas of self-disclosure and quantified measurements, seen as symptoms of weakened political, communicational and social bonds. In particular, transparency is regarded as eroding privacy and trust, fostering insincerity and voyeurism alike (Beygaert, 2011; Gallot and Verlaet, 2016). Byung-Chul Han denounces transparency as a false levelling ideal of conformity setting itself against the principles of social trust and the inherent ambivalence of trust in communication and political processes (2015: 2–5). In this sense, transparency infiltrates social life, encouraging risk-aversion (controlling perception and opinion) and short-term techno-solutionism (using quantified data, communication technologies and ancillary resources to signal transparency and sincerity).

Amitai Etzioni shows that the prevalent definition of transparency is limited to information disclosure to the public by an institution (2018: 182). In his own distinction between informal ("communitarian") and formal ("social") transparency, the former is voluntary and relies on sharing information, potentially with added interpretive elements calling for communitarian engagement in governance (Etzioni, 2010: 395). Formal transparency, contractual in nature, can be equated with a regulatory effort aimed at a specific shared understanding of the social role of institutions within a given socio-political model (for example, democratic governance and supervision). Whereas communitarian transparency retains the affective dimensions of trust and consent, its formal counterpart is mandated and enforced (2010: 391). In this sense, strong transparency is necessarily linked to external regulation. For Etzioni, the fact that regulation (and with it strong transparency, as opposed to the self-regulatory kind) has fallen out of favour is indicative of the genealogy of transparency and its ideological content. The transparency ideal fits with an atomistic view of society in which the empowerment of the individual keeps other political and economic powers in check. Regulation is undesirable in this perspective because of the perceived delegation of decision power from the individual to the state (Etzioni, 2018: 186–87). In short, being transparent is presented as superior to being subject to regulation, and self-regulation is equated with the performative dimension of transparency. For example, insofar as transparency practices do not ascribe responsibility or sanction institutional behaviour, they provide no insight as to "how they make decisions, or the results of their actions" (Fox, 2007: 667). Extensive quantitative datasets, along with discursive framing, are employed as sense-making devices (Hansen et al., 2015). This leaves open the question of the value of information for individual or collective action, as opposed to the regulatory and enforcement approach (Etzioni, 2018: 188–96).

A noteworthy criticism of OSP transparency practices addresses the quantified character of the process: transparency is not held as a communicational value, nor as a property of the relation between the parties, but as a linear process of transmission to which the notion of responsibility (and, therefore, asymmetry) is ideologically attached. This communicational linearity of transparency initiatives such as reporting and data rendition provides very little by way of feedback mechanisms between stakeholders (Fenster, 2006: 892–893, 2015: 153; Ananny and Crawford, 2018: 983).
In reality, Fenster argues, these limitations arise from a cybernetic view of transparency (2015: 152), and the notion of a rational, information-processing public in the era of communicative, platform or surveillance capitalism runs counter to the fact of the saturation of the public sphere with information. This saturation, even if it does not entirely contradict the possibility of a rational deliberative online sphere, strains the belief in an ideal public capable of decoding and acting (ibid.).

In short, most transparency-based communication is transmissive rather than dialogical. A model of communicative transparency would require a transactional process and even a dimension of involuntary transparency (whereby information is forced out into the open) facilitated by regulatory measures and public oversight. Ultimately, conflating transparency with surveillance or control assumes the existence of an audience and a guiding interpretive narrative (Beygaert, 2011: 63), as well as a shared performative stance (Gallot and Verlaet, 2016: 214). Status differentials are thus unavoidable (for example, between consumers and corporations, citizens and governments, spectators and actors). The adherence to the norms of transparency generates a performance which, while mediated by the same social norms and possibly accelerated by the use of technologies, does not provide a radical form of visibility. Transparency does not automatically lead to a levelling of social relations or the universal disclosure of secrets.

The genealogy and rhetoric of Facebook's approaches to transparency

This section analyses Facebook's discourse on transparency over the years. Examples are drawn from the analysis of the "Newsroom" weblog and the Facebook Transparency Report (FTR) pages. These sources, along with public statements and media reports, illuminate how the company constructs its social responsibility and, in particular, how it shapes 'transparency' in a quickly mutating context.

From its inception at Harvard University as TheFacebook, and its early predecessor Facemash, the platform seemed to rely on the users' willingness to assent to a simplified view of sociability and consent. Paramount to this view was the users' trust that their data would be safely kept in a 'neutral' digital communication platform. This notion of transparent custodianship has been the object of several inflections. The first initiative may be seen as part of the company's effort to be transparent about its legal obligations to disclose user data to the state (law enforcement and surveillance agencies, as well as the courts). Facebook Transparency Reports (FTR)2 are available at a dedicated site, as are the Ad tools and Community Standards (Sonderby, 2017; Facebook, 2018c). Data on government requests have been available since January 2013 and are published every semester. In 2017 and 2018, the company was challenged over multiple revelations about the platform's
role in allowing the spread of fake news and the undue use of user data by third parties before and during the 2016 Brexit referendum and the US presidential election (Cobbe, 2018; Hindman, 2018; Koebler, 2018). Facing close scrutiny of its business model, Facebook initiated a privacy and transparency overhaul. The company published the platform's Community Standards and, in a separate part of the FTR, disclosed the related enforcement numbers and procedures (Facebook, 2018a, 2018b). It is particularly interesting to note the recontextualisation operation performed by the name "Community Standards" – a set of norms and rules drafted behind closed doors with minor user input. Facebook published the full community standards and content moderation policies in April 2018 (Facebook, 2018b), 11 months after they were leaked to the Guardian newspaper (Hopkins, 2017).

2 Interestingly, this page was called "Government Requests" until 2017. As seen below, the title change is clearly related to a shift in focus away from surveillance issues and towards advertising source disclosure. Contrary to Google, whose transparency reports date back to 2011 and have since become more detailed (Alves, 2016), until the change in designation and the corresponding shift towards "reinforcing [Facebook's] commitment to transparency", the data released pertained only to Facebook's responses to those requests (Sonderby, 2017, 2018).

Facebook's discourse revolves around the word "community". Content standards and reporting tools are presented as community-oriented in both their wording ("Community Standards") and their operation – for example, community reporting and flagging are tools available to users to indicate to Facebook content violating the terms of service or content policies. Facebook's transparency discourse strongly suggests adherence to an informal, communitarian view of transparency, to use Etzioni's terms. In practice, however, the company relies on the quantification of formalised processes of transparency – in both the case of external demands for user information and that of platform content reviews.

It is significant that the company insists on presenting itself in terms of a homogeneous sociability on the global scale. In fact, the abstraction "community" obscures the processes that shape decisions about platforms, while displacing responsibility onto a shared common space that is largely determined by the company.3 Until recently, the distinction between advertising and other targeted paid content was particularly opaque, and voluntary disclosure by Facebook relies on the creation of quantification tools and databases that require significant interpretive work by the user in order to be effective as transparency tools (Facebook, 2018b).

3 Facebook's discourse acknowledges neither the critical distinction between platform and community nor the mediating role of the technosocial assemblage in the creation of a multiplicity of spaces and shared experiences (Feenberg and Bakardjieva, 2004; Alves, 2016). The concept of community is polysemic and its applicability to online networks has been criticised in scholarship (for example, Barney, 2004; Couldry, 2014).

Transparency initiatives have been largely reactive. They are course-corrections, continuing the pattern of a mismatch between the 'community' discourse of platform humanism and accountability to society. In the wake of the 2016 US elections, the company ran a campaign to contain and control the narrative of transparency in order to shape it as a public relations initiative. The recent transparency initiatives have followed the Cambridge
Analytica scandal, the #metoo movement, and external revelations (by whistle-blowers and media reports) of the failure to address concerns about paid false advertising and misinformation (Gillespie, 2018; Cobbe, 2018; Fisher, 2018; Angwin and Grassegger, 2017; Tobin, Varner and Angwin, 2017). The company has attempted to persuade the public and governments that its internal governance mechanisms are sufficient. Mark Zuckerberg, the founder and CEO, appeared before parliamentary commissions in the USA and the EU, after having issued multiple public apologies for the platform's role in spreading disinformation (Roose and Kang, 2018; Satariano and Schreuer, 2018). The course-correction measures put in place between 2017 and 2019 included the above-mentioned publication of community standards and related enforcement metrics. Another innovation was the creation of a public, searchable database of political advertising on the platform, which included sources of funding, spending, and the ads themselves. The company also introduced new features allowing users to identify the origin of the sponsored content in their feeds. Independent media and activist organisations attempted to shed light on political advertising by introducing browser plugin-based ad transparency tools. However, Facebook's changes to its code, made at the time the new transparency features were introduced, blocked these tools, thereby neutralising a form of independent verification of its ad transparency initiatives (Merrill and Tobin, 2019). This is noteworthy and characteristic of the contradictions between platform humanism discourse and action.

Even as the company has accrued power and users, its failure to fathom the relevance of the platform in public communication is still quite clear in its discourse. In fact, the content of the justifications and apologies has changed little since 2011, in what amounts to a formula:4

Overall, I think we have a good history of providing transparency and control over who can see your information. That said, I'm the first to admit that we've made a bunch of mistakes. In particular, I think that a small number of high profile mistakes, like Beacon four years ago and poor execution as we transitioned our privacy model two years ago, have often overshadowed much of the good work we've done. (Zuckerberg, 2011)

4 A formula, according to Krieg-Planque, is a "mandatory passage" through which actors build "consensus and conflict", which reflects different attributes and values in the positions they wish to adopt or reject (cit. in Maingueneau, 2014: 98–99).

The first admission by the CEO was followed by multiple transparency initiatives, including the "Government Requests" website (in 2013), as well as significant changes to the platform regarding privacy options. Nevertheless, the company became further embattled as its content management strategies and business model led to a proliferation of exploitative content practices, from undisclosed social experiments on the platform, clickfarms and clickbait sites, through to targeted disinformation and opaque political advertising (Tobin, Varner and Angwin, 2017; Valentino-DeVries, Larson and Angwin, 2017).
In 2018, in a Q&A session with the CEO published in "Newsroom", Zuckerberg addresses the issues of corporate and personal responsibility in much the same language:

We didn't focus enough on preventing abuse and thinking through how people could use these tools to do harm as well. That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy. We didn't take a broad enough view of what our responsibility is, and that was a huge mistake. It was my mistake. (Facebook, 2018d)

Similar formulas appear multiple times across weblog entries over the years. The first is that of a "commitment to transparency", appearing in two identically entitled posts in the "Newsroom" weblog in 2017 and 2018 ("Reinforcing Our Commitment to Transparency"), and in a third one entitled "Our Continued Commitment to Transparency" in November 2018 (Sonderby, 2018b). This expression of commitment is formulaic to the point of being repeated in the last paragraph of similar weblog entries (Sonderby, 2017, 2018a). Another instance is the expression of concern about Internet and service disruptions. Both occur whenever a new transparency report or initiative is introduced, suggesting an administrative logic requiring not just consistent rhetoric, but also a distancing from the turbulent politics of platform transparency. Yet another significant recurring formula is the affirmation of the commitment to "reform surveillance in a way that protects their citizens' safety and security while respecting their rights and freedoms" (Sonderby, 2016). A common formula until recently, it is entirely absent from the two most significant documents on Facebook's transparency policy in the corpus, pointing to a change in the focus of transparency efforts since the Snowden revelations brought social-media-enabled surveillance to the fore.

The front page of the FTR has changed over time. Currently, an embedded comparison or data visualisation functionality is exhibited on the front page, but it is somewhat limited beyond that. The spreadsheet files are available for download for each report semester and do not contain all years for comparison or parsing. A FAQ section, along with links to other transparency reports, policy commitments and other initiatives, like the "Global Government Surveillance Reform"5 (GNI), is provided on the front page. The most recent example of a self-regulation initiative is the proposed creation of a council that Zuckerberg referred to as a "supreme court"-like structure, tasked with applying the "community guidelines" (Klein, 2018). Both the council and the guidelines are to be picked by Facebook itself, albeit coupled with promises of "independent governance and oversight" (Zuckerberg, 2018).

5 This is a coalition of companies (including, among others, Apple, Facebook, Google, Microsoft, and Twitter) advocating for changes in surveillance laws and practice, guided by the principle that "government law enforcement and intelligence efforts should be rule-bound, narrowly tailored, transparent, and subject to strong oversight" (Reform Government Surveillance, 2018).

Platform humanism as a strategic discourse and risk management

The previous section provided several important instances where Facebook has attempted to reconstruct or reinforce its discursive commitment to transparency. When considering the power asymmetry between the corporation and its users, as well as the effects of the company's operation in the global public sphere, it is reasonable to adopt a critical distance towards two key dimensions of 'transparency'.
It is crucial to interrogate the idea of transparency put forward by the platform (and embedded in it as part of its techno-social assemblage), as well as the model of communication underlying the mode of relationship between the company, the users, and the socio-political contexts.

The expression "platform humanism" addresses in an ironic way the fact that OSPs often choose to simplify transparency by presenting it as a matter of advocacy for user rights. As shown above, Facebook rhetorically aligns itself with the interests of the users (in matters of human rights and privacy, for example), while both lobbying for deregulation and avoiding any engagement with serious concerns about its actions and business model. This illustrates the extent of the epistemological and ethical problem in the technologically mediated communication sphere (the sources of trust and accountability links between actors), as well as the existence of a material or practical problem (how to maximise the benefit of transparency practices for technological citizenship, or even how to define a working set of guidelines). Facebook's example shows three features that allow us to identify its transparency discourse with a techno-utopian ideology, stemming from a reactive posture: quantified proceduralism, self-regulation, and implied consent.

Quantified proceduralism is defined here as the reliance on internal processes and quantification for processing requests and delivering information. Whether in the shape of giving more access to internal information or of new norms to apply in self-accountability procedures, the responses involve only minor changes and keep in place surveillance and data capture systems with little oversight and user participation. This trend was present when OSP transparency hubs and portals first appeared: the drive for radical transparency and democratic accountability promoted by Wikileaks, but also by Silicon Valley companies, political actors and activists, put intermediaries in the spotlight as political actors and enablers of political action. Data production and quantification constitute the first tenet of the Santa Clara Principles, and the one most easily followed by content moderators (Santa Clara Principles on Transparency and Accountability in Content Moderation, n.d.; Hern, 2018).

As shown above, a critical appraisal of transparency hinges on the analysis of discourse and practices, and not necessarily on the definition of the concept as stated in the transparency reports of OSPs. From a phenomenological standpoint, a device is transparent insofar as it is unobtrusive and retains correspondence to its use. Conversely, the device is opaque whenever it becomes defective or absent, or if it challenges the activity in which it is involved. Transparency reports, specifically, are presented as aiding transparency inasmuch as they illuminate the platform's governance and allow users a measure of participatory clarity.
By unveiling to some degree their decision processes regarding surveillance activities, advertising processes, interface design, and content and algorithm management, OSPs allay concerns and reduce social risk in the short term. However, this method of transparency also introduces an element of obtrusiveness (discussed by Etzioni) that, far from regaining social trust, maintains self-regulation as an unexamined premise, prompting a further cascade of initiatives to demonstrate and perform 'transparency'. Again, if the transparency discourse appears to be formed around the promotion of humanistic values, its practical effects obtrude on the platform and lay bare the difficulty – and the social and political risks – of attempting to mediate sociability on such a large scale.

Self-regulation. Besides its reactive character, corporate transparency is a largely unsupervised and unregulated subset of corporate social responsibility practices. Unlike governmental transparency, which is connected to political legitimation processes and institutionalised as part of accountability systems or economic norms, corporate transparency communication practices stem from the imperatives of self-regulation and branding. As corporations attempt to control messaging and perceptions, transparency practices respond to the need for image and brand control, indicating the need for just-in-time communicative regulation of discourse. In other words, the periodic, curated release of statements and of quantified data on transparency responds to the need for adjustments to society's perceptions of the role of OSPs and platforms in the lifeworld. As the perceived influence of these companies in the political, economic, social and cultural spheres expands, so too does public scrutiny.

Facebook has retained a single discursive position in order to address the perception of social risk arising from its mode of operation and business model: self-regulatory measures are sufficient to address public concerns and to restore trust in the internal review processes for both government requests and the content circulated on the platform (whether coming from individuals or paying advertisers). The deployment of community standards and the creation of human and automatic review systems exemplify this drive to reduce the company's exposure to financial and political pressure. Facebook's transparency-based discourse highlights self-regulation and internal auditing mechanisms, falling short of supervision or democratic accountability through regulation channels.

Implied consent. As transparency refers to the invisibility of mediation, or a seamless access to the functional goal of the interaction, it is a desirable property for mediating apparatuses, especially in the realm of the transmission of information: frictionless communication does away with noise or interference. An interface or mediating tool getting in the way of communication, or adding a source of noise, is altogether contrary to the ideal of technical transparency found in the above-mentioned "utopia of communication". Hence the need to ensure not only that interfaces allow a seamless experience, but also that their design does not subvert this technical transparency. The interface (usually placed in the mediating position of a platform or intermediary) is, in fact, a dynamic part of communication.
Much of the discussion on transparency is thereby akin to a debate on ethics and meta-communicative practices: which communicative stances (forms of exchanges of meaning, from metadata acquisition and personal interaction to commercial exchanges) do the mediating interfaces allow when used as supports for communication, and how do different actors (visible or not) access those possibilities? In fact, power asymmetries in the interaction with and through the interfaces pose significant obstacles to a naïve notion of transparency, and raise the question of the value of consent given through the checkboxes and jargon-filled terms of service typical of these services. The differences in power, status and access among the actors shaping the platforms negate the possibility of immediate transparency.

Transparency initiatives are valuable. They illuminate how commercial content management operates through datafication and commercialisation, how it depends on the streamlining and quantification of online human interaction, and the extent to which the economic and political realms now rely on the automated, objectified public sphere for surveillance, political communication, propaganda and advertising. By making these linkages clearer, and by bringing to light the mismatch between their discourse, their practices, the public interest and the well-being of their users, OSP transparency initiatives enable a form of transparency and provide a valuable, albeit limited, awareness of the political economy of platforms.

The drive for OSP transparency may be understood in the context of the utopia of a democracy-expanding Internet. The stated goal of democratic transparency in "platform humanism" cannot be fully achieved through self-regulatory, performative, quantified, controlled corporate communication because transparency is also a communicational problem. Transparency hubs offer a stage where the discourse of transparency is enacted in a dynamic portrait of the interaction of online platforms with social and political institutions. These stages are uniquely suited to testing the ebb and flow of the discourse of openness and democratisation of the online sphere, and the efficacy of the underlying model of disclosure and transmission of quantified data in enacting transparency. Understanding transparency as a linear or self-disclosure process alone encourages approaches based on procedural quantification and implied consent. Facebook's pattern of behaviour resembles the reaction of other businesses to revelations about their failure to address risk in spite of repeated failures to comply with legal and ethical obligations. Notwithstanding the relatively strong regulatory oversight of banking, insurance, pharmaceutical and medical technology, or auto-making companies, the abundance of examples from the last five years alone shows that even institutionalised transparency requirements are difficult to implement and supervise.6

Conclusion

This article explores Facebook's approach to transparency and social responsibility over the last half decade. I introduce the term "platform humanism" to highlight that, far from being a mere product of public relations jargon, the concept of transparency is deployed as part of a certain approach to regulating online speech and the political economy of the Internet.
This article aims to contribute a new angle to the critical analyses of the current state of online communication by focusing on the online transparency discourse and practices of Facebook. It also addresses certain ethical and political challenges of self-regulatory approaches in transparency practices by describing the contradictions of platform humanism. This article has shown some of these contradictions in Facebook's case, in instances of the construction of a formulaic human-centred public discourse, on the one hand, and of a set of practices marked by quantified proceduralism, an insistence on self-regulatory approaches and implied consent, on the other.

6 For example, the health technology company Theranos was shown to have attempted to avoid supervisory efforts. The Volkswagen Group was found guilty of having developed technologies to subvert gas and particle emission tests of its engines. Boeing's internal certification practices have been flagged in the context of the failures leading to two crashed 737 MAX planes.

The way in which transparency is enacted by Facebook is largely reactive. The timing of the initiatives taken maps onto specific global events, usually involving the most powerful online platforms, and especially public revelations or scandals which elicit social and political responses. These responses have hitherto taken the twin forms of communication crisis-management initiatives and of additions to the platform (for example, transparency hubs, portals and applications).7

Despite the limitations of transparency as deployed by Facebook, this critique does not find it to be without value. OSP transparency reports illustrate the work being done to assert and expand users' rights to information about the services they use, and the challenge of providing them with a clear picture of the conflicts in the organisation and governance of the online ecology. Intermediaries are increasingly being held responsible for the commodification of online discourse and for the cooptation, if not the decay in quality, of the most readily available information on their platforms. The political economy of the online world has crystallised into a conflict between the communicational and democratic need for transparency, on one side, and the harnessing of information for automated exploitation, in which commercial concerns take precedence over public accountability, on the other. Facebook's transparency model neither provides sufficiently robust responses to legitimate questions about the sources of information and funding in its services, nor does it allay doubts about the algorithmic neutrality of the corporate online sphere.

7 We propose three phases in the construction and restructuring of discourses on transparency. The first roughly corresponds to the period 2010–2012, during the events of the Arab Spring and after the famous "Remarks on Internet Freedom" speech by Hillary Clinton, then US Secretary of State (Clinton, 2010). OSPs' notion of transparency focused on Internet disruptions, dissent suppression trends, and state overreach.
The second phase can be mapped onto the Snowden revelations of 2013. OSPs concentrated on disclosing state surveillance and the extent of cooperation with intelligence services. A third rupture occurred in 2016 with the Brexit referendum in the United Kingdom and the US presidential elections. In this case, discourses on transparency shifted towards misinformation, content moderation, and paid political advertising.

BIBLIOGRAPHY

Aïm, Olivier (2006): La transparence rendue visible. Médiations informatiques de l'écriture. Communication & Langages 147 (1): 31–45.
Alves, Artur Matos (2016): Online Content Control, Memory and Community Isolation. In: Athina Karatzogianni, Elisa Serafinelli and Dennis Nguyen (eds.), The Digital Transformation of the Public Sphere, 83–106. London: Palgrave Macmillan UK.
Ananny, Mike and Kate Crawford (2018): Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability. New Media & Society 20 (3): 973–89.
Beygaert, N. (2011): Le Panoptique Participatif Ou La Transparence Imposée. Revue Nouvelle (12): 60–65.
Bilić, Paško (2018): A Critique of the Political Economy of Algorithms: A Brief History of Google's Technological Rationality. TripleC: Communication, Capitalism & Critique 16 (1): 315–31.
Breton, Philippe (1997): L'utopie de la communication: le mythe du village planétaire. Paris: Éd. La Découverte.
Christensen, Lars Thøger and George Cheney (2015): Peering into Transparency: Challenging Ideals, Proxies, and Organizational Practices. Communication Theory 25 (1): 70–90.
Dijck, José van (2013): The Culture of Connectivity: A Critical History of Social Media. Oxford/New York: Oxford University Press.
Dijck, José van, Martijn de Waal and Thomas Poell (2018): The Platform Society: Public Values in a Connective World. Oxford/New York: Oxford University Press.
Etzioni, Amitai (2010): Is Transparency the Best Disinfectant? Journal of Political Philosophy 18 (4): 389–404.
Etzioni, Amitai (2018): The Limits of Transparency. In: Emmanuel Alloa and Dieter Thomas (eds.), Transparency, Society and Subjectivity: Critical Perspectives, 225–38. Cham: Palgrave Macmillan.
Fairclough, Norman (2012): Critical Discourse Analysis. In: J. P. Gee and Michael Handford (eds.), The Routledge Handbook of Discourse Analysis, 9–20. Abingdon: Routledge.
Fenster, Mark (2006): The Opacity of Transparency. Iowa Law Review 91: 885–949.
Fuchs, Christian (2017): Social Media: A Critical Introduction. London: SAGE.
Gallot, Sidonie and Lise Verlaet (2016): La transparence: l'utopie du numérique? Communication & Organisation (49): 203–217.
Gandy, Oscar H. (1989): The Surveillance Society: Information Technology and Bureaucratic Social Control. Journal of Communication 39 (3): 61–76.
Gillespie, Tarleton (2010): The Politics of "Platforms". New Media & Society 12 (3): 347–364.
Han, Byung-Chul (2015): The Transparency Society. Stanford: Stanford University Press.
Introna, Lucas D. and Helen Nissenbaum (2000): Shaping the Web: Why the Politics of Search Engines Matters. The Information Society 16: 169–185.
Karatzogianni, A. and J. Matthews (2018): Platform Ideologies: Ideological Production in Digital Intermediation Platforms and Structural Effectivity in the "Sharing Economy". Television & New Media (November). Accessible at https://doi.org/10.1177/1527476418808029, 19. 1. 2019.
Kohl, Uta (2012): The Rise and Rise of Online Intermediaries in the Governance of the Internet and Beyond – Connectivity Intermediaries. International Review of Law, Computers & Technology 26 (2–3): 185–210.
Lyon, David (2005): Surveillance as Social Sorting: Computer Codes and Mobile Bodies. In: Surveillance as Social Sorting, 27–44. Routledge.
Machin, David and Andrea Mayr (2012): How to Do Critical Discourse Analysis: A Multimodal Introduction. Los Angeles: SAGE.
Maingueneau, Dominique (2014): Discours et analyse du discours. Paris: Armand Colin.
Mayr, Andrea (2008): Language and Power: An Introduction to Institutional Discourse. London/New York: Continuum.
Mosco, Vincent (2009): The Political Economy of Communication. 2nd ed. London: SAGE.
Muir, Lorna (2015): Transparent Fictions: Big Data, Information and the Changing Mise-en-Scène of (Government and) Surveillance. Surveillance & Society 13 (3/4): 354–69.
Noble, Safiya Umoja (2018): Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.
Perset, Karine (2010): The Economic and Social Role of Internet Intermediaries. DSTI/ICCP(2009)9/FINAL. OECD. Accessible at http://www.oecd-ilibrary.org/content/workingpaper/5kmh79zzs8vb-en, 6. 4. 2019.
Pozen, David E. (2018): Transparency's Ideological Drift. The Yale Law Journal 128: 100–165.
Rahwan, Iyad (2018): Society-in-the-Loop: Programming the Algorithmic Social Contract. Ethics and Information Technology 20 (1): 5–14.
Roberts, Sarah T. (2016): Commercial Content Moderation: Digital Laborers' Dirty Work. Media Studies Publications 12. Accessible at https://ir.lib.uwo.ca/commpub/12, 16. 1. 2019.
Rouvroy, Antoinette and Thomas Berns (2013): Gouvernementalité algorithmique et perspectives d'émancipation / Faced with Algorithmic Governmentality. Réseaux (177): 163–96.
Sandoval, Marisol (2014): From Corporate to Social Media: Critical Perspectives on Corporate Social Responsibility in Media and Communication Industries. Abingdon: Routledge.
Smyrnaios, Nikos (2016): L'effet GAFAM: stratégies et logiques de l'oligopole de l'internet. Communication & Langages (188): 61–83.
Trottier, Daniel (2016): Social Media as Surveillance: Rethinking Visibility in a Converging World. London: Ashgate.
Turilli, Matteo and Luciano Floridi (2009): The Ethics of Information Transparency. Ethics and Information Technology 11 (2): 105–12.
Van Dijk, Teun A. (2005): Discourse Analysis as Ideology Analysis. In: Christina Schäffner and Anita L. Wenden (eds.), Language & Peace, 17–33. Abingdon: Routledge.
Williams, Alex (2015): Control Societies and Platform Logic. New Formations 84 (84): 209–27.
Zuboff, Shoshana (2015): Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology 30 (1): 75–89.

SOURCES

Angwin, Julia and Hannes Grassegger (2017): Facebook's Secret Censorship Rules Protect White Men…. ProPublica. 28. 6. 2017. Accessible at https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms, 3. 4. 2019.
Beer, David (2015): The Embedded Power of Algorithms. OpenDemocracy. Accessible at https://www.opendemocracy.net/david-beer/embedded-power-of-algorithms, 16. 4. 2015.
Clinton, Hillary Rodham (2010): Remarks on Internet Freedom. US Department of State – Diplomacy in Action. Accessible at http://www.state.gov/secretary/rm/2010/01/135519.htm, 21. 1. 2010.
Cobbe, Jennifer (2018): The Problem Isn't Just Cambridge Analytica or Facebook – It's "Surveillance Capitalism". OpenDemocracy. Accessible at https://www.opendemocracy.net/uk/jennifer-cobbe/problem-isn-t-just-cambridge-analytica-or-even-facebook-it-s-surveillance-capitali, 6. 2. 2019.
Community Standards. Accessible at https://www.facebook.com/communitystandards/, 28. 3. 2019.
Diakopoulos, Nicholas (2014): Algorithmic Accountability: The Investigation of Black Boxes. Tow Center for Digital Journalism, Columbia University. Accessible at https://www.cjr.org/tow_center_reports/algorithmic_accountability_on_the_investigation_of_black_boxes.php, 22. 2. 2019.
Fenster, Mark (2015): Transparency in Search of a Theory. European Journal of Social Theory 18 (2): 150–167.
Fisher, Max (2018): Inside Facebook's Secret Rulebook for Global Political Speech. The New York Times, 27. 12. 2018, sec. World. Accessible at https://www.nytimes.com/2018/12/27/world/facebook-moderators.html, 11. 1. 2019.
Gillespie, Tarleton (2018): Facebook and YouTube Just Got More Transparent. What Do We See? Nieman Lab (blog). 3. 5. 2018. Accessible at http://www.niemanlab.org/2018/05/facebook-and-youtube-just-got-more-transparent-what-do-we-see/, 14. 3. 2019.
Hern, Alex (2018): Santa Clara Principles Could Help Tech Firms with Self-Regulation. The Guardian, 9. 5. 2018, sec. Technology. Accessible at http://www.theguardian.com/technology/2018/may/09/santa-clarita-principles-could-help-tech-firms-with-self-regulation, 13. 4. 2019.
Klein, Ezra (2018): Mark Zuckerberg on Facebook's Hardest Year, and What Comes Next. Vox. 2. 4. 2018. Accessible at https://www.vox.com/2018/4/2/17185052/mark-zuckerberg-facebook-interview-fake-news-bots-cambridge, 19. 1. 2019.
Merrill, Jeremy B. and Ariana Tobin (2019): Facebook Moves to Block Ad Transparency Tools. ProPublica. 28. 1. 2019. Accessible at https://www.propublica.org/article/facebook-blocks-ad-transparency-tools, 9. 3. 2019.
Reform Government Surveillance (2018): Putting Principles into Practice. Accessible at https://www.reformgovernmentsurveillance.com/principles/, 17. 10. 2019.
Roberts, Sarah T. (2018): Digital Detritus: "Error" and the Logic of Opacity in Social Media Content Moderation. First Monday 23 (3).
Santa Clara Principles on Transparency and Accountability in Content Moderation (n.d.). Santa Clara Principles. Accessible at https://santaclaraprinciples.org/images/scp-og.png, 25. 2. 2019.
Sonderby, Chris (2016): Global Government Requests Report. Facebook Newsroom (blog). 28. 4. 2016. Accessible at https://newsroom.fb.com/news/2016/04/global-government-requests-report-5/, 28. 3. 2019.
Sonderby, Chris (2017): Reinforcing Our Commitment to Transparency. Facebook Newsroom (blog). 18. 12. 2017. Accessible at https://newsroom.fb.com/news/2017/12/reinforcing-our-commitment-to-transparency/, 28. 3. 2019.
Sonderby, Chris (2018a): Reinforcing Our Commitment to Transparency. Facebook Newsroom (blog). 15. 5. 2018. Accessible at https://newsroom.fb.com/news/2018/05/transparency-report-h2-2017/, 28. 3. 2019.
Sonderby, Chris (2018b): Our Continued Commitment to Transparency. Facebook Newsroom (blog). 15. 11. 2018. Accessible at https://newsroom.fb.com/news/2018/11/updated-transparency-report/, 28. 3. 2019.
Tobin, Ariana, Madeleine Varner and Julia Angwin (2017): Facebook's Uneven Enforcement of Hate Speech Rules…. ProPublica. 28. 12. 2017. Accessible at https://www.propublica.org/article/facebook-enforcement-hate-speech-rules-mistakes, 28. 3. 2019.
Transparency Report. Accessible at https://transparency.facebook.com/, 28. 3. 2019.
Valentino-DeVries, Jennifer, Jeff Larson and Julia Angwin (2017): Facebook Allowed Political Ads That Were Actually Scams…. ProPublica. 5. 12. 2017. Accessible at https://www.propublica.org/article/facebook-political-ads-malware-scams-misleading, 28. 3. 2019.
Zuckerberg, Mark (2018): A Blueprint for Content Governance and Enforcement. Facebook. 15. 11. 2018.
Accessible at https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/, 11. 2. 2019.