LEXONOMICA Vol. 15, No. 1, pp. 33–52, June 2023
https://doi.org/10.18690/lexonomica.15.1.33-52.2023
CC-BY, text © Mezei, Szentgáli-Tóth, 2023. This work is licensed under the Creative Commons Attribution 4.0 International License. This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use. https://creativecommons.org/licenses/by/4.0

SOME COMMENTS ON THE LEGAL REGULATION ON MISINFORMATION AND CYBER ATTACKS CONDUCTED THROUGH ONLINE PLATFORMS

Accepted 13. 2. 2023, revised 6. 4. 2023, published 29. 6. 2023.

KITTI MEZEI, BOLDIZSÁR SZENTGÁLI-TÓTH
Institute for Legal Studies, Centre for Social Sciences, Budapest, Hungary
mezei.kitti@tk.hu, szentgali-toth.boldizsar@tk.hu
CORRESPONDING AUTHOR: mezei.kitti@tk.hu

Keywords: cyber attacks, freedom of expression, online platforms, marketplace of ideas, digital services act

Abstract: What are the tools to effectively manage the new threats to the free flow of opinions, and thus to protect this essential precondition of a pluralist social and political system? As a basis for action at the community level, how can we protect users who rely on internet platforms to inform themselves on issues of public interest from disinformation attacks through cyberspace? Effective action against cyber-attacks that adversely affect certain fundamental rights requires a combination of instruments, creating the technological, economic, human and legal conditions for meaningful counter-measures. In legal terms, the guarantees that platform providers must offer each user to prevent cyber-attacks and illegal content should be laid down, and legal instruments should be put in place to ensure they are always available. In addition, in the event of misuse of any content shared on the platform or of personal data made available to the operator, clear responsibilities should be established, and it should be clarified to what extent the responsibility for protection lies with the platform operator or with the user. In our study, we outline regulatory options to address these challenges.

1 Introduction

In recent years, social media has become the main channel of our digital communication and now includes several platforms that are also used by "cybercriminals". They have already appeared on Facebook, Instagram, Twitter, Snapchat, WhatsApp and, most recently, Telegram, a messaging app with bot functionality. Among online platforms, some social media platforms have therefore undoubtedly become new arenas for cybercrime. The following cases will be presented in detail under this heading: cyber-attacks (hacking and malware), phishing and data leakage, online fraud and extortion, and the dangers inherent in the use of deepfake technology.

In the second part of the paper, we will look at how to effectively counter the new threats to the free flow of opinion in social media in order to protect this essential precondition of a pluralist social and political system, and at how to protect users of these internet platforms, who inform themselves on issues of public interest, from disinformation attacks from cyberspace, as a basis for action at the community level.

Many important motivations for cybercrime have been analysed in detail in the relevant literature (Wall, 2008: 45-63; Gillespie, 2019; Britz, 2013; Nagy and Nagy, 2009; Szathmáry, 2012; Mezei, 2020), and we will briefly discuss several of them in our paper.
However, shaping public opinion, influencing decision-making and political gain are motives that have so far been less discussed in research on the background of attacks in the online space. Perhaps this is also why, despite the issue's importance, few studies have so far focused on the impact of new technologies on the exercise of freedom of expression and on the marketplace of political opinion (Bartóki-Gönczi, 2018: 157-194; Klein, 2018: 26-29; Koltay, 2019; Török, 2022: 195-208). However, there is a growing need for a more in-depth understanding of this context, and it is becoming increasingly clear that the impact of new technologies on public debate may continue to grow in the coming period. This is why we believe it is essential for experts working on current issues of freedom of expression and cybercrime to work together to explore the complexities of this combined area and to formulate common proposals on the principles and specific content of the relevant legislation. For this purpose, the relevant case law of the European Court of Human Rights (hereinafter: ECtHR) will also be discussed.

2 The concept of cybercrime

First, it is important to provide a brief conceptual overview of cybercrime. The term "cyberspace" - a combination of the words cybernetics and space - was coined by the American writer William Gibson in his 1982 short story Burning Chrome and later made famous by his novel Neuromancer. Gibson used cyberspace to describe the global computer network that connects people, computers and information sources. The Anglo-Saxon term cybercrime was in turn coined after the word cyberspace. The term cybercrime is now widely used, especially in the international literature, but also, for example, in the Convention on Cybercrime. 1 However, it is important to note that no generally accepted and uniform legal definition of cybercrime has been elaborated yet.

In international academic scholarship, several authors, such as Jonathan Clough (Clough, 2015: 10-11), Peter Grabosky (Grabosky, 2016: 8-9) and Susan W. Brenner (Brenner, 2010: 39-47), consider cybercrime a generic term with two main subcategories. The first is constituted by a group of offences that can be committed exclusively with information systems (e.g. computers, their networks or other information and communication technologies - ICT); typically, the object of such an offence is the information system itself. These are the so-called cyber-dependent crimes (e.g. the use of computer viruses, hacking, etc.). The second, broader category covers traditional crimes committed using information systems (e.g. fraud, extortion, child pornography, copyright infringement, harassment, etc.). These correspond to the so-called cyber-enabled crimes, where the information system serves as the instrument used to commit the crime (Clough, 2015: 10-11). 2

It can be concluded that the motives and aims of crimes committed in the IT environment are generally no different from those of crimes committed in real space, because they can be committed for profit or damage, to obtain data or secrets, or even for sexual motives. However, political gain through manipulation and the dissemination of disinformation may also involve motives that are not relevant to traditional crimes.

1 Convention on Cybercrime of the Council of Europe, signed in Budapest on 23 November 2001 and promulgated in Hungary by Act LXXIX of 2004.
2 The term cyber-related crime is used by the US Department of Justice when the computer is the instrument of a traditional crime. At the same time, the Budapest Convention also refers to offences committed by using a computer. See U.S. Department of Justice, The National Information Infrastructure Protection Act of 1996, Legislative Analysis, 1996; Council of Europe, Explanatory Report to the Convention on Cybercrime, European Treaty Series - No. 185, 2001, Article 79.
This is perhaps one of the reasons why these aspects have received less attention so far, even though they are becoming increasingly important for criminals in the virtual world.

The regulation of cybercrime offences in the area of criminal law is essentially a national competence, as criminal law falls typically, but not exclusively, within the domestic jurisdiction of a state, and national criminal laws are therefore supposed to be best placed to deal with the issue. Thus, in the present study, we examine specific cases of cybercrime with regard to domestic criminal law provisions. However, it is important to note that domestic law is significantly influenced in this area by the aforementioned Budapest Convention and its Additional Protocol (Daskal and Kennedy-Mayo, 2020), 3 as well as by relevant EU legislation (e.g. Directive 2013/40/EU on attacks against information systems).

3 Some cases of cybercrime on online platforms

Various forms of businesses are increasingly dependent on online platforms, individuals also tend to use them ever more frequently, and the coronavirus epidemic has considerably increased the online presence of virtually everyone. Users are often unaware of the dangers of being online and of the consequences of sharing information. As a result, sensitive data may become increasingly easy for unauthorised persons to obtain (e.g. through malware, phishing, and other methods developed for this purpose). Social media platforms offer an easy way for hackers to reach or map their targets. The rapidly growing number of social media users has made online platforms attractive to cybercriminals, which means that these platforms have become a significant source of malware infections (Sorbán, 2018: 376-377), 4 affecting both individuals and businesses.

3 In 2003, the Budapest Convention was supplemented by a Protocol on criminalising racist and xenophobic acts committed through computer systems. In September 2017, the Council of Europe decided to draw up a second Additional Protocol to the Convention, which would include provisions for a more effective and simpler mutual legal assistance system. This system would allow for direct cooperation with service providers established in another State Party to the Convention, and searches could be carried out across borders. The protocol will include strong safeguards and data protection requirements. The advantage of such an agreement is that it could become applicable worldwide. See Daskal and Kennedy-Mayo, 2020.
4 Malware is short for malicious software, which refers to malicious programs that are commonly used to gain unauthorised access to information systems or to make changes without the user's knowledge or consent, causing damage to data. Increasingly, they are being used to gain access to confidential data that facilitates further fraud or other breaches (e.g. extortion, identity theft, etc.). For more on the different types of malware, see Sorbán, 2018: 376-377.
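Since many of the attacks described below begin with a deceptive link, even crude automated screening has some preventive value. The sketch below is a minimal illustration of such screening; the patterns, thresholds and domain samples are our own illustrative assumptions, not findings from the literature cited above, and a real filter would rely on reputation databases and machine learning rather than hand-written rules.

    import re
    from urllib.parse import urlparse

    # Toy heuristics only: real phishing detection uses reputation feeds and ML.
    SUSPICIOUS_TLDS = (".xyz", ".top", ".zip")                     # illustrative sample
    LOOKALIKE_NAMES = re.compile(r"faceb00k|g00gle|paypa1", re.I)  # lookalike brand names

    def looks_suspicious(url: str) -> bool:
        host = urlparse(url).hostname or ""
        if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):  # raw IP instead of a domain
            return True
        if host.count(".") >= 4:                          # unusually deep subdomain chain
            return True
        if host.endswith(SUSPICIOUS_TLDS):                # cheap, abuse-prone TLDs
            return True
        return bool(LOOKALIKE_NAMES.search(host))         # brand-name impersonation

    print(looks_suspicious("http://faceb00k-login.example.xyz/verify"))  # True
    print(looks_suspicious("https://www.example.org/article"))           # False

Such user- or client-side checks can only complement, not replace, the platform-level defences discussed below.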
The problem is growing, with these platforms being particularly well suited to malware distribution, as they tend to display more images, videos, advertisements and plugins. In addition, the nature of interaction through social networks facilitates the rapid and seamless spread of infection – an issue complicated by the tendency of social media to allow the sharing of user profiles across multiple platforms. One typical example was phishing links on Facebook Messenger, which were used to redirect victims to a page resembling YouTube; after downloading an update, users were infected with malware that could obtain passwords and other sensitive information (McGuire, 2019: 7-8). Malware is typically distributed through social media posts or messages, for example via clicks on infected ads, on content shared by friends (e.g. funny videos, pictures and news), or through installed plugins and applications (e.g. games and tests). Often the malware is sent as a message, or the victim is redirected to a website. Still, drive-by downloads have also appeared, which are particularly dangerous because the malware downloads itself and exploits system vulnerabilities by embedding itself in the website or application. So-called bots are often found on social media sites and can be used to generate automated messages and distribute malware. For example, a chatbot in Facebook Messenger can be used to send messages that may automatically amount to a criminal offence (Ambrus, 2021: 46).

In addition to malware, it is common for personal or business accounts to be hacked (so-called hacking or unauthorised access), which can lead to the attacker taking control of the account and gaining access to all the data associated with it (such as credit card and other personal data). Attackers often target accounts that are "verified". Setting up two-step authentication for user accounts is particularly important to avoid such attacks. These cases are all related to offences against the information system: the perpetrators commit a breach of the information system or data (Section 423 of the Hungarian Criminal Code) if they access the user account without authorisation by circumventing the technical measures, or if they carry out unauthorised data manipulation with the malicious program; fraud is committed using the information system (Section 375 of the Hungarian Criminal Code) if, for example, transactions are carried out with the unauthorised credit card data.
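Since two-step authentication is recommended above as the key safeguard against account takeover, a brief technical illustration may be useful. The following minimal sketch shows time-based one-time passwords (TOTP); it assumes the open-source pyotp library, and the platform name and enrolment flow are hypothetical examples of ours, not drawn from the cited sources.

    import pyotp

    # Enrolment: the platform generates and stores a per-user secret...
    secret = pyotp.random_base32()

    # ...and presents it as a provisioning URI (usually rendered as a QR code)
    # that the user scans with an authenticator app.
    uri = pyotp.TOTP(secret).provisioning_uri(
        name="user@example.com", issuer_name="ExamplePlatform"  # hypothetical names
    )

    def second_factor_ok(stored_secret: str, submitted_code: str) -> bool:
        """Login: after the password check, require the current 6-digit code.

        valid_window=1 tolerates one 30-second step of clock drift between
        the server and the user's device.
        """
        return pyotp.TOTP(stored_secret).verify(submitted_code, valid_window=1)

Even this simple second factor defeats the credential-theft scenarios described above, unless the attacker can also intercept the one-time code in real time.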
One reason social media is particularly popular with offenders is that creating a fake profile is now fairly easy; Facebook recently deleted over 5 billion fake accounts from its platform. Fake profiles are typically used to deceive other users, for example by using social engineering techniques to trick them into clicking on infected links or even into sharing sensitive information. Based on the related practice of the National Authority for Data Protection and Freedom of Information, cases where unknown persons create a fake profile on a social networking site using the victim's name and photos are considered suspected criminal offences. Through this fake profile, the perpetrator identifies the real acquaintances of the impersonated user and conveys messages and posts on behalf of the victim. The aim is often to discredit the person concerned, tarnishing their reputation in the eyes of others, which can significantly damage their interests (Péterfalvi and Eszteri, 2017: 411). Other people's personal information may also be used to commit crimes (e.g. using a fake profile to defraud other unsuspecting users of money or credit card details). The fake profile method often involves using other users' personal information (e.g. name and photos), which can be followed by further fraud or extortion. Examples include so-called romance scams, where a person uses social networking sites to gain the other party's trust under the appearance of seeking a partner, and then deceives or blackmails them with compromising pictures (Gyaraki, 2021: 77). In the case of fake profiles, the offence of misuse of personal data (Section 219 of the Hungarian Criminal Code) arises. In addition, data leaks pose a challenge when the personal data of masses of users of social media platforms are made public, with legal consequences for data protection (see the Cambridge Analytica scandal; Hu, 2020).

Online fraud is often carried out by posting content on behalf of public figures. For example, a message titled "Dojo 4 Doge" appeared on Twitter in Elon Musk's name, and the scammers responded to this message and shared a link. The compelling message led to a professional-looking website that tried to persuade visitors to send bitcoins supposedly to Elon Musk, promising that senders would soon receive double the amount back as Musk's investment. The website even claimed that senders above a certain amount could win the grand prize, a Tesla Model S (Jankowich, 2021). In addition, social media platforms are increasingly being used to present investment and cryptocurrency scams (in the form of offers of quick returns on investment) that take advantage of the popularity of virtual currencies and new types of investment; for example, more than 15 000 bots have been identified on Twitter in connection with cryptocurrency scams. These cases are considered traditional fraud (Section 374 of the Hungarian Criminal Code), whereby natural persons are misled for profit and damage is caused.

Both adults and children are at risk of online sexual blackmail (sextortion), but the latter are particularly vulnerable (Powell and Henry, 2017: 122-124). The perpetrator often gains the child's trust (for example, the adult poses as a minor and befriends the child, showing the child explicit sexual material to reduce their sexual inhibitions) and exploits their vulnerability. This is done to gain access to sexual images or videos of the child, 5 followed by a phase of blackmail: the perpetrator coerces and blackmails the victim into performing sexual favours or into sending further compromising images or videos of themselves. If the request is not complied with, they threaten to share the footage already in their possession (for example, via social media), thereby gaining control over the victim. 6

Finally, it is worth mentioning the rise of deepfake technology, which is relatively new and poses an increasingly serious challenge to society. In the case of a deepfake, an algorithm can replace the facial image in a person's video with another person's appearance, which can be deceptive to anyone.
In addition to the damage caused to the individual (such as revenge porn), 7 deepfakes can contribute to disinformation (Whyte, 2020: 199-217), to the distortion of democratic decision-making and to the manipulation of the electoral process, eroding public trust and exacerbating societal divisions.

So far, we have focused on the main characteristics and mechanisms of cyber-attacks; in the following, we turn to the directly related aspects of freedom of expression. We will conclude with our de lege ferenda proposals on the principles and elements of regulation.

5 For example, Snapchat is a video-sharing portal and app that is particularly popular among young people, where users can share a picture or video, in addition to a text message, for a display time that they set themselves.
6 Europol, Internet Organised Crime Threat Assessment (IOCTA), 30.
7 The Anglo-Saxon literature on this issue is dealt with in detail in Mania, 2020: 2079-2097 and in Marcum et al., 2021: 646-658.

4 Freedom of expression on online platforms

4.1 Online platforms as spaces for freedom of expression

Over the past decades, and especially in recent years, platforms in virtual space have become the primary arena for the clash of political opinions. As more and more voters use online platforms intensively, and as the content they wish to disseminate reaches its target audiences more quickly and effectively through these channels, these platforms have become increasingly popular for political communication. Political parties and candidates approach their voters primarily in virtual space, and citizens reflect on the views shared with them via virtual channels of communication (Török, 2017: 721-733). Private opinions are also confronted mainly on the Internet, where they can be disseminated with unprecedented speed and efficiency and can often be commented on anonymously. 8 These processes accelerated markedly during the pandemic, when curfews and contact restrictions forced virtually everyone to increase their online presence (Matwyshyn, 2018: 450-500). In addition, some of the measures taken in the context of the epidemic explicitly affected certain forms of expression; for example, assemblies were banned in many places or allowed only within strict limits and with a restricted number of participants. Under these circumstances, it was still necessary to organise electoral campaigns and to maintain political discourse for citizens interested in participating. This was possible only, or at least predominantly, through online platforms, whose role thus became even more important. The virtualisation of political communication raises several political science and sociological questions; in this paper, we deal with the legal aspects of this problem.

Only a minority of users are aware of the risks involved in using these platforms, arising mainly from the malicious cyber activities outlined above (Burton, 2019: 16-133). Furthermore, in most cases, by making their views public, the persons expressing them are unaware that they are becoming parties to a legal relationship involving at least three parties (Pavlova, 2020: 391-418).

8 "The right to freedom of expression and the use of encryption and anonymity in digital communications - Submission to the United Nations Special Rapporteur on the Right to Freedom of Opinion and Expression by the Association for Progressive Communication (APC)." https://www.ohchr.org/Documents/Issues/Opinion/Communications/AssociationForProgressiveCommunication.pdf (accessed Oct 9. 2022).
The platform operator provides a space for individuals to express and share their views with other users and, in this context, to exchange views. The operator is primarily responsible for the proper operation and continued availability of the platform but, to a limited extent, must also be responsible for the content shared on the online platforms it maintains. On the other hand, individual users of the service may be primarily responsible for possible infringements through their own communications, for example by publishing hate speech (Török, 2013: 59-72). In addition, we must also consider the possibility of third parties becoming involved in the legal relations relating to the online exchange of opinions, such as those who engage in phishing or who seek to distort democratic discourse.

4.2 Disinformation

Cyber-attacks affect the development of democratic discourse on platforms in two ways: on the one hand, they distort the marketplace of opinions at the system level (Nojeim, 2010: 119-137), and on the other hand, they can act as a disincentive for individuals to engage in public debate (Fathy, 2018: 96-115). In this subsection, we first address the most important systemic risk, the creation and dissemination of disinformation. According to the European Commission, disinformation is "information that is verifiably false or misleading, is created, published or disseminated for commercial advantage or with intent to deceive, and is likely to harm the public interest." 9 In 2018, the European Commission also set up a group of experts to explore the mechanisms linked to the spread of fake news and online disinformation. 10

9 "Joint Report of the European Commission and the High Representative of the Union for Foreign Affairs and Security Policy on the implementation of the Joint Action Plan against disinformation." Jun 14. 2019. https://eur-lex.europa.eu/legal-content/HU/TXT/PDF/?uri=CELEX:52019JC0012&from=EN (accessed Feb 19. 2023).
10 Final report of the High Level Expert Group on Fake News and Online Disinformation. March 12. 2018. https://digital-strategy.ec.europa.eu/en/library/final-report-high-level-expert-group-fake-news-and-online-disinformation (accessed May 3. 2022).

The disinformation phenomenon has four main strands, and new technologies can contribute to each of them. First, by generating fictitious user profiles and articulating certain positions through them, cyber activity can bring to the fore aspects of the discourse that there is no real social need to discuss (Garnett and James, 2020). This is facilitated by the fact that real persons often express their views on online platforms under pseudonyms or even anonymously, so users cannot distinguish between comments and positions representing real persons and fictitious ones (Bayer et al., 2021). Second, cyber tools can exaggerate or relativise the importance of positions already present, which can influence public opinion, because people increasingly draw conclusions about the current state of public opinion from the communications they see on online platforms (Bartóki-Gönczi, 2018: 157-194). Thus, if we perceive that most participants support a particular position, in some cases as a result of manifestations generated in whole or in part by cyber tools, we will assume that the public mood is the same.
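The scale of this distortion is easy to underestimate, so a toy calculation may help; all the numbers below are our illustrative assumptions, not empirical findings.

    # Toy model: a small automated minority dominating the visible discourse.
    genuine_users = 10_000   # assumption: real accounts
    support_a = 0.55         # actual majority position among real users
    bots = 300               # assumption: 3% of accounts are bots pushing position B
    posts_per_user = 1       # a typical user comments once
    posts_per_bot = 40       # a bot comments relentlessly

    posts_a = int(genuine_users * support_a) * posts_per_user
    posts_b = int(genuine_users * (1 - support_a)) * posts_per_user + bots * posts_per_bot

    share_b = posts_b / (posts_a + posts_b)
    print(f"Real support for B: {1 - support_a:.0%}; visible share of B posts: {share_b:.0%}")
    # Output: Real support for B: 45%; visible share of B posts: 75%

In this stylised feed, position B appears to enjoy a three-to-one majority although it is in fact the minority position among real users; this is precisely the subjective impression discussed next.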
The creation of this subjective impression is a sociological issue, but it also has constitutional implications through its influence on the electorate's will, and it can have a considerable impact on public discourse and even on the outcome of individual elections (Caramancion, 2020: 440-444; Kovács and Krasznay, 2017: 3-15). The most organised form of this manipulation of public opinion is the troll farm, which is set up to influence the political process and decision-making by disseminating false information. 11

11 For more on troll farms, see Jamieson, 2018.

Third, fictitious profiles on online platforms can be used to present facts without any real basis. The extremely high risk of such disclosures is particularly striking given the structure of modern public opinion (Larsen, 2018: 178). Even the most astonishing fake news reaches a wide range of users quickly and spreads the fastest, while the content that exposes its falsity attracts much less interest. A big fake news story can therefore significantly damage the perception of specific public figures or even the assessment of certain issues on the political agenda. The creation of fictitious profiles is subject to criminal prosecution as a misuse of personal data and also establishes civil liability as a violation of the right to one's image. Moreover, the managing structure of a particular website and the funder of certain advertisements often remain hidden from the public, which increases the probability of disinformation being spread through these channels. The Digital Services Act addresses these issues with the duty of platform providers to remove illegal content from their spaces; annual monitoring by the member states could enforce the proper execution of this mandate. Nevertheless, one should expect that the issue of fictitious profiles will not disappear from the agenda for a long period.

Finally, we must also take into account the form of disinformation in which factually true statements appear out of context, distorted in such a way as to mislead consumers of the content and to create an impression that is not actually true, without stating factually false claims (Waldman, 2018: 101-105). In addition to these systemic negative consequences, we will now look at the threats at the individual level, which may deter individuals from engaging in online public debate or affect unsuspecting individuals who express their views in virtual spaces.

4.3 Cyber threats to freedom of expression through virtual platforms

4.3.1 Data protection concerns

One main motivation for platform cyber activity at the individual level is illegal data acquisition (Judge and Pal, 2021b: 1-56). Its purpose can be twofold: on the one hand, to profile the individuals concerned, even on the basis of their views, and on the other hand, to misuse the acquired personal data (for example, for financial benefit) without any political motive. The data protection challenges related to the fate of the personal data of platform commentators can be grouped into four main categories. First, the identity of the natural persons behind cyber-attacks is often untraceable or very difficult to trace, so it is not transparent who is getting hold of our insufficiently protected personal data (Moses, 2007: 274-275).
The result of this lack of transparency is that we can completely lose control over the fate of our personal data, and often enough information to build a personality profile ends up in hands we do not even know. Second, the fact that people with opaque backgrounds can gain access to users' personal data is not the main issue in itself; rather, it is unpredictable what the phishers intend to do with the personal data they have unlawfully processed (Rubinstein, 2014: 861-936). This is particularly important given that, when we express our opinions on online platforms, we often take positions on sensitive issues in the belief that we are anonymous, while this anonymity cannot in fact be guaranteed (Koltay and Nyakas, 2018: 18-19). Third, cyber tools can also be used to unmask anonymous commentators, to identify stakeholders who have given their names, and to reveal personal data they did not wish to share. In many cases, moreover, we are not simply talking about personal data but about particularly sensitive personal data, which is why many people prefer to stay away from, or refrain from, online discourse at a time when more and more of the dialogue on issues of public interest is shifting to these platforms. The fourth cyber threat to the integrity of the platforms is the organisation of the acquired user data into databases, which can provide cybercriminals with a picture not only of the individual but also of their place in the wider social environment.

4.3.2 Obstructing democratic discourse

Beyond data protection issues, the individual's situation in an increasingly virtual democratic space also deserves attention: it is becoming ever more difficult for individual citizens to navigate an increasingly complex and opaque marketplace of opinions (Green, 2019: 1389-1444). A significant proportion of platform users are aware of this, given the decreasing reliability of information sources and the increasing manipulation of political communication (Judge and Korhani, 2021a). This reinforces apolitical tendencies in society, further hindering the development of an inclusive democratic discourse.

A further severe difficulty around platforms is that their operators often moderate the content of public discourse by removing undesirable content, at most according to their internal rules (Parti, 2012: 49-69). Such interference is usually aimed at a consensual goal, such as curbing hate speech and protecting human dignity or the dignity of individuals or well-defined social groups. However, there are already significant differences in the interpretation of these concepts, and many people feel that their expressions have been unfairly removed from the opinion market by platform operators. Moreover, the operation of platforms is poorly regulated by law, and the background of their operators is often not very transparent, so the framework of political discourse, and often the limits of the individual's freedom of communication, are decided by the platform operators, i.e. private actors with no public authority (Judge and Korhani, 2020: 240-261), often without the moderators themselves or the interest groups behind them being identifiable.

5 The case law of the European Court of Human Rights

The ECtHR is well aware of the importance of the Internet in relation to freedom of expression.
It has declared: "The Court notes at the outset that user-generated expressive activity on the Internet provides an unprecedented platform for the exercise of freedom of expression." 12 Another recurring stance laid out in most of the relevant case law concerns the increased general access to news and the platform as a source of dissemination. The ECtHR held that "the Internet plays an important role in enhancing the public's access to news and facilitating the dissemination of information in general." 13

Several valid reasons have been given why the online marketplace of ideas ought to be examined and regulated differently from other communication tools. Considering the possible effects, the ECtHR found that audiovisual media have a more immediate and powerful effect than print media. 14 The Jersild judgement explains: "The audiovisual media have means of conveying through images meanings which the print media are not able to impart." 15 By itself, this is not a negative phenomenon; it means that we can send and receive information more quickly than ever before. On the other hand, the ECtHR is also right that there is immense risk in all of this, because even if a post is factually incorrect or intentionally or inadvertently misleading, it will remain online long enough to be seen by a wide audience. Hence, "The risk of harm posed by content and communications on the Internet to the exercise and enjoyment of human rights and freedoms, particularly the right to respect for private life, is certainly higher than that posed by the press." 16 It can be argued that, from the standpoint of a state, this principle can serve as a legitimate aim to intervene in people's freedom of expression by passing stricter regulations during a public health emergency or a period of war.

There is another matter we must touch upon: the duties and responsibilities of media portals. Portals which provide a forum for exercising the right of expression, enabling the public to impart information and ideas, "must be assessed in the light of the principles applicable to the press." 17 The question is whether such portals can be classified as publishers of third-party content. The ECtHR established in a notable case that these sites are "not publishers of the comments in the traditional sense". But this does not mean that they have no responsibility at all: "Internet news portals must, in principle, assume duties and responsibilities." 18 Internet platform providers' duties therefore differ greatly from those of traditional publishers and include a "large news portal's obligation to take effective measures to limit the dissemination of hate speech and speech inciting violence". However, there is a limitation: this "can by no means be equated to private censorship". 19

12 Delfi AS v. Estonia (ECtHR, June 16, 2015, no. 64569/09), § 110.
13 Delfi AS v. Estonia (ECtHR, June 16, 2015, no. 64569/09), § 133.
14 Monnat v. Switzerland (ECtHR, December 6, 2006, no. 73604/01), § 68.
15 Jersild v. Denmark (ECtHR, September 23, 1994, no. 15890/89), § 31.
16 Editorial Board of Pravoye Delo and Shtekel v. Ukraine (ECtHR, May 5, 2011, no. 33014/05), § 63.
17 Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary (ECtHR, February 2, 2016, no. 22947/13), § 61.
18 Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary (ECtHR, February 2, 2016, no. 22947/13), § 61.
In summary, the platform holder bears responsibility not because it is the publisher, but upon consideration of three conditions: (a) the publication of the comments is in its financial interest, (b) it increases the page's popularity, and (c) no "notice and takedown" system or anything similar is in place that would result in the immediate removal of the offensive comments. 20 It is, therefore, in the interest of these portals to create a moderated online environment if they wish to avoid being punished for their unwillingness to take down harmful comments. The platforms provided for user-generated content also play an important role because, as the ECtHR has determined, they foster the emergence of citizen journalism, so content ignored by traditional media can prevail there. 21

19 Delfi AS v. Estonia (ECtHR, June 16, 2015, no. 64569/09), § 57.
20 Delfi AS v. Estonia (ECtHR, June 16, 2015, no. 64569/09), § 162.
21 Cengiz and Others v. Turkey (ECtHR, December 1, 2015, nos. 48226/10 and 14027/11), § 52.

6 Recommended regulatory solutions

Cyber-attacks may occur via online platforms partly because these platforms are extremely under-regulated, with no legal codes covering the related liability (Howard, 2018). It is not clear what the legal obligations of the platform operator or the user are, nor what behaviour they are obliged to adopt to prevent cyber-attacks. An excellent example of this global uncertainty is the decision of the French Constitutional Council of June 2020, which annulled a law under which a platform operator could have been fined a substantial amount of money if it did not remove infringing content from its platform within 36 hours of its publication. 22 This decision also showed that no clear legal requirements have been set for the various participants in the exchange of ideas on online platforms (Szentgáli-Tóth, 2021).

22 Décision n° 2020-801 DC, 18 June 2020, https://www.conseil-constitutionnel.fr/decision/2020/2020801DC.htm (accessed Oct 21. 2022).

There is a broad consensus that legislation needs to take action in this area, but it is questionable what direction it should take to improve the situation. At the European level, a specific draft regulation on digital services is currently being discussed as part of the legislative package on digital services. This proposal would bring several innovations for platform operators. In essence, it would not generally sanction platform providers for failing to remove illegal content shared on their platforms; rather, it would oblige these stakeholders to remove the communication in question immediately once they become aware of its illegality. While this is not yet fundamentally revolutionary, as it is in line with existing legal requirements and practice, the proposal does impose new obligations on the most prominent platform providers: they must disclose the principles of their artificial intelligence-based algorithms that analyse communications, and how they decide to remove certain content from their platforms. This will provide users with some transparency as to how their comments are judged and may reduce one of the factors that deter many from commenting on online platforms.
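To make this notice-and-action mechanism more tangible, the following sketch models a minimal moderation workflow of the kind the proposal envisages; the states, field names and internal deadline are our illustrative assumptions, not text taken from the draft regulation.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from enum import Enum, auto

    class Status(Enum):
        RECEIVED = auto()   # notice submitted by a user or a trusted flagger
        REMOVED = auto()    # content taken down; the uploader can appeal
        REJECTED = auto()   # notice unsubstantiated; reasons sent to the notifier

    @dataclass
    class Notice:
        content_id: str
        reason: str
        received_at: datetime = field(default_factory=datetime.utcnow)
        status: Status = Status.RECEIVED

    REVIEW_TARGET = timedelta(hours=24)  # illustrative internal target, not a legal term

    def act_on(notice: Notice, found_illegal: bool) -> Notice:
        """Once the provider has actual knowledge, it must act without undue delay."""
        notice.status = Status.REMOVED if found_illegal else Status.REJECTED
        return notice

    def overdue(notice: Notice, now: datetime) -> bool:
        """Flag notices that have sat unreviewed past the internal target."""
        return notice.status is Status.RECEIVED and now - notice.received_at > REVIEW_TARGET

The key legal point mirrored by this sketch is that liability attaches not to hosting as such, but to inaction after a notice has created awareness of the illegality.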
However, neither the Digital Services Act nor any other currently known draft legislation addresses how to alleviate the pressure on platforms in order to promote the safety of those who comment on them. In our view, the starting point in addressing the challenges to freedom of expression is that only an integrated approach and a combination of instruments can achieve meaningful results in this area (Lund, 2012: 170-186). Several technological, economic, personal and legal conditions would have to be met to clean the discourse on platforms of manipulated content and cyber-attacks. Technological requirements must be discussed because the IT solutions that can prevent malicious interference from cyberspace in the operation of platforms need constant improvement. The reflection must also continue from an economic point of view, because cybercrime affecting platforms is primarily motivated by economic factors, which we must identify and counteract. On a personal level, an essential prerequisite for platform protection is to have professionals who can both identify the main challenges and work out the best ways to tackle them, and who can also be involved in the day-to-day defence with the technologies at their disposal.

The legal regulation should reflect the pressure on the platforms, taking into account the above aspects. In our view, one way of doing so could be to share the responsibility for protection between the platform operator and the user. For operators, there should be requirements on what security measures they must take and what technological solutions they should use to prevent cyber-attacks. If platform providers fail to comply with this obligation within a timeframe that gives them sufficient time to prepare, they could be fined and, ultimately, forced to cease operating the platform. At the same time, the responsibilities of users should be clarified: in which cases are users of platforms expected to act prudently, and for which forms of imprudent behaviour in virtual space must users bear the consequences of their own carelessness? The latter could be the case, for example, where a user voluntarily makes personal data available to an unreasonably large number of people, in a way that is in no way related to the use of the platform or to the opinions shared with others through it. Just as the legal status of the platform operator and its users in the context of online commenting is generally not elaborated, the same is true for protection against cyber-attacks. A more informed and effective response to this difficult-to-identify threat can only be expected if the respective actors can foresee their legal responsibilities in this area. Sanctions should be applied gradually and only as a last resort, if no other means of enforcing risk mitigation can be found.

In the European space, there may also be other legal instruments to strengthen cooperation between the Member States to curb cross-border cybercrime. As the fundamental characteristic of these offences is their cross-border nature, cooperation at the European level is of fundamental interest for crime prevention. On the one hand, there is a need for broader cooperation than is currently the case to obtain electronic evidence.
On the other hand, data related to this type of crime should be made available to all interested authorities and to researchers working on related issues.

7 Concluding remarks

Current trends in cybercrime have been addressed by several authors, and changes in freedom of expression are often analysed in the literature. Yet few attempts have been made to date to outline the legal implications of the challenges to freedom of expression in cyberspace by focusing on the intersection of these two seemingly distant disciplines. We consider this a serious shortcoming, especially in light of the fact that an increasing part of the public discourse on public affairs has shifted to online platforms, owing to the physical isolation resulting from the epidemic. Another meaningful issue derives from the fact that the existing, and still insufficient, legal framework, including the ECtHR case law, concentrates only on platform providers and users, while there are few studies and little practical experience focusing on the possible legal means of collective defence against external actors. We have proposed some basic principles and institutions for this legal concept, which needs to be developed, emphasising the sharing of responsibility between platform operators and users. We believe that recognising the true significance of the impact of cyber-attacks on democratic discourse, and developing the legal environment accordingly, could help to reduce the manipulative nature of political communication on platforms, thereby affecting all forms of democratic participation and the daily lives of all, or at least many, citizens. In this paper, we have not sought to provide definitive answers to these prospective objectives but to draw attention to some new aspects of dilemmas already discussed in the literature. However, further extensive professional discussion will be needed to develop a long-term approach to the challenges of freedom of expression, and we hope that our suggestions have contributed to setting the direction of this discourse.

Acknowledgement

This paper is funded by the Ministry of Innovation and Technology and the National Research, Development and Innovation Office within the framework of the NKFIH grant No. 128976; a138965; and the National Laboratory for Artificial Intelligence and the NKFIH grant No. 2019-2.1.11-TÉT-2020-00243.

References

Ambrus, I. (2021) Digitalizáció és büntetőjog (Digitalisation and Criminal Law). Budapest: Wolters Kluwer.
Bartóki-Gönczi, B. (2018) A keresőmotorok és a szólásszabadság kapcsolata: Megközelítés és szabályozási javaslatok az Európai Unióban és az Egyesült Államokban (The relationship between search engines and freedom of expression: approaches and regulatory proposals in the European Union and the United States), Iustum Aequum Salutare, 14(1), p. 157-194.
Bayer, J. et al. (2021) The fight against disinformation and the right to freedom of expression (European Parliament).
Brenner, Susan W. (2010) Cybercrime: Criminal Threats From Cyberspace. Santa Barbara, CA: Praeger.
Britz, Marjie T. (2013) Computer Forensics and Cyber Crime: An Introduction. London: Pearson.
Burton, J. (2019) Cyber-Attacks and Freedom of Expression: Coercion, Intimidation and Virtual Occupation, Baltic Journal of European Studies, 9(3), p. 16-133.
Caramancion, Kevin M. (2020) An Exploration of Disinformation as a Cybersecurity Threat, 3rd International Conference on Information and Computer Technologies (ICICT), p. 440-444.
Clough, J. (2015) Principles of Cybercrime. Cambridge: Cambridge University Press.
Daskal, J. and Kennedy-Mayo, D. (2020) Budapest Convention: What is it and How is it Being Updated?, Cross-Border Data Forum, 2 July 2020.
Fathy, N. (2018) Freedom of expression in the digital age: enhanced or undermined? The case of Egypt, Journal of Cyber Policy, 3(1), p. 96-115.
Garnett, Holly A. and James, Toby S. (2020) Cyber Elections in the Digital Age: Threats and Opportunities of Technology for Electoral Integrity, Election Law Journal: Rules, Politics, and Policy, 19(2), p. 111-126.
Gillespie, Alisdair A. (2019) Cybercrime: Key Issues and Debates. New York: Routledge.
Grabosky, P. (2016) Cybercrime. London: Oxford University Press.
Green, R. (2019) Counterfeit Campaign Speech, Hastings Law Journal, 70, p. 1445-1489.
Gyaraki, R. (2021) A közösségi média hatása a kiberbűncselekmények elkövetésére (The impact of social media on the commission of cybercrime), Magyar Rendészet, 2, p. 67-82.
Hoofnagle, C. (2018) Facebook in the Spotlight: Dataism vs. Personal Autonomy, Jurist - Academic Commentary, Apr. 20, 2018.
Howard, Philip N. (2018) Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. New York: Oxford University Press.
Hu, M. (2020) Cambridge Analytica's black box, Big Data & Society, 7(2). https://doi.org/10.1177/2053951720938091.
Jamieson, Kathleen Hall (2018) Cyberwar: How Russian Hackers and Trolls Helped Elect a President: What We Don't, Can't, and Do Know. Oxford: Oxford University Press.
Jankowich, M. (2021) A man in Germany said he fell for a fake Elon Musk crypto scam that cost him $560,000, Business Insider.
Judge, Elizabeth F. and Korhani, Amir M. (2021a) A Moderate Proposal for a Digital Right of Reply for Election-Related Digital Replicas: Deepfakes, Disinformation, and Elections, in Garnett, Holly A. and Pal, M. (eds.) Cyber-Threats to Canadian Democracy (Canada), p. 270-311.
Judge, Elizabeth F. and Korhani, Amir M. (2020) Disinformation, Digital Information Equality, and Electoral Integrity, Election Law Journal, 19(2), p. 240-261.
Judge, Elizabeth F. and Pal, M. (2021b) Voter Privacy in the Age of Big-Data Elections, Osgoode Hall Law Journal, 58(1), p. 1-56.
Klein, T. (2018) Az online diskurzusok egyes szabályozási kérdései (Some regulatory issues of online discourse), in Klein, T. (ed.) Tanulmányok a technológia- és cyberjog néhány aktuális kérdéséről (Studies on some current issues of technology and cyber law). Budapest: Médiatudományi Intézet, p. 11-40.
Koltay, A. and Nyakas, L. (2018) Tanulmányok a technológia- és cyberjog néhány aktuális kérdéséről (Studies on some current issues of technology and cyber law), edited by T. Klein. Budapest: Médiatudományi Intézet.
Koltay, A. (2019) Az új média és a szólásszabadság. A nyilvánosság alkotmányos alapjainak újragondolása (The new media and freedom of expression: Rethinking the constitutional foundations of the public sphere). Budapest: Wolters Kluwer.
Kovács, L. and Krasznay, Cs. (2017) „Mert övék a hatalom": Az internet politikát (is) befolyásoló hatása a 2016-os amerikai elnökválasztás során ("Because theirs is the power": The influence of the Internet on politics during the 2016 American presidential election), Nemzet és Biztonság: Biztonságpolitikai Szemle, 3, p. 3-15.
Larsen, Allison O. (2018) Constitutional Law in an Age of Alternative Facts, New York University Law Review, 93(2), p. 175-248.
Lund, J. (2012) Correcting Digital Speech, UCLA Entertainment Law Review, 19(1), p. 170-186.
Mania, K. (2020) The Legal Implications and Remedies Concerning Revenge Porn and Fake Porn: A Common Law Perspective, Sexuality & Culture, 24, p. 2079-2097.
Marcum, Catherine D. et al. (2021) Exploration of Prosecutor Experiences with Non-consensual Pornography, Deviant Behavior, 42(5), p. 646-658.
Matwyshyn, A. (2018) Cyber Harder, Boston University Journal of Science and Technology Law, 24, p. 450-500.
McGuire, M. (2019) Into the Web of Profit: Social Media Platforms and the Cybercrime Economy. Bromium.
Mezei, K. (2020) A kiberbűnözés aktuális kihívásai a büntetőjogban (Current challenges of cybercrime in criminal law). Budapest: L'Harmattan - TK JTI.
Moses, Lyria B. (2007) Recurring Dilemmas: The Law's Race to Keep Up with Technological Change, Journal of Law, Technology & Policy, 2, p. 239-319.
Nagy, A. and Nagy, Z. (2009) Bűncselekmények számítógépes környezetben (Crimes in a computer environment). Budapest: Ad Librum.
Nojeim, Gregory T. (2010) Cybersecurity and Freedom on the Internet, Journal of National Security Law & Policy, 4, p. 119-137.
Parti, K. (2012) Harc az online illegális tartalom ellen (Fighting illegal content online), OKRI, 4, p. 49-69.
Pavlova, P. (2020) Human-rights based approach to cybersecurity: addressing the security risks of targeted groups, Peace Human Rights Governance, 4(3), p. 391-418.
Péterfalvi, A. and Eszteri, D. (2017) A személyes adatok büntetőjogi védelme Magyarországon és a Nemzeti Adatvédelmi és Információszabadság Hatóság kapcsolódó gyakorlata (The criminal law protection of personal data in Hungary and the related practice of the National Authority for Data Protection and Freedom of Information), in Görög, M. et al. (eds.) Az Alaptörvény VI. cikkelyének érvényesülése a magyar jogrendszeren belül (The implementation of Article VI of the Fundamental Law within the Hungarian legal system). Budapest: ELTE-ÁJK, p. 405-420.
Powell, A. and Henry, N. (2017) Sexual Violence in a Digital Age. London: Palgrave Macmillan.
Rubinstein, Ira S. (2014) Voter Privacy in the Age of Big Data, Wisconsin Law Review, 5, p. 861-936.
Sorbán, K. (2018) Vírusok és zombik a büntetőjogban. Az információs rendszer és adatok megsértésének büntető anyagi és eljárásjogi kérdései (Viruses and zombies in criminal law: Substantive and procedural criminal law issues of breaches of information systems and data), In Medias Res, 2, p. 369-386.
Szathmáry, Z. (2012) Bűnözés az információs társadalomban - Alkotmányos büntetőjogi dilemmák az információs társadalomban (Crime in the information society: Constitutional criminal law dilemmas in the information society), PhD thesis. Budapest: PTE ÁJK.
Szentgáli-Tóth, B. (2021) Vigyázó szemetek Párizsra vessétek - avagy pénzbírsággal, vagy anélkül a gyűlöletbeszéd ellen? (Keep your watchful eyes on Paris: against hate speech with or without fines?), Jogi Fórum Sajtó és Média Blog, June 2021.
Török, B. (2013) A gyűlöletbeszéd tartalmának médiajogi mércéi (Media law standards for the content of hate speech), Jogtudományi Közlöny, 2, p. 59-72.
Török, B. (2017) A szólásszabadság védelmének dinamikája (The dynamics of the protection of freedom of expression), Magyar Jog, 12, p. 721-733.
Török, B. (2022) Szólásszabadság a közösségi platformokon és a Digital Services Act (Freedom of expression on social platforms and the Digital Services Act), in Török, B. and Ződi, Zs. (eds.) Az internetes platformok kora (The age of internet platforms). Budapest: Ludovika, p. 195-208.
Waldman, Ari E. (2018) The Marketplace of Fake News, Journal of Constitutional Law, 20(4), p. 101-105.
Wall, David (2008) Cybercrime, Media and Insecurity: The Shaping of Public Perceptions of Cybercrime, International Review of Law, Computers and Technology, 22, p. 45-63.
Whyte, C. (2020) Deepfake news: AI-enabled disinformation as a multi-level public policy challenge, Journal of Cyber Policy, 5(2), p. 199-217.

About the authors

Kitti Mezei, Research Fellow, Centre for Social Sciences, Institute for Legal Studies, e-mail: mezei.kitti@tk.hu
Boldizsár Szentgáli-Tóth, Senior Research Fellow, Centre for Social Sciences, Institute for Legal Studies, e-mail: szentgali-toth.boldizsar@tk.hu