States increasingly delegate regulatory and policing functions to internet intermediaries. This may lead to interference with the right to freedom of expression. In a time when these issues are of particular relevance, 'Intermediary liability and freedom of expression in the EU' provides the reader with a framework for protecting freedom of expression in an online world.
Human rights --- Computer. Automation --- European Union --- Freedom of expression --- Internet --- Liability (Law) --- Internet service providers --- Law and legislation --- Communication --- Regulation --- E-books
Human rights --- Industrial and intellectual property --- Computer. Automation
Journalism --- Public law. Constitutional law --- Industrial and intellectual property --- Law --- Mass communications --- Journalistic ethics --- Media law --- Belgium
Freedom of expression is being challenged by disinformation. While governments adopt policies and regulations to fight disinformation on social network services, some argue that freedom of expression is being left behind. The main research question of this thesis is: “To what extent is action by the state possible and necessary to protect freedom of expression of social network users against rules on disinformation?” To answer it, the research focuses on three sub-questions. Firstly, how is disinformation regulated on social network services? Secondly, how and to what extent do the rules on disinformation on social network services take freedom of expression into account? And thirdly, how can states intervene to protect freedom of expression on social network services? The research shows that disinformation is regulated primarily by the social network services themselves. Public regulations on disinformation fail to incorporate sufficient safeguards for freedom of expression, whereas the rules of content that the platforms develop privately do try to reflect freedom of expression standards. Those rules of content and platform policies, however, lack consistency in protecting freedom of expression, even when different rules on disinformation of the same platform are compared. To improve that consistency, and consequently the protection of freedom of expression, the research turns to the question of whether states could be obliged, on the basis of the theory of positive obligations, to impose minimum standards of freedom of expression on social network platforms. While limiting the contractual freedom of two private parties is not without precedent in the case law of the European Court of Human Rights, this thesis argues that the specific nature of social network services (SNSs) is a sufficient argument for imposing some moderate positive obligations. Throughout the thesis, several remarks may serve as ideas for better protecting freedom of expression on social network services.
This dissertation examines the impact of the legal requirement to install content filters on intermediaries' right to freedom of speech. To this end, Article 17 of the recently adopted Copyright Directive plays a central role. The body of the dissertation starts with a general overview of the Digital Single Market Strategy and internet intermediary liability, followed by a discussion of automated content filters and copyright, and of how Article 17 affects the pre-existing horizontal liability regime of the e-Commerce Directive. The Copyright Directive creates a vertical intermediary liability regime in copyright by dismantling the application of the e-Commerce Directive's safe harbor protection, stating that intermediaries whose services are used to upload infringing content will in principle be treated as if they performed the "act of communication to the public" themselves. This places intermediaries in a precarious position in which they can be held directly liable for the actions of their services’ users, raising the question of whether and in what capacity intermediaries can invoke the right to freedom of speech, under both EU law and the ECHR, to defend against liability claims. The conclusion suggests alternative developments in intermediary liability regulation that require neither the implementation of content filters nor the dismantling of the cornerstone safe harbor provisions of the e-Commerce Directive’s liability regime. These alternatives draw on academic views and are compared with US intermediary regulation.
Social media platforms, thriving on the dramatic decline in the costs of communication and the dissemination of information in the digital environment, have become the new gatekeepers of today’s public spheres, with a significant market presence. While freedom of expression has generally enjoyed high levels of protection on online platforms, the spread of illegal activities carried out by third parties online raises serious concerns that need to be addressed. Online platforms are therefore often seen as essential points of control and moderation for online content. The role and influence of social media platforms, which mediate most online communication and moderate it through self-regulatory norms laid down in their Terms of Service agreements, lead to concerns about the likelihood of violations of fundamental human rights, particularly freedom of expression; a potential risk of private censorship thus arises. Alongside legislative measures and safeguards for tackling illegal content, social media companies have their own incentives shaping how they govern the user-generated content hosted on their platforms. Taking these incentives into account, this thesis provides an assessment and analysis of private governance in tackling illegal content online.
In today’s digitally driven society, challenges of societal trust and verification of digital content are widespread. With a few technological steps, convincing computer-generated content can be produced. At first sight, this development may seem harmless and may create opportunities in various industries; however, it involves several trade-offs. One of those trade-offs is deepfake non-consensual pornography, which grew exponentially in 2017 after a Reddit user created tailor-made synthetic pornographic content of female celebrities. Deepfake pornography is a very damaging, though not new, type of non-consensual image distribution: sexualised photoshopping is a long-standing phenomenon that has affected individuals in a similar way. What is new is how easily pornographic content can now be made and customised. The simple generation, accessibility, and endless online sharing of deepfake pornography therefore severely affect private life and the concept of consent. The personal life of the victim is exposed to public scrutiny without their permission, inducing psychological and reputational harm even if the content is of poor quality. Crucially, the majority of deepfakes are synthetic sexual content of women, disproportionately affecting them and emphasising the gendered dimension of the phenomenon. Several questions therefore arise when assessing whether the current legal landscape effectively approaches the non-consensual creation and distribution of synthetic sexual content. Do the GDPR, the DSA and the EU AI Act give prospects to the victim, or do they leave the victim more helpless? Do national criminal laws capture deepfake pornography adequately? And if there is no adequate national legal framework, will the European Court of Human Rights, as the pioneer in combating violence against women, decide that the State has violated the victims’ human rights?
In the era of Big Data, law enforcement authorities (LEAs) increasingly use open source intelligence (OSINT) for security purposes. The publicly available sources which constitute OSINT often contain sensitive personal data obtained from social media (so-called open source social media intelligence, or open SOCMINT). At first sight, the collection and processing of publicly available data appears less problematic than that of closed source data such as private phone calls and correspondence. But a closer look at the use of OSINT by LEAs for security purposes shows serious implications for human rights, especially the rights to privacy and data protection. Focusing on the legal framework applicable in the European Union, this thesis examines whether existing safeguards are sufficiently comprehensive and adequate to guarantee that fundamental human rights are effectively protected whenever OSINT, including its SOCMINT component, is used by LEAs for security purposes. It describes recent evolutions in the practices of LEAs powered by OSINT, such as strategic surveillance and predictive policing, and assesses these practices against EU data protection legislation and the related case law of the Court of Justice of the European Union and the European Court of Human Rights. Although the human rights implications of the OSINT activities of LEAs are in large part addressed by the existing EU legal framework, the analysis shows that strengthened or additional safeguards may be required in certain areas.