Listing 1 - 10 of 139 | << page >> |
"In this book, Amy K. King examines how violence between women in contemporary Caribbean and American texts is rooted in plantation slavery. Analyzing films, television shows, novels, short stories, poems, book covers, and paintings, King shows how contemporary media reuse salacious and stereotypical depictions of relationships between women living within the plantation system to confront its legacy in the present. The vestiges of these relationships--enslavers and enslaved women, employers and domestic servants, lovers and rivals--negate characters' efforts to imagine non-abusive approaches to power and agency. King's work goes beyond any other study to date to examine the intersections of gender, sexuality, race, ethnicity, class, ability, and nationality in U.S. and Caribbean depictions of violence between women in the wake of slavery"--
Women --- Abduction --- Women in popular culture
In recent years, hatred directed against women has spread rapidly, especially on online social media. Although this alarming phenomenon has given rise to many studies, both from the viewpoint of computational linguistics and from that of machine learning, less effort has been devoted to analysing whether models for the detection of misogyny are affected by bias. An emerging topic that challenges traditional approaches to the creation of corpora is the presence of social bias in natural language processing (NLP). Many NLP tasks are subjective, in the sense that a variety of valid beliefs exist about what the correct data labels should be; some tasks, for example misogyny detection, are highly subjective, as different people have very different views about what should or should not be labelled as misogynous. An increasing number of scholars have proposed strategies for assessing the subjectivity of annotators, in order to reduce bias both in computational resources and in NLP models. In this work, we present two corpora: a corpus of messages posted on Twitter after the liberation of Silvia Romano on 9 May 2020, and a corpus of comments constructed from Facebook posts containing misogyny, developed through an experimental annotation task designed to explore annotators' subjectivity. For a given comment, the annotation procedure consists of selecting one or more chunks of text regarded as misogynistic and establishing whether a gender stereotype is present. Each comment is annotated by at least three annotators in order to better analyse their subjectivity. The annotation process was carried out by trainees engaged in an internship program. We propose a qualitative-quantitative analysis of the resulting corpus, which may include non-harmonised annotations.
This book studies the ways traditional polarized images of women have been used and challenged in the Hispanic world, especially during the 20th century and the beginning of the 21st century by writers and the media, but also in earlier time periods. The chapters analyze the image of women in specific political periods such as Francoism or the Kirchners' administration, stereotypes of women in films in Mexico and Chile, and the representation of women in textbooks, among other topics. Contributions also show how two women writers, in the 17th and the 19th centuries, viewed the role of women in their society.
Women in popular culture --- Popular culture --- Women --- Public opinion
Female offenders --- Violence in women --- Women in popular culture
Sex role. --- Social values. --- Women in popular culture. --- Women --- Miscellanea.
Femininity. --- Feminism. --- Feminist criticism. --- Feminist theory. --- Women in popular culture.
Judgment (Aesthetics). --- Judgment (Ethics). --- Popular culture. --- Women in popular culture.
"Geek Heroines not only tells the stories of fictional and real women, but also explores how they represent changes in societal views of women, including women of color and the LGBTQ community"--