
Winter 2023 LAW E553
Individual Advising Feedback
Arjenn Kalsi 👍
Research agenda
Trust and safety issues associated with OnlyFans, particularly the platform's advertising practices on social media and their potential impact on sex workers' safety and privacy, as well as the protection of minors (who are unwittingly exposed to explicit materials by clicking the link).
The role of technology in shaping the future of sex work and the decriminalization movement.
Several websites prohibit any external links in their community guidelines.
Elissa Redmiles is currently faculty at the Max Planck Institute for Software Systems in Germany, and her research focuses on understanding and modeling security, privacy, and safety behaviors using computational, economic, and social science methods. She's well known for her comprehensive research on security advice for users, as well as recent research on at-risk populations (including sex workers), COVID-19 apps, algorithmic fairness, misinformation, and more. If you're interested, here are some links to talks she’s given recently:
Andrew Bruce 👍
Topic: Comparative policy paper
Start with historical analysis, but wrap up with future legislation.
The technical realities of securing people’s information.
Where we are at: lack of technology experts
The laws that do not work in reality
e.g., Sarbanes-Oxley Act, breach notification laws.
Lawmakers don’t listen to experts; they listen to lobbyists representing tech companies.
In 1995, security experts testified before Congress (homeland & security experts).
Kyle Kennedy/Sophie Lantz 👍
Research agenda
[Enemy 1: Commercial platforms] Period-tracking apps
Explore the potential harms to women's reproductive health privacy resulting from the widespread collection and sharing of personal data through menstrual tracking apps and other femtech products.
[Enemy 2: Government] Government requests for geolocation data for criminal prosecution purposes.
Examine the ways in which women's data privacy can be protected in a post-Roe v. Wade climate, with a particular focus on the potential for data collected by period-tracking apps to be used to investigate or prosecute women who have had abortions.
What would the reasonable-expectation-of-privacy test mean here?
Legislative approach
Connect authors to Tina (if they want to)
Isabella Cursino Miyashiro
Topic: Intermediary Liability and Online Services
The interplay between the First Amendment and Section 230
Other authors
Case
Daniel v. Armslist, LLC, 926 N.W.2d 710, cert. denied, 140 S. Ct. 562 (2019).
Digital Services Act
“No condition is attached to obtaining the shield. Section 230 does not require any obligations such as a reasonable moderation effort or a duty to notify relevant authorities of unlawful content. On the other hand, the current version of the EU’s Digital Services Act imposes notice-and-action requirements on hosting services for unlawful content in exchange for liability immunity.”
Articles 4 to 10 of the Digital Services Act (DSA), which replace Articles 12 to 15 of the eCommerce Directive 2000/31/EC, outline the rules for intermediary liability privileges, which generally provide immunity to intermediary services for the third-party content they process as long as they act quickly upon receiving notices of illegal content. The DSA clarifies that this immunity is not lost if the intermediary service performs voluntary preemptive screening or monitoring of content, as long as it is done in good faith and with diligence. The DSA also states that intermediaries shall not be deemed to have lost their immunity solely because they take measures to comply with Union and national laws. See Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC.
Interesting comments on Gonzalez v. Google
Share the template.
Max Del Real
Topic: Do Generative AI Models Constitute Prima Facie Copyright Infringement?
Using a single use case is totally fine!!
Interesting comments on Gonzalez v. Google
Ryan Tursi
Topic: Autonomous Vehicle Liability in Washington
It is so interesting to see recent scholars’ skepticism over tort claims.
Examining the limitations of existing legal doctrines as applied to hypothetical (but soon-to-arrive) "full self-driving" systems with no drivers, only passengers.
Exploring various aspects of the issue, including product liability for software defects in automated vehicles and compensation for accident victims (automobile insurance).
Jacob Alhadeff
“In its current form, copyright is an immensely powerful monopoly that is anti-competitive, anti-collaborative, and anti-expressive.”
The current framework is very nice and crisp.

Rhea Bhatia
GDPR and health data
Interesting; not many articles are available about the GDPR and health data. Perhaps health data is largely governed by national laws?

Eli Sanders

Daniel Parsons 👍
Topic: AI/Machine Learning: Biases in Employment Outcomes
The best literature review!
Article 33
Very large online platforms and very large online search engines
1. This Section shall apply to online platforms and online search engines which have a number of average monthly active recipients of the service in the Union equal to or higher than 45 million, and which are designated as very large online platforms or very large online search engines pursuant to paragraph 4.
Cooper Cuene

Sarah Yelle
Topic: The CRISPR Conundrum: Social, Bioethical, and Legal Implications of Human Genome Editing
Raafi Styonurani
Materialized Offline Harm of Data Breaches
Trent McBride
European
Cameron Eldridge
Both topics are so interesting! 👍
(1) Topic 1
Potential recourse for victims of platforms' recommendation of harmful content: exploring various legal claims (such as product liability, criminal law, campaign disclosure, sexual harassment, etc.) to determine viability.
It can be challenging to connect the dots between disparate legal fields, but doing so can yield novel and interesting insights.
(2) Topic 2: Social media feeds as a matter of self-determination.
From the perspective of the right to self-determination and personality rights, this topic addresses the problem of social media feeds being fully controlled by social media companies. Section 230 was enacted to allow user-generated content to circulate freely on the Internet, but now that content is overflowing, the key authority lies in determining its ranking, and platforms monopolize that authority. Is there any way to solve the problem of users losing agency to this private power?
This topic takes a very philosophical approach, and I love it. I think U.S. readers should be exposed to this type of reasoning more often. I should say, I really like this topic! If you can't find much evidence in the current literature or case law, I think it's totally fine to make a high-level reasoning statement.
Sources I mentioned
Elon Musk
Empirical research
There has been some research challenging the claim that online platforms amplify polarized views (denying the echo-chamber effect?).
TikTok engineers’ paper on recommendation algorithms. Super technical so I wasn’t able to understand it! But just in case.
Elissa Melendez
Topic: Public trust in AI making subjective vs. objective decisions
MIT's Moral Machine project studied public perceptions of the decisions AI should make in different scenarios, such as self-driving cars.
Maybe Yejin Choi’s Delphi model could be an interesting reference about AI machines making a “commonsense” decision?
General suggestion: To provide a comprehensive analysis of this issue, I recommend that you take a deep dive into the literature and examine how other scholars have framed similar questions and formulated methodologies. For this class assignment, I suggest that you focus on reading 3-4 papers that can help you develop a solid framework for your research. Look for papers that discuss similar issues and examine how they have addressed the problem. After reading these papers, consider which framework you were most persuaded by and use it as a starting point for your research.
TSPA (): their Job Board is pretty cool.
The Trust and Safety Research Conference will take place close to TrustCon 2023, organized by the Stanford Internet Observatory and the TSPA. (T.B.A.)
@Inyoung Cheong
Send papers (self-regulation) and Yoshi’s security ethics paper.
Joyce Jia
Topic: Biometric Privacy Law
Regarding the news about Madison Square Garden’s use of biometric facial recognition.
Very limited precedent on commercial use.
Collecting images without consent; retaining images longer than necessary.
Is it discrimination to refuse entry, and how can the venue be held accountable?

Calder Thingvold 👍
Collin Burns
Topic: Privacy rationales for limiting targeted advertising
Pro-targeted advertising arguments
Economic benefits of targeted advertising
First Amendment protections of commercial speech
The "marketplace of ideas" theory and its application in privacy rights protection
Anti-targeted advertising arguments
Epistemic fragmentation caused by targeted advertising
Concerns over political ad targeting and the potential for unduly influencing individuals
European Data Protection Supervisor's recommendation to ban microtargeting of political ads
Criticisms of market-based solutions and platform-based regulation
Additional sources I mentioned
Captive audience theory
This case sounds relevant: In (1970), the Court invoked the captive audience doctrine to uphold a statute permitting individuals, with the assistance of the postal service, to prevent the delivery of . Although conceding that the statute impeded the flow of ideas, the Court held that this impediment was subordinate to the right of people in their homes to be free from unwanted material.
The Digital Services Act
“The DSA builds on the comprehensive protection that GDPR already offers to European citizens and adds a new layer to it. In the future, online marketers will therefore have to take into account not only the GDPR and ePrivacy rules (cookies and anti-spam), but also the new restrictions imposed by the DSA, at least if their activity falls within the scope of the DSA (as already indicated, this concerns intermediary services such as social media, marketplaces and search engines).”
Excerpts from the DSA
Article 35 Mitigation of risks 1. Providers of very large online platforms and of very large online search engines shall put in place reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks identified pursuant to Article 34, with particular consideration to the impacts of such measures on fundamental rights. Such measures may include, where applicable: (e) adapting their advertising systems and adopting targeted measures aimed at limiting or adjusting the presentation of advertisements in association with the service they provide.
Article 39 Additional online advertising transparency 1. Providers of very large online platforms or of very large online search engines that present advertisements on their online interfaces shall compile and make publicly available in a specific section of their online interface, through a searchable and reliable tool that allows multicriteria queries and through application programming interfaces, a repository containing the information referred to in paragraph 2, for the entire period during which they present an advertisement and until one year after the advertisement was presented for the last time on their online interfaces. They shall ensure that the repository does not contain any personal data of the recipients of the service to whom the advertisement was or could have been presented, and shall make reasonable efforts to ensure that the information is accurate and complete.
Article 40 Data access and scrutiny 4. Upon a reasoned request from the Digital Services Coordinator of establishment, providers of very large online platforms or of very large online search engines shall, within a reasonable period, as specified in the request, provide access to data to vetted researchers who meet the requirements in paragraph 8 of this Article, for the sole purpose of conducting research that contributes to the detection, identification and understanding of systemic risks in the Union, as set out pursuant to Article 34(1), and to the assessment of the adequacy, efficiency and impacts of the risk mitigation measures pursuant to Article 35.
Article 41 Compliance functions. 1. Providers of very large online platforms or of very large online search engines shall establish a compliance function, which is independent from their operational functions and composed of one or more compliance officers, including the head of the compliance function. That compliance function shall have sufficient authority, stature and resources, as well as access to the management body of the provider of the very large online platform or of the very large online search engine to monitor the compliance of that provider with this Regulation. 3. Compliance officers shall have the following tasks: (b) ensuring that all risks referred to in Article 34 are identified and properly reported on and that reasonable, proportionate and effective risk-mitigation measures are taken pursuant to Article 35.

Roman Hill / Ramita Bains
Topic: How does AI’s content moderation affect children?
This is an interesting topic!
Additional Resources
United States v. American Library Association, Inc., 539 U.S. 194 (2003): The United States Supreme Court ruled that the United States Congress has the authority to require public schools and libraries receiving E-Rate discounts to install web filtering software as a condition of receiving federal funding.
I came across this CS paper (), which was relevant to your team’s point. It traces the history of search and presents the authors’ skeptical view of combining search engines and large language models, advocating for “the need for flexible tools that can support diverse modes of usage.”