This post first appeared on Risk Management Magazine.
Schools
confront many challenges related to students’ safety, from illnesses, bullying
and self-harm to mass shootings. To address these concerns, they are
increasingly turning to a variety of technological options to track students
and their activities. But while these tools may offer innovative ways to
protect students, their inherent risks may outweigh the potential benefits.
Some schools have started using social media monitoring
systems to analyze students’ online activity for potential threats to
themselves or others. These monitoring services scan public social media posts
in specific locations and on certain topics, rather than targeting individual accounts.
Companies providing such services claim that they can help prevent school shootings and address other problems like bullying. School officials receive alerts based on customized search terms, such as the name of the school or town, as well as warnings for incidents like active shooters. They can then evaluate
the situation and determine their response. According to the Brennan Center for
Justice at New York University Law School, at least 63 U.S. schools reported
purchasing social media monitoring software.
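To make the mechanism concrete, the short sketch below shows, in simplified form, how a keyword- and location-scoped filter of this kind might flag public posts for review. The keywords, school names and sample posts are illustrative assumptions only and do not reflect any vendor’s actual product or data sources.

```
# Hypothetical sketch of keyword- and location-scoped post screening.
# All watch terms, school names and posts below are invented for illustration.

THREAT_KEYWORDS = {"shooting", "gun", "bomb", "kill"}      # assumed watch terms
BULLYING_KEYWORDS = {"loser", "nobody likes", "freak"}     # assumed watch terms
SCHOOL_TERMS = {"springfield high", "springfield hs"}      # assumed school/town names

def classify_post(text: str) -> list[str]:
    """Return alert labels for a single public post, if any."""
    lowered = text.lower()
    labels = []
    # Only posts that mention the school or town are considered in scope.
    if not any(term in lowered for term in SCHOOL_TERMS):
        return labels
    if any(word in lowered for word in THREAT_KEYWORDS):
        labels.append("possible-threat")
    if any(word in lowered for word in BULLYING_KEYWORDS):
        labels.append("possible-bullying")
    return labels

# Officials would review flagged posts before deciding how to respond.
posts = [
    "Pep rally at Springfield High tonight!",
    "I'm bringing a gun to springfield hs tomorrow",
]
for post in posts:
    alerts = classify_post(post)
    if alerts:
        print(f"ALERT {alerts}: {post}")
```

Commercial services are far more elaborate than simple keyword matching, but the basic flow of scoped search terms feeding alerts to school officials is what the paragraph above describes.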
Schools have also started using facial recognition systems,
often deciding to install the systems shortly after school shootings or
attempted attacks. These systems compare the faces of people entering the
school or attending school functions against a database of individuals flagged
as potentially dangerous, such as disgruntled students or former employees, parents with restricted access to their children, or registered sex offenders, and then
issue security alerts if necessary. These reference images may come from the
school’s staff, students, parents or law enforcement.
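As a rough illustration of the matching step described above, the sketch below compares a face "embedding" computed from a camera frame against stored embeddings of flagged individuals and raises an alert when the distance falls under a threshold. The embeddings, labels and threshold are placeholder assumptions, not any vendor’s actual implementation.

```
from typing import Optional
import numpy as np

# Hypothetical watchlist matching: random 128-dimensional vectors stand in for
# the face templates a real system would compute from reference images.
rng = np.random.default_rng(0)
watchlist = {
    "restricted-access parent": rng.normal(size=128),
    "former employee": rng.normal(size=128),
}
MATCH_THRESHOLD = 0.6  # assumed tolerance; real systems tune this carefully

def check_entrant(embedding: np.ndarray) -> Optional[str]:
    """Return the label of the closest watchlist match, or None if no match."""
    best_label, best_dist = None, float("inf")
    for label, reference in watchlist.items():
        dist = float(np.linalg.norm(reference - embedding))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist < MATCH_THRESHOLD else None

# A frame whose embedding closely resembles a flagged individual triggers an alert.
frame_embedding = watchlist["restricted-access parent"] + rng.normal(scale=0.001, size=128)
match = check_entrant(frame_embedding)
if match is not None:
    print(f"Security alert: possible match with {match}")
```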
Some schools are applying surveillance technology not to
protect students from violence, but to track their locations for other
purposes. In response to droves of students leaving football games early, the
University of Alabama began tracking student attendance via a loyalty app that
awards points for attending football games and staying through the fourth
quarter. Students can then redeem their points for tickets to important games.
The University of North Carolina reportedly tracks student athletes to ensure
that they are going to class, while Virginia Commonwealth University recently
instituted a pilot program that uses Wi-Fi to check whether freshmen are attending required classes. Schools also use location tracking to address safety concerns, such as monitoring for warning signs of isolation or depression, like students rarely leaving their dorm rooms or otherwise deviating from typical behavior.
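For illustration only, the sketch below shows one way a Wi-Fi attendance check of the sort described above could work: a student is counted as present if their device associates with the access point covering the classroom during the scheduled period. The student IDs, rooms, times and log entries are invented for the example and do not describe any university’s actual system.

```
from datetime import datetime

# Hypothetical class schedule: which room a student is expected in, and when.
schedule = {
    "student-001": {"room": "ENGR-101",
                    "start": datetime(2020, 2, 3, 9, 0),
                    "end": datetime(2020, 2, 3, 9, 50)},
}

# Hypothetical (device, access point, association time) records from the campus network.
wifi_log = [
    ("student-001", "AP-ENGR-101", datetime(2020, 2, 3, 9, 7)),
    ("student-001", "AP-LIBRARY", datetime(2020, 2, 3, 13, 30)),
]

def attended(student_id: str) -> bool:
    """Count the student as present if any log entry places their device on
    the classroom's access point during the scheduled class period."""
    cls = schedule[student_id]
    expected_ap = "AP-" + cls["room"]
    return any(
        device == student_id and ap == expected_ap and cls["start"] <= ts <= cls["end"]
        for device, ap, ts in wifi_log
    )

print("present" if attended("student-001") else "absent")  # prints "present"
```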
Risks of Surveillance
While these measures may be well-intentioned, schools that
use these technologies open themselves to a host of serious risks. A primary
concern is that schools are typically more vulnerable to cyberattack than other
institutions, due partially to their relatively weak security measures. With
schools and their third-party partners collecting and housing so much
potentially valuable personal data, they can become ideal targets for attack.
According to cybersecurity firm Armor, 72 U.S. school districts were hit with
ransomware attacks in 2019, affecting more than 1,000 individual schools across
the country. In one such case, Louisiana Governor John Bel Edwards responded to
ransomware attacks against three school districts by declaring a state of
emergency and allocating state aid to the affected schools.
The limits of the technology and its inherent biases can also create legal and reputational risks for schools using these programs. For
example, in 2018, when the American Civil Liberties Union tested Amazon’s
Rekognition facial recognition software on photos of U.S. lawmakers, it
misidentified 28 members of Congress, matching them with other people’s
mugshots. The results also disproportionately misidentified people of color, an
issue that other surveillance technology exhibits as well. In a similar test in
2019, the software falsely identified 26 California lawmakers as criminals.
The Brennan Center also noted that social media monitoring programs struggle to interpret non-standard English and languages other than English, creating additional potential for discrimination. Given this, plus the evidence that schools punish students of color more severely than white students, the Brennan Center wrote, “These factors suggest that social media
monitoring tools are likely to disproportionately tag students of color as
dangerous and that those students will be punished more severely than white
students who are similarly identified.”
Privacy advocates also point out that opting out of some of these initiatives is difficult or impossible. Even in cases where participation is voluntary (such as the University of Alabama’s football attendance tracker), a public university keeping tabs on students’ locations is essentially an arm of the government tracking them with unclear parameters, a potential privacy violation that may even have a chilling effect on student political activities.
Where opting out of surveillance measures is not possible, attending public school could amount to tacit approval of having one’s online life scrutinized and one’s biometric data collected, stored and possibly shared by private companies that are not accountable to taxpayers. With facial recognition systems, in some cases the individuals added to watch databases are not informed and have no recourse to challenge their inclusion, since the schools alone decide whom to flag as a concern.
Costs and Resource Allocation
As part of the Youth Violence Project at the University of Virginia’s Curry School of Education and Human Development, researchers have cited numerous studies showing that increased security measures do not substantially improve school safety. Instead, they often make students feel less safe at school. In addition, the Brennan Center found that none of the manufacturers selling social media monitoring software to schools have provided evidence that their products have actually prevented violence.
False positives can also lead to schools wasting time and resources
investigating activity that may ultimately be innocuous. With surveillance
technology potentially prone to errors, these costs could add up quickly.
The money school districts are spending on these
technologies often comes from funds earmarked for general security measures,
not specifically for facial recognition, social media monitoring or location tracking.
Allocating limited financial resources for these technological solutions may
divert funding from more cost-effective and proven ways to address the dangers
schools face, including training staff and students to better recognize and
address threats. According to the Curry School, “When school funds are diverted
to security, there is less funding available for teachers, mental health
professionals, and prevention services. Educators should question whether they
should sacrifice student support and prevention services in order to fund
security measures of questionable value.”
Regulatory Action
In the EU, regulators have been cracking down on school
surveillance technology, citing the General Data Protection Regulation (GDPR).
In August 2019, Sweden’s Data Protection Authority (DPA) levied
the country’s first GDPR fine against a municipality conducting a facial
recognition pilot in a school class to track student attendance. The DPA fined
the municipality 200,000 Swedish kronor (approximately $20,800), noting, “The school has
processed sensitive biometric data unlawfully and failed to do an adequate
impact assessment.” While the school argued that it based participation on
student consent, the DPA found that “consent was not a valid legal basis given
the clear imbalance between the data subject and the controller.”
Similarly, in October 2019, France’s Commission Nationale de
l’Informatique et des Libertés (CNIL) warned two high schools in Nice and
Marseille against facial recognition trial programs, saying that the tests
violated the GDPR and could not be “implemented legally.” The schools
installed entrance gates that scanned students’ faces, which the CNIL called
“an especially intrusive biometric mechanism,” noting that the schools could
use less intrusive methods like badge checks. While CNIL’s announcement is not
legally binding, parents and teachers’ unions have also filed a lawsuit to stop
the project in Marseille.
As stricter digital privacy laws like the California Consumer Privacy Act go into effect, U.S. school districts that adopt surveillance technology may face legal challenges to its use as well.
Adam Jacobson is associate editor of Risk Management.