How does tech decide who is a threat and who is just blowing off steam?
By Benjamin O. Powers
Before the Parkland school shooter murdered 17 people, former friends noticed that his social media posts had taken a sinister turn. Dakota Mutchler, a 17-year-old junior at Parkland who used to be close with the shooter, told the Associated Press that he remembered Nikolas Cruz posting on Instagram about things such as killing animals and doing target practice in his backyard.
“He started going after one of my friends, threatening her, and I cut him off from there,” Mutchler told the AP.
Social media screening for schools is new territory for security firms, some of which waded into the technology after several high-profile shootings last year. They aim to work with schools to identify high-risk students through their posts before it’s too late. But social media monitoring and analysis present a number of challenges, not least of them student privacy, profiling tactics, and best practices for risk assessment.
PikMyKid was originally an app that coordinated children and the adults picking them up from school. The company added a panic-button feature that alerts first responders and sends them blueprints of the targeted school so they don’t come in blind. But the company is also developing a social media monitoring tool, S.M.A.R.T. Feeds (Social Media Analysis for Risk & Threats), that notifies school and district safety teams when it identifies a threat. According to PikMyKid, the tool is being developed at the request of numerous school districts.
Mass shootings have catalyzed interest in such services. An analysis by CNN found that since 1999 there have been 288 school shootings. These shootings have affected more than 200,000 students, according to the Washington Post.
“There is a big disconnect between the conversation and social platform where the kids congregate and the ones the adults in the school congregate,” said PikMyKid CEO Saravana Pat Bhava. “They might as well be on separate planets. We are totally missing the telltale signs of negative behavior unless it festers and pours out into the real world in the form of physical violence.”
Even as Pat Bhava lays out the case for social media monitoring, he recognizes there are challenges. “Distinguishing between empty words and actual safety threats posed by individual users is one of today’s most complex safety challenges.”
This is the crux of some of the inherent tensions within such a product. How does technology, and the people who operate it, determine who poses a threat and who is simply blowing off steam?
SMART Feeds looks for red flags specified by a school or school district. They might include bullying, active-shooter hallmarks, predatory activity, and drug sales. The tool gathers publicly available information from multiple sources, including social media, Pat Bhava said. It then weighs a variety of factors: hard-coded threat tags with geospatial filters, user-defined tags, link analysis, public-records searches, sentiment analysis, and natural language processing. School administrators can also define relevant hashtags and flag items of interest for sharing or alerting.
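In rough outline, the tag-matching layer of such a system might score posts against configurable category keyword lists plus administrator-defined hashtags. The sketch below is purely illustrative — the tag lists, `Post` structure, and `flag_post` helper are hypothetical stand-ins, not PikMyKid’s actual pipeline, which the company says also layers in geospatial filters, link analysis, and NLP models.

```python
from dataclasses import dataclass, field

# Hypothetical red-flag tag lists; a real system would be far richer
# and would combine trained language models with keyword matching.
THREAT_TAGS = {
    "bullying": {"loser", "nobody likes you"},
    "violence": {"shoot", "kill", "gun"},
}

@dataclass
class Post:
    author: str
    text: str
    hashtags: list = field(default_factory=list)

def flag_post(post, admin_hashtags=frozenset()):
    """Return the set of red-flag categories a post triggers.

    Matches the district's configurable keyword lists against the post
    text, plus any administrator-defined hashtags of interest.
    """
    text = post.text.lower()
    flags = {category for category, terms in THREAT_TAGS.items()
             if any(term in text for term in terms)}
    if admin_hashtags.intersection(h.lower() for h in post.hashtags):
        flags.add("admin_watchlist")
    return flags

post = Post("user1", "going to shoot hoops after school", ["pickupgame"])
print(flag_post(post))  # prints {'violence'}
```

Note what happens here: a naive keyword match flags “shoot hoops” as violence — exactly the kind of missed offline context that critics of these systems point to.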
Some companies have already rolled out such products. Social Sentinel, a Vermont-based firm, scans social media posts within a certain area, runs the posts through systems looking for specific indicators, and then alerts school administrators to posts it deems threatening by evaluating “over half a million threat indicators.” According to the company’s site, it has “partnered with popular social media platforms” and has “authorized access to over 1 billion public social media posts daily.”
Social Sentinel is used by a number of school districts, including Flagler County Public Schools in Florida, according to the company’s website, as well as Shawsheen Valley Technical High School in Massachusetts, according to NPR. Additionally, the Miami-Dade school system has asked the state of Florida for $30 million to overhaul its security system, which includes hiring staff to search students’ social media, NPR reports.
GeoListening, another such company, monitors, analyzes and reports “social network public postings on school campuses” and provides “needed information to those in a position to intervene or respond to the needs of students.”
While these companies are filling a demand, threat profiling and detection remain fraught, particularly when it comes to social media. As a recent study from the founding director of SAFElab at Columbia University put it, “algorithms lack the ability to accurately interpret off-line context.”
Social media is a place where “young people go to be young people,” said William Frey, the SAFElab coordinator and a doctoral student in the Columbia University School of Social Work. “They are sharing their lives on social media: happiness, pain, laughter, loss, and at times, aggression. What we know from our research is that context is deeply important.” This becomes particularly difficult when algorithms written by majority populations fail to properly interpret the context of posts from marginalized communities, including racial minorities, LGBT youth, and other subpopulations, he said.
That’s why SAFElab won’t create risk profiles of individuals: even the decision to scrutinize someone might be racially motivated, Frey said. Consider, for example, that the FBI received a detailed tip about the Parkland shooter more than a month before he carried out his attack, but no action was taken.
“On January 5, 2018, a person close to Nikolas Cruz contacted the FBI’s Public Access Line (PAL) tipline to report concerns about him. The caller provided information about Cruz’s gun ownership, desire to kill people, erratic behavior, and disturbing social media posts, as well as the potential of him conducting a school shooting,” the FBI said in a statement. No action was taken, even though, the bureau acknowledged, “the information provided by the caller should have been assessed as a potential threat to life.”
Black youth, by contrast, may be more vulnerable to that scrutiny, Frey said. “The NYPD’s Operation Crew Cut has led to hundreds of indictments of black youth and adults on conspiracy charges, with some indicted on their social media data alone,” Frey said. “Developing digital risk profiles often reproduces the same offline criminalization created through over-policing specific neighborhoods and communities.”