By Jason Lim
And the answer was yes.
"In the weeks and months before an attack, many active shooters engage in behavior that may signal impeding violence. While some of this behavior is intentionally concealed, other actions are observable and ― if recognized and reported ― may lead to a disruption prior to an attack." But…
"Unfortunately, well-meaning bystanders (often friends and family members of the active shooter) may struggle to appropriately categorize the observed behavior as malevolent. They may even resist taking action to report for fear of erroneously labeling a friend or family member as potential killer. Once reported to law enforcement, those in authority may also struggle to decide how best to assess and intervene, particularly if no crime has been committed."
But what if their smartphones could observe and report such behavior? Modern smartphones already have the capability to capture usage behavior that can validate your identity with a high degree of confidence. For example, my smartphone "knows" that I am the one using it based on how I use it. If my kid were to pick it up, the phone would know that someone else was using it. This is just the tip of the iceberg. In fact, there is a whole field called behavioral analytics devoted to that iceberg.
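To make the idea concrete, here is a minimal sketch of how a device might compare a session's behavioral "fingerprint" against its owner's baseline. The feature names, baseline numbers and threshold below are invented for illustration; real behavioral biometrics use far richer signals and learned models, not a hand-tuned rule like this.

```python
# Toy sketch of behavioral identity verification. The features, baseline
# values and threshold are hypothetical, purely for illustration.
from math import sqrt

# Hypothetical owner baseline: (mean, standard deviation) for each feature
# (inter-keystroke interval in ms, swipe speed in px/s, daily unlock count).
BASELINE = {
    "keystroke_ms": (210.0, 25.0),
    "swipe_px_s":   (950.0, 120.0),
    "unlocks_day":  (80.0, 15.0),
}

def anomaly_score(session: dict) -> float:
    """Root-mean-square of per-feature z-scores against the owner's baseline."""
    zs = [(session[name] - mu) / sigma for name, (mu, sigma) in BASELINE.items()]
    return sqrt(sum(z * z for z in zs) / len(zs))

def looks_like_owner(session: dict, threshold: float = 2.0) -> bool:
    """Treat behavior within roughly two standard deviations as the owner's."""
    return anomaly_score(session) < threshold

# The owner typing at their usual cadence, then a child playing a game.
print(looks_like_owner({"keystroke_ms": 205, "swipe_px_s": 930, "unlocks_day": 78}))
print(looks_like_owner({"keystroke_ms": 420, "swipe_px_s": 1500, "unlocks_day": 30}))
```

Even in this toy form, the logic is just "how far is today's behavior from what this person normally does" ― which is exactly the kind of judgment the devices in question would be making continuously.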
Wearables give these devices even more personalized inputs to profile your "habits," not only when you use them but throughout the day, even when you're not carrying them. Most of this behavior is subconscious, meaning you can't consciously control it in order to deceive or to hide who you are or what you're doing.
Here comes the trolley problem based on this potential use case. What happens when Apple, Samsung, Google, or another phone manufacturer notices that one of its users is exhibiting malevolent behavior ― as described in the FBI report ― that indicates a strong potential for an active shooter attack? Is the manufacturer obligated to report it to the authorities?
Take the phone manufacturers out of the equation. What if the devices themselves could categorize your behavior as showing malevolent intent and ping local authorities accordingly? Because these are personal devices engaged in persistent observation, spotting a deviation from a baseline behavioral profile would be far more accurate than the occasional check-in by a casually interested third party.
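As a thought experiment only, here is what "deviation from a baseline" could look like in code, using an invented wearable signal (nightly hours of sleep) and arbitrary window, threshold and run-length values. Whether any such deviation actually signals malevolent intent ― rather than a new baby, a night shift, or grief ― is, of course, the whole problem.

```python
# Toy sketch of baseline-deviation monitoring: flag a sustained shift in a
# daily behavioral signal rather than a single odd day. Every signal, window
# and threshold here is invented for illustration.
from statistics import mean, stdev

def sustained_deviation(daily_values, window=14, threshold=3.0, run_length=3):
    """True if the last `run_length` days all deviate from the rolling baseline
    (the `window` days before them) by more than `threshold` standard deviations."""
    if len(daily_values) < window + run_length:
        return False  # not enough history to establish a baseline
    baseline = daily_values[-(window + run_length):-run_length]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    return all(abs(v - mu) / sigma > threshold for v in daily_values[-run_length:])

# Hypothetical signal: hours of sleep per night inferred from a wearable.
history = [7.2, 6.9, 7.4, 7.1, 7.0, 6.8, 7.3, 7.2, 7.1, 6.9,
           7.0, 7.2, 7.3, 7.1,   # two weeks of ordinary nights
           3.5, 3.8, 3.2]        # three sharply different nights
print(sustained_deviation(history))  # True: behavior has clearly shifted
```

The device can say the pattern changed; it cannot say why. That gap between detection and interpretation is where the ethical weight of the question sits.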
On the one hand, this is certainly a privacy invasion. Without consent or notice, the personal device ― the one you paid for ― is notifying a third party about your personal state because of a crime you might commit. On the other hand, that same device might be saving the lives of many innocent people from your actions.
This is infinitely more difficult than the original trolley problem, where you only had to choose between two certainties: either your child or five strangers gets run over by the runaway trolley. Here, the choice is between public safety and the individual's right to privacy. It's so difficult because it's not an absolute choice; it rests on the probabilities of outcomes that are inherently organic and unpredictable. And it has only become a choice recently because technology has made it possible.
A.I. Now, "a research institute examining the social implications of artificial intelligence," summarizes the problem space in the following way: "As artificial intelligence and related technologies are used to make determinations and predictions in high stakes domains such as criminal justice, law enforcement, housing, hiring, and education, they have the potential to impact basic rights and liberties in profound ways."
Let me slightly tweak that problem statement, this time from a public safety and process fairness perspective: "As artificial intelligence and related technologies are used to make determinations and predictions in high stakes domains such as criminal justice, law enforcement, housing, hiring, and education, they have the potential to enhance public safety, security, inclusion, and fairness in profound ways."
Same statement. Different tone. Ultimately, the answer is "It depends." It depends on the implementation. That's what makes this trolley problem so difficult.
Speaking at the Brookings Institution on Dec. 6, Microsoft President Brad Smith called on Congress to regulate facial recognition technology to address its risks, including potential bias, violations of privacy and uses that could diminish democratic freedoms and human rights. "We should be thoughtful here because facial recognition technology will be important in addressing public safety," he said. "But we need to strike a balance … between safety and our democratic freedoms."
And that's the key word: balance.
But where would we draw that balance? More importantly, how would we draw it? In this case, "how" matters much more than "where." Ultimately, though, it has to be "our" balance, since any balance is contextual to the culture and the times.
Jason Lim (jasonlim@msn.com) is a Washington, D.C.-based expert on innovation, leadership and organizational culture.