How KOSA’s ‘Parental Tools’ Mandate Will Almost Certainly Lead To Abuse
Techdirt. 2022-12-06
There is a serious problem in the way many tech-focused bills are drafted these days. Whether it’s a lack of trust or simply a desire to punish, those working on tech bills are not talking to the right industry people about how things actually work in practice. This leads to simple mistakes: requiring something that seems like a good idea but runs counter to how systems are designed and how they function. When these mistakes are bad enough, they can result in serious security and safety problems.
I previously wrote a post explaining how issues in the drafting of the Kids Online Safety Act will likely result in harm to LGBTQ and other communities if State Attorneys General seek to exploit it for political purposes (something already being encouraged). In going back through the text, I found another significant problem that could impact the security and safety of users. There is a section that requires covered platforms to develop parental tools that would apply to minors’ accounts, and to enable those tools by default if the platform believes a user is a child. The problem is that little thought seems to have been given to how this would actually be implemented.
This might make sense if we were talking about devices. The assumption is that a parent buys the device and assists in the setup. The device can offer the option to parents at that time and explain the process in the manual. If someone who is not actually a minor is being unfairly restricted by the device, they most likely can buy their own device. It may also make sense for most paid services, like Netflix or Amazon Prime. There, the subscriber can set up a primary account with controls over sub-accounts. Again, an unfairly restricted person could hopefully get a separate primary account.
However, there are a lot of services where there is no initial contact point with the presumptive parent. Email, for example, is generally a free service that anyone can sign up for. People can then sign up for many other services for free using no more than an email address. Most social media platforms are free, as are most messaging applications and community forums. Most video game distribution platforms, like Epic Games, are free, and many video games, like Fortnite, run on a freemium model. This creates a lot of problems. How do these platforms identify parents and offer them parental tools? How will minors’ accounts be flagged? How will those reaching the age of majority be able to end parental control? How will platforms prevent abuse of these tools by bad actors? How will they retroactively activate parental tools on existing accounts?
Parental tools are by necessity something that enables one user to control another user. When necessary, that control has to work even against the wishes of the minor user. It also often comes with some degree of surveillance, so that the parent knows when the minor may be in trouble. The parental tools mandated in KOSA fit this description. The tools have to allow the parent to control privacy/account settings, restrict purchases, track time on the platform, and use other “control options that allow parents to address the harms described in section 3(b).” This last requirement likely has to include some surveillance, because the harms described in section 3(b) are things like bullying, harassment, and sexual exploitation that are most likely to occur in communications, including private communications.
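To make the scope of that power concrete, here is a minimal sketch, in TypeScript, of what the control relationship the bill seems to require might look like inside a platform. This is purely illustrative: the capability names mirror the bill’s requirements, but every field and type name here is my own assumption, not anything specified by KOSA or any real platform.

```typescript
// Hypothetical data model for the control relationship KOSA appears to mandate.
// Capability names track the bill's requirements; everything else is assumed.
interface ParentalLink {
  parentAccountId: string;  // the account claiming parental status
  minorAccountId: string;   // the account being controlled
  // How the platform decided this person is actually a custodial parent --
  // the step the bill leaves unanswered.
  verification: "self-attested" | "device-setup" | "documents-submitted";
  permissions: {
    editPrivacyAndAccountSettings: boolean; // "control privacy/account settings"
    restrictPurchases: boolean;             // purchase restrictions
    viewTimeOnPlatform: boolean;            // time tracking
    monitorCommunications: boolean;         // surveillance implied by section 3(b) harms
  };
  enabledByDefault: boolean; // required when the platform believes the user is a child
}
```

Even this toy model makes the core problem visible: the `verification` field has to hold some value, and nothing in the bill says what should count.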
Needless to say, access to these parental tools comes with a good deal of power over accounts that may last a lifetime and become very important to their users. This puts covered platforms in a difficult situation, because they are required to do something that, if done incorrectly, puts people at risk. Here are some plausible scenarios that could cause problems:
- A minor’s parents are divorced. One is abusive and has lost custody rights. The abusive parent requests or otherwise obtains parental control over the minor’s account.
- A trans teen leaves an unsupportive home at the age of majority. The abusive parents submit a claim to the company stating the teen is still a minor and requesting control over their account.
- A custodial parent’s abusive ex submits a claim to a platform that they are a custodial step-parent of a minor and requests control as a way to continue abusing either the parent or the minor.
- An abusive ex (or any other kind of creep) fabricates information that a user is a minor and claims to be a custodial parent, asking for control over the account as a means to stalk the user.
- A bad actor, such as a groomer or a hacker, improperly gains control of an account through the parental tools and uses that control to advance their nefarious activities.
Each of these scenarios creates a tough situation for companies. How do they comply with the law when the information needed to do so (i.e. who actually has custody rights) is difficult to obtain, and sorting through these issues is time-intensive and expensive? Platforms rarely require proof of identity and age, and even setting aside the privacy concerns, requiring platforms to gather this information still doesn’t answer the question of custody rights.
One answer might be to just create the tools and put them in an account setting, so that a knowledgeable parent could log on with their child and set everything up, but otherwise they would go unused. But the law seems to prevent that. First, there is a requirement that the parental tools be enabled by default if the platform reasonably believes a user is a child. Second, there is a no-dark-patterns section that would probably apply, because the requirement is for “readily-accessible and easy-to-use” parental tools. The law appears to require either an affirmative offer or a way of gaining parental control over a minor’s account even against the minor’s wishes. For example, maybe a parent wants to restrict a minor’s Fortnite time, but the minor refuses to allow the parent access to their account, logging out whenever the parent is around or playing at a friend’s house instead.
The problem all comes down to a simple question: how are platforms supposed to offer the parental tools? Everything flows from that. Let’s assume an account gets flagged as a child user, and the most restrictive required parental tools are enabled by default. What happens next? Is the account locked until a parent is present? How does the platform know that it’s a parent? How does the platform know it’s a parent with custody rights? Will the “parent” have to walk through a tool setup before the account works again? How will mistakenly flagged accounts be cleared? Will they have to submit a driver’s license? Pay stub? Mail? How will the platform know those documents aren’t forged? Which way does the platform need to err in order to be reasonable under the law? In favor of enabling control over users to prevent harm to children, or in favor of not allowing control to prevent harm by bad actors? And do the tools let the parent log in to the child’s full account, and therefore have access to everything, or is it a separate login that displays only the parental tools and thereby allows the child some degree of privacy?
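That last question is a real architectural fork, and it is worth seeing how different the two answers are. As a purely illustrative sketch (again in TypeScript, with names I have invented for this example, not drawn from the bill or any actual platform), the two designs might look like this:

```typescript
// Option A: the parent logs in to the child's account itself and inherits
// everything the child can see -- messages, history, contacts.
type FullAccountAccess = {
  kind: "full-account-login";
  parentAccountId: string;
  minorAccountId: string;
};

// Option B: the parent gets a separate, scoped login that exposes only the
// mandated controls, leaving the child's content otherwise private.
type ParentalDashboardAccess = {
  kind: "parental-dashboard-only";
  parentAccountId: string;
  minorAccountId: string;
  visibleControls: Array<"privacy-settings" | "purchases" | "screen-time">;
};

type ParentalAccessGrant = FullAccountAccess | ParentalDashboardAccess;
```

Option A is simpler to build but hands an abusive claimant everything at once; Option B preserves some privacy for the minor but still depends on answering the verification questions above correctly.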
Offering parental tools creates a cascade of tough questions that need to be thoroughly thought through so that such systems can be safely designed. Some covered platforms are structured in a way that makes this relatively easy. For others it may be impossible. KOSA is completely blind to that, and seems to be drafted under the assumption that this is all quite doable by a wide range of internet-connected platforms and services. KOSA is well-intentioned, but it’s simply not drafted with these security and safety concerns in mind. It shouldn’t pass.
Matthew Lane is a Senior Director at InSight Public Affairs.