There’s a math theorem that I rather like because I think it applies to so many situations. I feel this way about that quote from Frankenstein, “And now, once again, I bid my hideous progeny go forth and prosper… blahblah;” I used that repeatedly in school essays, on entrance exams, in acceptance speeches. Some things make little sense to you in situ but come to mean much more when you apply them to other scenarios. The Good Regulator Theorem says that every good regulator of a system must be a model of the system it regulates, and if the model is not a performant echo, then the system is weak, unregulated, and open to compromise. In some ways, I feel passwords are “good regulators,” things that model what they manage, because they protect memory (stores of information that you might like to keep private), and in a meta way, they rely on your memory to ensure their utility.
We often write weak passwords because we have weak memories. So then we write frameworks around them that weaken their ability to perform, their ability to echo the system they model, and thus we introduce our human weakness into an already crippled model of protection. We “salt” and “hash” our passwords, but we are still distant from a happy breakfast, or a happy progeny, a product of our genius and not simply an echo of our faults. So what can be done about passwords? What can be done about the memory they protect? How does the weakness of passwords, and of “good regulation,” affect the bio-politics of our contemporary world?
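To ground the metaphor: salting and hashing are real mechanisms, and a sketch makes the point about echoes concrete. Here is a minimal illustration in Python using the standard library’s PBKDF2; the function names and iteration count are my own for illustration, not any particular site’s implementation:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor, not a recommendation for any given service


def hash_password(password, salt=None):
    """Return (salt, digest) for a password, using a random per-password salt."""
    if salt is None:
        salt = os.urandom(16)  # a fresh salt means identical passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password, salt, digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The salt is the interesting part: because each password gets its own random salt, two users with the same weak password still produce different stored digests, so the store is not a plain echo of its users’ habits.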
If you forget your passwords often, you’ve probably designed a mnemonic to jog your memory. Countless resources exist to train you toward more complex yet memorable passphrase design, ensuring the persistent protection of those memory stores you would like to keep personally accessible.
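One common passphrase technique is diceware-style random word selection: a handful of common words is both memorable and, with a long enough list, hard to guess. A minimal sketch, assuming a tiny illustrative word list (a real diceware list has roughly 7,776 entries):

```python
import secrets

# Small illustrative word list; a real diceware list is thousands of words long.
WORDS = ["orbit", "velvet", "cascade", "lantern", "granite", "willow", "ember", "pepper"]


def passphrase(n_words=4, sep="-"):
    """Pick words with a cryptographically secure RNG and join them."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))
```

The design choice is that `secrets` draws from the OS’s secure random source, so the phrase’s strength depends only on the list size and word count, not on human habits.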
And then there are alternative approaches to passwords: physical keys and dongles; password-less solutions that rely on other-app verifications, captchas, 2FA, and virtual token generators; and biometric identifiers that tap physical characteristics to provide unique and highly personal identifiers.
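Those “virtual token generators” are less mysterious than they look: most implement TOTP (RFC 6238), deriving a short code from a shared secret and the current time. A minimal standard-library sketch of the SHA-1 variant:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, digits=6, interval=30, now=None):
    """Derive a time-based one-time code per RFC 6238 (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // interval)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)
```

Note that the code is only as secret as the shared key: the generator, too, is a model of the system it regulates, and the same dependency argument below applies to it.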
“A password manager will take a load off your mind, freeing up brain power for doing productive things rather than remembering a long list of passwords.” ~ How to Geek
But let’s keep in mind that your objective is probably to protect memory, and maybe your “brain power” is best exercised in protecting what is important to you. Introducing intermediaries into your memory process weakens its security, providing other break points in the memory schema, joints of vulnerability that you might not be able to control.
To that end, every few weeks an announcement is made about compromised sites, mass leaks of private information that could make your data and accounts vulnerable. This week was no different, with reports of insecurity surrounding 1Password. While you might question the validity of these exposés, as they’ve prematurely tarnished the reputation of still-reputable services, the greater point is that the more dependencies, intermediaries, and frameworks you introduce that separate you from controlling your own system, the more you weaken its regulation.
“Good system security involves realistic evaluation of the risks not only of deliberate attacks but also of casual authorized access and accidental disclosure.” ~ Password Security: A Case History
In this case, using a password surrogate can introduce more vulnerability into your system of security, and if you haven’t modeled for that, or don’t understand how it operates well enough to consider its weaknesses, then you’re welcoming insecurity into your system, inhibiting your regulation of its operation, and providing for its collapse. Concern for the security of things you hold precious should not only be the consequence of identity theft; it’s important to understand points of digital vulnerability as well as you consider points of physical vulnerability, and while you might not invest in a deep understanding of technology security, there are tools for developing a subtle understanding of operational security.
Otherwise in the news, and of relational note, is another package of “private” information we’ve outsourced to a surrogate, one that now risks compromise as well. A few weeks ago, news broke that Ancestry.com and 23andMe will “cooperate with law enforcement” to provide your personal bio/genealogy info in cases where such a disclosure serves the courts. Kind of a bummer for those of us who entrusted our personal info to companies just to crunch the numbers for us and offer pathetic but new hypotheses about who we are, where we come from, and what we might suffer from.
For those not in the know, Ancestry allows subscribers some organizational scope on genealogy information and the ability to connect with potential distant relatives; 23andMe collects samples of spit from subscribers, who gain a graphical interface to their health risks, projected family ancestry, and prospective relatives based on a limited read of mitochondrial DNA extracted from the spittle. Now that both are open to law-enforcement read, your personal info might be used against you, in a court of law. The risk here being that this “science” is all a bit fuzzy, and you might possibly resemble someone who is much closer to the crime index than yourself. The laws about whether deleting your account will absolve you from these presumptions and scrutiny have yet to be fully defined. There are other more insidious complications as well. If you sought a better genetic profile to provide a diagnosis that the health-care system will not allow you to afford, or if you’ve lost someone to tragic circumstance and are looking to reconnect with your family, then the vulnerability of your income limitations and your longing for a loved one could put you at risk. Likewise, sites like African Ancestry profile for a particular racial group, and if targeted by law enforcement, their information is all the more vulnerable.
The refrain of this news might be “trust no intermediary,” or “no protective intermediary for your personal data.” One thing you can trust with consistency is that any system designed by humans is bound to beget, in its progeny, its own failures.
If I could write a lemma to the Good Regulator Theorem, I might say something like “a feeble regulator is one infused with the weaknesses of its creators.” In the case of passwords and personal-info analysis, both theorem and lemma apply.