
I Built a Panic PIN Into My Photo Vault App

How I implemented plausible deniability in Inner Gallery with a secondary PIN that opens a fully functional decoy space.

TL;DR

Inner Gallery now has a Panic PIN: a second PIN code that opens a fully functional secondary space with its own encryption key. No data destruction, no suspicious behavior. Just a normal-looking app with different content.

The feature request I kept seeing

Since I launched Inner Gallery, the same request kept coming up in feedback: "What happens if someone forces me to unlock the app?"

A PIN protects against casual snooping. But if someone stands next to you and demands your code, a PIN is worthless. Your options are: refuse (suspicious), comply (your photos are exposed), or fumble with the phone trying to delete things (also suspicious).

I wanted a fourth option: comply, but with nothing to find.

What plausible deniability means in practice

The concept has existed in cryptography since TrueCrypt introduced hidden volumes over a decade ago. The idea is simple: two passwords, two sets of data, and no way to prove the second set exists.

GrapheneOS implemented a duress PIN at the OS level that wipes the device clean. Ledger hardware wallets have a similar feature for cryptocurrency. But destruction has a problem: if the attacker knows about duress PINs (and increasingly, they do), a wiped device is itself evidence.

I went a different route.

The approach: a functional decoy

Inner Gallery's Panic PIN doesn't destroy anything. It opens a fully functional secondary space with its own content. You can import photos into it, organize them, use it like the real app. The idea is that it looks completely normal.

The Panic PIN space is a real partition with its own data encryption key (DEK), not a filtered view of the main space. The two are cryptographically isolated. Even with full disk access, there's nothing linking one to the other.

From the outside, the app behaves identically whether you enter the real PIN or the Panic PIN. Same animations, same loading time, same interface. I went so far as to add a timing side-channel mitigation: a dummy PBKDF2 derivation runs when verifying the Panic PIN to normalize the verification time. Without it, the faster response for the Panic PIN versus the main PIN would be a detectable difference.

Architecture decisions

Three constraints guided the implementation:

1. Separate encryption keys. The main space uses a DEK derived from the main PIN via PBKDF2 with 600,000 iterations. The Panic PIN space has its own DEK, derived independently. Neither key can decrypt the other's content. This is the same ChaChaPoly encryption used throughout Inner Gallery, all through Apple's CryptoKit framework.

2. No metadata leakage. The encrypted indexes (spaces-index.json, media-index.json) are separate per partition. There's no master index that references both. I learned during a security audit I ran on my own code that metadata is often the weakest link: file counts, timestamps, even file sizes can reveal information.

3. The space must be genuinely usable. A decoy that contains zero photos is obviously a decoy. So the Panic PIN space supports every feature the main app does: import, favorites, sorting, batch operations, sharing with metadata stripping. Users need to populate it with plausible content.
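Constraint 1 can be sketched in a few lines. This is illustrative Python rather than the app's actual Swift/CryptoKit code, and the toy PINs, salt size, and `derive_dek` helper are assumptions for the example:

```python
import hashlib
import os

ITERATIONS = 600_000   # PBKDF2 iteration count described in the article
KEY_LEN = 32           # 256-bit data encryption key

def derive_dek(pin: str, salt: bytes) -> bytes:
    """Derive a DEK from a PIN with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, ITERATIONS, dklen=KEY_LEN)

# Each space gets its own random salt, so the two DEKs are derived
# independently: neither key (nor PIN) reveals anything about the other.
main_salt, panic_salt = os.urandom(16), os.urandom(16)
main_dek = derive_dek("1234", main_salt)    # toy PINs for illustration
panic_dek = derive_dek("9999", panic_salt)
```

Because each space carries its own random salt, the derivations stay independent even if the two PINs happened to be related.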

What I didn't build

I considered and rejected several alternatives:

Self-destruct PIN. Enter a code, everything gets wiped. Two problems: data loss is irreversible, and a blank app after entering a PIN is itself suspicious. GrapheneOS's approach makes sense at the OS level where a factory-reset phone is normal. Inside a single app, emptiness is a red flag.

Hidden app icon. Some vault apps disguise themselves as calculators or utility apps. Inner Gallery stays visible because I think honest design matters more than obscurity. The app isn't a secret; the content is.

Remote wipe. Would require a server. Inner Gallery has no server, no accounts, no cloud. That's the whole point.

The timing problem

This was the trickiest part. PBKDF2 key derivation with 600,000 iterations takes a measurable amount of time. When verifying the main PIN, the app derives the key and then attempts to decrypt the index. When verifying the Panic PIN, it matches the stored hash instantly.

That difference in response time is a side-channel. If someone watches you enter both PINs and notices one takes 200ms longer, they know which is real.

The fix: when the Panic PIN is verified, the app runs a dummy PBKDF2 derivation before responding. Same iterations, same computational cost, same delay. The user experience is identical for both PINs.

This kind of detail matters in privacy-first development. Most apps wouldn't bother. But if you're building a feature specifically for situations where someone is watching you, timing consistency is the bare minimum.

Security features that leak information through timing, power consumption, or behavioral differences aren't security features. They're security theater with a back door.

Who actually needs this

I built the Panic PIN for three scenarios:

Journalists and activists. People who carry sensitive source material and may face device searches at borders or checkpoints. The Committee to Protect Journalists recommends this kind of protection.

Domestic situations. A controlling partner demands to see your phone. This is more common than most developers think about when building privacy tools.

Border crossings. Several countries can legally compel you to unlock your device. A Panic PIN gives you a way to comply without exposing everything.

I don't know which of these applies to any given user. I don't collect analytics, so I have no idea how people use the app. That's by design. I built the feature because the need is real, and the people who need it most are the ones who can't ask for it publicly.

Lessons from building it

Plausible deniability is harder than encryption. Encrypting data is a solved problem. Making it so nobody can prove the encrypted data exists is a different challenge entirely. Every log, every timestamp, every file size difference is a potential leak.
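File size is a concrete example. A standard mitigation, shown here as a hypothetical Python sketch rather than Inner Gallery's actual code, is to pad each serialized index up to a fixed bucket size before encrypting it, so a near-empty decoy index and a heavily used main index are the same length on disk:

```python
import json

BUCKET = 4096  # pad every index to a 4 KiB boundary (illustrative bucket size)

def pad_index(index: dict) -> bytes:
    """Serialize an index and pad it to a bucket multiple (prior to encryption)."""
    raw = json.dumps(index).encode()
    framed = len(raw).to_bytes(4, "big") + raw      # length prefix for unpadding
    total = -(-len(framed) // BUCKET) * BUCKET      # ceiling to the next bucket
    return framed + b"\x00" * (total - len(framed))

def unpad_index(blob: bytes) -> dict:
    n = int.from_bytes(blob[:4], "big")
    return json.loads(blob[4:4 + n].decode())
```

With bucketing, two indexes that differ by dozens of entries still produce byte-identical lengths, removing one observable difference between partitions.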

Test with an adversarial mindset. I spent time trying to break my own implementation. Can I detect the Panic PIN space from the file system? Are there timing differences? Does the app behave differently in any observable way? Running your own security audit sounds tedious, but it's the only way to find these gaps.
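The timing question in that checklist can be checked mechanically. A rough measurement rig (hypothetical Python with a fixed salt and toy PINs, not production code) compares a naive hash-only panic path against one that burns the dummy derivation:

```python
import hashlib
import statistics
import time

ITERATIONS = 600_000
SALT = b"fixed-demo-salt"  # fixed salt and toy PINs: this is a test rig only

def slow_derive(pin: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pin, SALT, ITERATIONS, dklen=32)

def main_path():
    slow_derive(b"1234")                     # real key derivation

def panic_path_naive():
    hashlib.sha256(SALT + b"9999").digest()  # hash check only: microseconds

def panic_path_fixed():
    hashlib.sha256(SALT + b"9999").digest()
    slow_derive(b"9999")                     # dummy derivation closes the gap

def median_seconds(path, runs=3) -> float:
    """Median wall-clock time of a verification path over a few runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        path()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)
```

Timing the naive path against the main path shows the gap an observer could exploit; timing the fixed path shows the dummy derivation collapsing that gap to noise.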

The feature that matters most ships quietly. No marketing splash, no "AI-powered" branding. The Panic PIN is a toggle in settings. The people who need it will find it. The people who don't will never notice it.

What's next

The Panic PIN is available now in Inner Gallery. It's part of the premium tier (one-time purchase), because building and maintaining security features takes real effort.

I'm exploring on-device object recognition for auto-tagging, as I mentioned in my AI integration piece. But privacy features like the Panic PIN will always take priority over convenience features. That's the trade-off of building a privacy-first product.

Discover Inner Gallery

Tags: panic PIN, plausible deniability, iOS privacy, photo vault, encryption, Inner Gallery, duress code, mobile security