Google’s Android Lockdown: Are You Really in Control of Your Phone?


On August 25, Google announced that Android, its mobile operating system, will require all app developers to verify their identity with the company before their apps can run on “certified Android devices.”

While this might sound like a common-sense policy, the new standard will not just apply to apps downloaded from the Google Play Store, but to all apps, even those that are “sideloaded,” meaning installed directly onto devices by sidestepping the Play Store entirely. Such apps can be found online in GitHub repositories or on project websites and installed on Android devices directly by downloading their installation files (known as APKs).

What this means is that if there is an application Google does not like, whether because it does not conform to the company’s policies, politics or economic incentives, Google can simply keep you from running that application on your own device. The company is locking Android devices down from running applications outside its purview. The ask? All developers, whether submitting their apps through the Play Store or not, must hand their personal information over to Google.

The decision raises the question: if you cannot run whatever app you want on your device without Google’s permission, is it really your device? How would you respond if Microsoft decided you could only install Windows programs from the Microsoft Store?

The move has, of course, made news in tech and cybersecurity media and caused quite a stir, as it has profound consequences for the free and open web. For years, Android has been touted as an open source operating system, and through this strategy it has gained massive distribution throughout the world, including users in developing countries where Apple’s “walled garden” model and luxury devices are not affordable.

The new policy will tighten controls over applications and their developers, and it threatens the freedom to run whatever software you like on your own device in a subversive, legalistic way. Because of Google’s influence over the Android ecosystem, the consequences of this policy are likely to be felt by the majority of users and devices throughout the world.

Google justifies the policy change with concerns about the cybersecurity of its users. Malicious apps sideloaded onto devices have led to “over 50 times more malware,” Google claims in its announcement blog. As a measure of “accountability,” and with the counsel of various governments throughout the world, it has decided to take a “balanced approach,” and the language couldn’t be more Orwellian.

“Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety” – Benjamin Franklin

Put in simpler terms, Google is looking to collect the personal information of software developers, centralizing it in its data centers alongside that of all of its users, in order to “protect” users from hackers that Google can’t seem to stop in the first place.

After all, if Google and Android could actually keep personal user data secure in the first place, this would not be a problem, right?

Google’s solution to user data leaks is, ironically enough, to collect more user data, in this case the data of developers who build for the Android platform. It is a remarkable leap of logic, lazy and fundamentally decadent, a sign that the company has lost its edge and arguably forgotten its now-scrubbed “don’t be evil” motto.

Information Wants To Be Free

The reality is that Google finds itself trapped by a dilemma set up by the nature of information in the digital age. To quote Stewart Brand, a favorite of the 90s cypherpunks: “information wants to be free.”

Every hop that personal data, like your name, face, home address or social security number, makes across the internet is an opportunity for it to be copied and leaked. As your information moves from your phone to a server in your city to another server in a Google data center, every hop increases the likelihood that it gets hacked and ends up for sale on the dark web. This is a thorny problem when user data is the primary business model of a giant like Google, which processes it and sells it to advertisers who in turn create targeted ads.

We can gauge the truth of Brand’s principle by looking at two fascinating statistics, which, oddly enough, not too many people seem to talk about. The first is the absurd number of data breaches that have taken place in the last 20 years. For example, the Equifax breach in 2017 affected 147 million Americans, and the National Public Data breach of 2024 affected over 200 million Americans, leaking data including social security numbers that likely ended up for sale on the dark web.

Meanwhile, legendary hacks like the 2015 breach of the U.S. Office of Personnel Management compromised a large share of U.S. government officials at the time, exposing everything from social security numbers to medical records.

It’s not an exaggeration to say that a majority of Americans have had their data hacked and leaked already, and there’s no easy way to reverse that. How does one change their face, medical history or social security number after all?

The second statistic, which no one seems to connect to the first, is the rise of identity theft and fraud in the United States. Did you know that in 2012, $24.7 billion worth of identity theft was reported? Nearly twice the losses from household burglary, motor vehicle theft, and property theft combined that same year. Business Insider, reporting on Bureau of Justice Statistics figures, noted that “identity theft cost Americans $24.7 billion in 2012, losses for household burglary, motor vehicle theft, and property theft totaled just $14 billion.” Eight years later the figure had more than doubled, costing Americans $56 billion in losses in 2020. Both of these trends continue to grow to this day. It may indeed already be too late for the old identity system we still rely on so heavily.

Generative AI adds fuel to the fire, in some cases trained on leaked user data, with image models able to create high-quality pictures of humans holding fake IDs. As AI continues to improve, it is increasingly capable of fooling humans into believing they are talking to another person rather than a bot, creating new attack vectors for identity fraud and theft.

Nevertheless, Google insists that if we just collect a bit more personal data, maybe the problem will go away. Convenient for a corporation whose main business model is the collection and sale of such data. Has any other corporation done more damage to civilian privacy than Google? Facebook, I suppose.

In Cryptography We Trust

Now, to be fair to the 2000s Web2 tech giants, the problem of secure identity in the digital age is not easy to solve. The legal structures our societies built around identity were created long before the internet emerged and moved all that data to the cloud. The only real solution now is cryptography, applied to the trust that humans build in their real-world relationships over time.

The 90s cypherpunks understood this, which is why they developed two important technologies: PGP and webs of trust.

PGP

PGP, invented in 1991 by Phil Zimmermann, pioneered the use of asymmetric cryptography to solve this fundamental problem: protecting user data privacy while also enabling secure authentication, identification and communication.

How? It’s simple, actually: by using cryptography much as Bitcoin does today to secure over a trillion dollars of value. You have a secure ‘password’ (a private key) and keep it as secret as possible; you don’t share it with anybody, and your apps use it carefully to unlock services, but it never leaves your phone. We can do this, it works, and there is even custom-made hardware designed to lock down precisely this kind of information. The person or company you want to connect with also creates a secure ‘password’, and from that password each of you generates a public address, a pseudonymous digital ID.

The company encrypts a message using its own ‘password’ and your public address, and sends it to you. Thanks to the magic of cryptography, you can decrypt that message with your ‘password’ and the company’s public address. That is all we need to secure the web. These public IDs do not have to reveal any information about you, and you could have one for every brand or identity you maintain online.
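The core idea, that two parties can derive a shared secret from their own private ‘password’ and the other’s public address, without either private key ever leaving the device, can be sketched with a toy Diffie-Hellman exchange. This is an illustration of the principle only, not real-world cryptography; production systems use vetted schemes like X25519 rather than hand-rolled code.

```python
import secrets

# Toy Diffie-Hellman sketch: each side keeps a secret 'password'
# (private key) that never leaves the device, publishes a public
# address derived from it, and both sides independently derive the
# same shared secret to encrypt messages with.
# Prime from RFC 3526 group 14; for illustration only.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
    "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
    "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
    "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9"
    "DE2BCBF6955817183995497CEA956AE515D2261898FA0510"
    "15728E5A8AACAA68FFFFFFFFFFFFFFFF", 16)
G = 2

def keypair():
    priv = secrets.randbelow(P - 2) + 1   # secret: stays on the device
    pub = pow(G, priv, P)                 # public address: safe to share
    return priv, pub

user_priv, user_pub = keypair()
company_priv, company_pub = keypair()

# Each side combines its own secret with the other's public address...
user_shared = pow(company_pub, user_priv, P)
company_shared = pow(user_pub, company_priv, P)

# ...and both arrive at the same shared secret, usable as an encryption key,
# even though neither private key was ever transmitted.
assert user_shared == company_shared
```

An eavesdropper who sees only the two public addresses cannot feasibly recover the shared secret; that asymmetry is what PGP-style systems build on.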

Webs Of Trust

But there is also the question of reputation: how do you know that the company you are trying to connect with is who it claims to be? In cybersecurity, impersonation by a malicious third party that sits between you and whoever you actually wish to reach is called a man-in-the-middle attack.

The way the cypherpunks solved this problem in the 90s was by developing the concept of webs of trust, built through real-world ceremonies called ‘signing parties’.

When we meet in person, we decide that we trust each other, or affirm that we already know and trust each other enough to co-sign each other’s public IDs. We give each other a cryptographic vote of confidence, so to speak, weighted by our brand or publicly known nym. It is similar to following someone on a public forum like X.com: the PGP equivalent of saying ‘I’ve met Bob, I recognize XYZ as his public ID, and I vouch that he is real’.

While this sounds tedious, antiquated and unlikely ever to scale to the whole world, technology has advanced a great deal since the 90s; in fact, this fundamental logic is roughly how the internet is secured today.

Remember the green lock that used to be displayed on every website? That was a PGP-like cryptographic handshake between your computer and the website you were visiting, signed off by a ‘certificate authority’, a third party out on the internet. Those certificate authorities became centralized custodians of public trust and, like many other institutions today, probably need to be decentralized.

The same logic can be applied to the verification and authentication of APKs by scaling up webs of trust. In fact, in the open source world, software is hashed into a unique ID derived from its contents, and that hash is signed with the developer’s PGP key to this day. The software hashes, PGP public IDs and signatures are all published alongside the software for people to review and verify.
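The hashing half of that workflow is easy to demonstrate. The sketch below computes the SHA-256 hash of a downloaded file and compares it against a published value; the file contents and the `sha256_of` helper are illustrative stand-ins, and in practice the published hash would itself be PGP-signed by the developer.

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Hash a file in chunks, as one would for a large APK."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a stand-in file; in practice you would hash the downloaded
# APK and compare against the hash the developer signed and published.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"fake apk bytes")
    path = f.name

published = hashlib.sha256(b"fake apk bytes").hexdigest()
assert sha256_of(path) == published   # any tampering changes the hash
os.unlink(path)
```

If even one byte of the download differs from what the developer released, the hashes will not match, which is why the hash, once authenticated, pins the software’s identity.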

However, if you don’t know whether a PGP public ID is authentic, the signature is not useful, since it could have been created by an impersonator online. So, as users, we need a link that authenticates that public ID as belonging to the app’s real-world developer.

The good news is that this problem can probably be solved without having to create a global surveillance state giving all our data to the Googles of the world.

For example, if I wanted to download an app from a developer in Eastern Europe, I likely won’t know him or be able to verify his public ID, but perhaps I know someone who vouched for someone who knows this developer. While I may be three or four hops away from this person, the likelihood that they are real suddenly goes up a lot. Faking three or four hops of connection in a web of trust is very expensive for mercenary hackers looking to score a quick win.

Unfortunately, these technologies have not been widely adopted beyond the high-tech paranoid world, nor have they received as much funding as the data-mining business model that powers most of the web.

Modern Solutions

Some modern software projects recognize this logic and are working to solve the problems at hand, making it easy for users to leverage and scale cryptographic webs of trust. Zapstore.dev, for example, is building an alternative app store secured by cryptographic webs of trust using Bitcoin-compatible cryptography. The project is funded by OpenSats, a nonprofit that funds open source Bitcoin-related software development.

GrapheneOS, an Android fork that has become popular among cybersecurity enthusiasts, has also implemented an alternative app store that addresses many of these issues without doxxing app developers, and serves as a high-security operating system aimed at solving many of the privacy and security problems in Android today.

Far-fetched as it may seem, cryptographic authentication of communication channels and digital identities is the only thing that can protect us from personal data hacks. Entropy, and the security that cryptography builds from randomness, is the only thing AI cannot fake. That same cryptography can help us authenticate ourselves in the digital age without having to share our personal data with every intermediary out there, if we use it right.

Whether Android’s new policy is sustained, or whether enough public outcry can stop it and better solutions get popularized and adopted, remains to be seen. But the truth of the matter is clear: there is a better way forward, we just have to see it and choose it.

This post Google’s Android Lockdown: Are You Really in Control of Your Phone? first appeared on Bitcoin Magazine and is written by Juan Galt.


